Will this formula work every single time?
How Archimedes proved it
Here is how Archimedes proved it. Draw any circle (we'll call it the purple circle). Mark a point anywhere on its circumference. Use that point as the center of a blue circle with the same radius as the purple circle. The edge of the blue circle should touch the center of the purple circle.
Make two equilateral triangles
Draw the line segment connecting the centers of the two circles. That's the radius of both circles. Now draw the lines connecting the center of the blue circle to the two points where it crosses the purple circle, and complete the triangles. You should have two equilateral triangles whose sides are equal to the radius of the purple circle.
Make a hexagon
Now extend all of the radius lines so they become diameter lines, all the way across the circle, and finish drawing all of the triangles to connect them. You’ve got six equilateral triangles now, that make an orange hexagon. So the perimeter of your hexagon is the same as six times the radius of your circle.
(Now pi comes into it)
But your circumference is a little bigger than the perimeter of your hexagon, because the shortest distance between two points is always a straight line. This shows you that the circumference of the purple circle has to be more than 6r, so if C=2πr then π (pi) has to be a little bigger than 3, which it is.
Drawing a dodecagon
Now let's try to get a little closer to the real value of π. What if we make our triangles narrower, so that instead of drawing a hexagon, we draw a dodecagon – a shape with twelve sides? We can do that by adding more points halfway between the points of the hexagon.
If we do that, we'll see that the perimeter of the dodecagon is a little bigger than the hexagon's, and closer to being a circle. We have twelve congruent isosceles triangles. Each triangle has two sides that are as long as the radius of the circle, and a third side whose length we need to know in order to figure out the perimeter of the dodecagon.
Perimeter of a dodecagon
It's pretty hard to figure out the perimeter of the dodecagon, but here is one way. Suppose the radius of the purple circle is 4. Start by drawing a red line from the center of the purple circle (C) to one of the points of the dodecagon (B).
We also know that the line AD is the same as half the radius of the circle (remember the hexagon was made of equilateral triangles whose sides equal the radius), so AD = 2. Because the bluish triangle is a right triangle, we can use the Pythagorean Theorem to tell us that the third side, CD, is the square root of 12 (since 4² = 2² + CD², CD² must be 16 - 4 = 12).
Or you can get the same proof from this video
Stay with me…
We also know that the red line CB is the same as the radius of the circle, so that's also 4. So the distance from point B to point D must be the radius (4) minus the length of CD (the square root of 12). BD = 4 minus the square root of 12, which is about 0.54. Now look at the little orange triangle ABD. That's also a right triangle, and now we know two of its sides: AD = 2, and BD is about 0.54.
We can use the Pythagorean Theorem again to calculate that the green line AB must be about 2.07, so the whole perimeter is 12 x 2.07 = 24.84. The circumference of the purple circle has to be more than 24.84, or more than 6.21r. If C = 2πr, then π has to be a little bigger than 3.1.
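If you'd like to check this arithmetic, here is a minimal Python sketch of the same construction. It is not from Archimedes, of course: the function name and loop length are just illustrative choices. It starts with the hexagon (six sides, each equal to the radius of 4 used above) and repeats the half-side and Pythagorean steps described above, doubling the number of sides each time.

```python
import math

def double_sides(radius, side, sides):
    """Given a regular polygon inscribed in a circle, return the side length
    and side count after doubling the number of sides, using the AD/CD/BD
    construction described above."""
    ad = side / 2                        # half of the old side (the line AD)
    cd = math.sqrt(radius**2 - ad**2)    # Pythagorean Theorem on the bluish triangle
    bd = radius - cd                     # the short leftover piece of the radius (BD)
    new_side = math.sqrt(ad**2 + bd**2)  # Pythagorean Theorem on the little triangle ABD
    return new_side, sides * 2

radius = 4.0
side, sides = radius, 6                  # the hexagon: six sides, each equal to the radius
for _ in range(5):                       # 12, 24, 48, 96, 192 sides
    side, sides = double_sides(radius, side, sides)
    perimeter = side * sides
    print(f"{sides:>3} sides: perimeter {perimeter:.2f}, so pi > {perimeter / (2 * radius):.5f}")
```

The first line it prints reproduces the dodecagon result (a perimeter of about 24.84, so π is a little bigger than 3.1), and by 96 sides the lower bound is already very close to Archimedes' figure.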
The more sides, the closer to pi
The more sides we draw on our polygon, the closer we will get to the real value of pi (3.14159 etc.). Using a polygon with 96 sides, Archimedes was able to calculate that π was a little bigger than 3.1408, which is pretty close. |
Learner reviews and feedback for Visualizing Data in the Tidyverse by Johns Hopkins University
Data visualization is a critical part of any data science project. Once data have been imported and wrangled into place, visualizing your data can help you get a handle on what’s going on in the data set. Similarly, once you’ve completed your analysis and are ready to present your findings, data visualizations are a highly effective way to communicate your results to others. In this course we will cover what data visualization is and define some of the basic types of data visualizations.
In this course you will learn about the ggplot2 R package, a powerful set of tools for making stunning data graphics that has become the industry standard. You will learn about different types of plots, how to construct effective plots, and what makes for a successful or unsuccessful visualization.
In this specialization we assume familiarity with the R programming language. If you are not yet familiar with R, we suggest you first complete R Programming before returning to complete this course.... |
At Math Playground, simple to complex math problems are cleverly disguised as puzzles, kid-focused word problems, and concepts illustrated with graphics. Not all of the content is in game format, however -- drill and practice computations are plentiful, and video tutorials walk kids through math problems with voice-over and whiteboard-type demonstrations.
Educational resources, thinking games, learning activities, and teaching ideas for math educators.

Monday, January 14: Thinking Blocks and the Common Core

Thinking Blocks is an online problem solving tool that enables students to build physical models of math word problems.
Using brightly colored blocks, students represent mathematical relationships and identify known and unknown quantities. The model provides students with a powerful image that organizes information and simplifies the problem solving process. By modeling increasingly complex word problems, students develop strong reasoning skills which will facilitate the transition from arithmetic to algebra.
Thinking Blocks programs include word problems involving addition and subtraction, multiplication and division, fractions, and ratio and proportion.
Within each of the four main modules, there are both simple and multi-step word problems for students to solve. These problem sets refer to generalized visual models that can be applied to a variety of scenarios.
While results are not stored from session to session, each student's progress can be monitored during a single practice period. Thinking Blocks supports many of the common core state standards for mathematics, particularly those found in grades 1 through 6. The standards described below come directly from the Common Core website.
Use addition and subtraction within 20 to solve word problems involving situations of adding to, taking from, putting together, taking apart, and comparing, with unknowns in all positions, e.
Solve word problems that call for addition of three whole numbers whose sum is less than or equal to 20, e. Use addition and subtraction within to solve one- and two-step word problems involving situations of adding to, taking from, putting together, taking apart, and comparing, with unknowns in all positions, e.
Use multiplication and division within to solve word problems in situations involving equal groups, arrays, and measurement quantities, e. Solve two-step word problems using the four operations. Represent these problems using equations with a letter standing for the unknown quantity.
Assess the reasonableness of answers using mental computation and estimation strategies including rounding. Solve multi-step word problems posed with whole numbers and having whole-number answers using the four operations, including problems in which remainders must be interpreted.
Solve word problems involving addition and subtraction of fractions referring to the same whole and having like denominators, e.
Solve word problems involving multiplication of a fraction by a whole number, e. Solve word problems involving addition and subtraction of fractions referring to the same whole, including cases of unlike denominators, e. Use benchmark fractions and number sense of fractions to estimate mentally and assess the reasonableness of answers.
Solve real world problems involving division of unit fractions by non-zero whole numbers and division of whole numbers by unit fractions, e. Understand the concept of a ratio and use ratio language to describe a ratio relationship between two quantities.

The latest Tweets from barnweddingvt.com (@mathplayground):
maker of math games, developer of Thinking Blocks, teacher of children, Donors Choose supporter. Massachusetts, USA

Ss independently solve story problems on Math Playground using the bar model strategy. You could have heard a pin drop! #NBS @mathplayground barnweddingvt.com

Best Math Friends is a fantastic and exciting game that motivates learners to practice their word problem skills.
The game is appropriate for all elementary grade levels, as .

Jul 16, · Thinking Blocks Fractions teaches children how to model and solve word problems involving fractions and whole numbers. In this interactive tutorial, children are introduced to /5(17).
Math Playground: Word Problems. Students create models and use multiple skills to solve problems. Math Playground: Math Videos. Math video tutorials. Science: National Geo Kids. Life Cycle Games. ThinkCentral. Our online science curriculum. 4-H Virtual Forest.
4-H Virtual Farm.

Grade 5 math: Here is a list of all of the math skills students learn in grade 5!
These skills are organized into categories, and you can move your mouse over any skill name to preview the skill. L.4 Add and subtract fractions with like denominators: word problems; L.5 Add and subtract mixed numbers with like denominators; L.6 Add and.
Word problems practice quiz for GED Mathematics. These questions test arithmetic skills by presenting problems in a real-life context. |
See full work attached.
Week 2 Discussion 1 Common Core State Standards
Common Core State Standards (CCSS) establish clear expectations for student learning and provide a consistent set of learning standards for all students in the United States regardless of geographic location. This discussion is focused on CCSS and the role these standards take in the school setting. There are two parts to this discussion as explained below.
· Part One: First, in one paragraph, summarize your understanding of the foundation of the CCSS for Math and English Arts. Next, adopting the perspective of a teacher leader, in at least two paragraphs, evaluate how CCSS (Math and English Language Arts) can be used to influence the use of technology- enhanced differentiated instructional strategies to support the needs of all learners. Finally, in one paragraph, justify why it is important to have purposeful planning of differentiated instructional strategies to promote student learning and provide at least one specific example to support your justification.
· Part Two: Include a link to your Folio in your initial post along with a one-paragraph reflection about your experience with the redesign for the Week One Assignment in terms of challenges you encountered and how you overcame those challenges. Be sure to include any difficulties you experienced in revising to meet the components of 21st century student outcomes and 21st century support systems.
Guided Response: Respond to at least two peers. In your responses, include a question about your peer’s technology-enhanced instructional strategies and offer an additional resource that supports an alternative viewpoint. Additionally, discuss your peer’s point of view and whether it is in direct correlation or contrast to yours about the CCSS. Finally, address your peer’s discussion of the challenges he or she faced in the redesign for the Week One Assignment offering supportive ideas for how your peer could overcome these challenges for future redesigns in this course. Though two replies is the basic expectation, for deeper engagement and learning, you are encouraged to provide responses to any comments or questions others have given to you, including the instructor. Responding to the replies given to you will further the conversation and provide additional opportunities for you to demonstrate your content expertise, critical thinking, and real-world experiences with this topic.
Standards and Assessment
This week students will:
1. Evaluate how purposeful planning of differentiated instructional strategies promotes student learning.
2. Describe how embedding evidence-based assessment in the curriculum can help guide teachers and learners in decision making.
3. Evaluate how designing instruction aligned to CCSS, ISTE-S and 21st century skills promotes learner achievement and growth.
During Week One, you discussed diversity in the classroom and how to support diversity through 21st-century teaching and learning. During Week Two, you will discuss differentiated instruction in the classroom as it relates to promoting student learning and the use of Common Core State Standards (CCSS). Additionally, the Week Two Assignment will require you to think about 21st century support systems, CCSS, the International Society for Technology in Education Student (ISTE-S) standards, and their relationship with quality instructional planning, delivery, and learner achievement. Similar to Week One’s Assignment, this week you will redesign prior coursework to include in your Folio.
Burnaford, G., & Brown, T. (2014). Teaching and learning in 21st century learning environments: A reader. Retrieved from https://content.ashford.edu/
· Chapter 3: Assessment in the 21st Century
Framework for 21st century learning. (n.d.). Retrieved from http://www.p21.org/our-work/p21-framework
ISTE Standards for Students. (2012). Retrieved from http://www.iste.org/standards/standards/for-students
Read the standards. (n.d.). Retrieved from http://www.corestandards.org/the-standards
Marzano, R. J. (2006). Classroom assessment & grading that work [Electronic version]. Retrieved from the ebrary database.
Phelps, P. H. (2008). Helping teachers become leaders. The Clearing House, 81(3), 119-122. doi:10.3200/TCHS.81.3.119-122.
· Phelps discusses how the need for school improvement is based on the idea that more teachers need to function as leaders. When we understand the various dimensions of teacher leadership, we can fulfill multiple roles at the school. This resource will support student completion of the discussions and assignment for this week. The full-text version of this article is available through the EBSCOhost database in the Ashford University Library.
Roby, D. E. (2011). Teacher leaders impacting school culture. Education, 131(4), 782-790. Retrieved from the EBSCOhost database.
· Roby discusses how teacher leaders have the ability to shape the culture of the school when given the right opportunities. This resource will support student completion of the discussions and assignment for this week. The full-text version of this article is available through the EBSCOhost database in the Ashford University Library.
Welcome to Week 2
As Week Two begins, you are encouraged to review the Week Five homepage to prepare for when you work in a mock Professional Learning Community (PLC) and complete a team assignment. Start thinking ahead now about your role and responsibilities during Week Five and contact the instructor for clarifications on any expectations. In Week One, diversity through 21st Century Teaching and Learning was investigated. During Week Two, one discussion and an assignment are included, focusing on the Common Core State Standards and 21st Century Skills and Standards.
Common Core State Standards Initiative (CCSSI)
CCSS is an initiative in the United States that outlines what every K-12 student should know and be able to do in English Language Arts (ELA) and Mathematics at the end of each grade level. The CCSSI seeks to establish consistent educational standards across the nation to ensure students are prepared for success after graduating from high school, be it going into a college program or entering the workforce. The goal of CCSSI is to provide more depth and breadth of learning across the grade levels, where students can explore concepts at a deeper level than before. For more information, view this approximately three-minute video from DC Public Schools (2012), which provides a look at why the CCSS were created. When viewing the video, think about how you do or intend to incorporate CCSS into your classroom.
Remember, the current focus of the CCSS is on two content areas, ELA and Math. This week, you focus on both sets of standards and how these relate to technology to enhance differentiated instructional strategies to support all learners. However, before we can begin to talk about technology and differentiated instruction in regards to CCSS, it may be helpful for us to review the two subjects.
ELA includes six categories: reading; writing; speaking and listening; language; media and technology; and cursive and keyboarding. View this approximately six-minute video from the Professional Educators Association (n.d.) about ELA, in which they explain the movement of CCSS and the creation of the ELA standards. While viewing this video, think about your former or current classroom setting, or any exposure you have to the classroom, and how the ELA standards were, or are, being used within the ELA curriculum. If you are not currently in the classroom, consider how you might ensure that you include CCSS within your ELA curriculum. Consider sharing your thoughts on the video as part of your discussions for the week or in the Ashford Café!
The Common Core Mathematics Standards consist of two sections: practice and content. In the practice section, there are eight principles revolving around modeling, constructing arguments, and critiquing mathematical reasoning, to name a few. In the content section, the CCSS are organized into four domains of mastery learning for grade levels kindergarten through fifth grade, middle school (sixth and seventh grade), and high school (ninth through twelfth grade). View this approximately seven-minute video from Dr. Raj Shah, owner and founder of Math Plus Academy (2015), in which the movement of the CCSS and the creation of the Math Standards are explained. While viewing this video, think about your own math experiences in grade school: how is teaching and learning different now than it was while you were in school? Consider sharing your experiences in our Ashford Café!
Now that the CCSS have been revisited, we can evaluate how the CCSS (Math and English Language Arts) can be used to influence the use of technology-enhanced differentiated instructional strategies.
Technology and the CCSS
Upon review of the ELA CCSS, you will notice that technology is embedded into most of the standards. For example:
RI.3.5 Grade 3 students: Use text features and search tools (e.g., key words, sidebars, hyperlinks) to locate information relevant to a given topic efficiently.
RI.1.5 Grade 1 students: Know and use various text structures (e.g., sequence) and text features (e.g., headings, tables of contents, glossaries, electronic menus, icons) to locate key facts or information in a text.
When considering the context of technology, a connection can be made with how utilizing technology can enhance differentiated instructional strategies, thereby supporting the needs of all learners. For example, the two standards listed above clearly demonstrate how students should know how to look up information using key words. Teachers could provide opportunities for students to meet these standards through library and internet related activities. One could also use web-based student anthologies to look up key words while reading a story. All of these methods represent differentiated instruction through the incorporation of technology.
Week Two Assessments Overview
Always review the complete instructions for each assessment on the week’s homepage in addition to viewing this additional guidance.
Discussion 1 – Common Core State Standards
In part one of this discussion, you summarize your understanding of the foundation of the CCSS (Math and English Language Arts) and evaluate how CCSS (Math and English Language Arts) can be used to influence the use of technology-enhanced differentiated instructional strategies to support the needs of all learners. Think about how the incorporation of technology is used to enhance instruction. All too often technology is an afterthought when creating lessons. Consider how you can intertwine technology to ensure that the needs of diverse learners are being met. In part two of the discussion, you will provide a link to your Folio and a one-paragraph reflection about your experiences with the redesign assignment from Week One.
Assignment – 21st Century Skills & Standards
This assignment requires you to think about 21st Century Support Systems, the Common Core State Standards (CCSS), and International Society for Technology in Education Student (ISTE-S) Standards and their relationship with quality instructional planning, delivery, and learner achievement. In this assignment, make the case for how the redesign addresses the relationships between instructional planning, delivery, and learner achievement. Also, discuss what you are teaching in the lesson plan or curriculum project and how it aligns to the components of 21st Century Support Systems, CCSS, and ISTE-S.
Common Core State Standards Initiative. (2012). Read the common core standards: The standards. Retrieved from http://www.corestandards.org/ELA-Literacy/RL/3
DC Public Schools. (2012, Nov 2). Three-min video explaining the common core state standards [Video file]. Retrieved from http://youtu.be/5s0rRk9sER0
Professional Educators Association. (n.d.). ELA and literacy [Video file]. Retrieved from https://www.youtube.com/watch?v=jHo0jg6FwjI#action=share
Shah, R. (2015, Jan 6). Common core math explained [Video file]. Retrieved from https://youtu.be/X_CK1e0Lmxw
Supporting Diversity through 21st Century Teaching and Learning
EDU 696: Capstone 2: Culminating Project (EDG2035A)
Dr. Robert Voelkel
August 31, 2020
Supporting Diversity through 21st Century Teaching and Learning
According to Ashford University, "The MASE graduate plans cross-disciplinary learning experiences that promote individualized academic and social abilities, attitudes, values, interests, and career options for students with exceptionalities" (Ashford University, 2019). All instructors are required to come up with learning plans that address the needs of all learners. Planning meaningful activities positively influences learners, raising their motivation and desire to learn. Educators who incorporate 21st-century skills can be successful beyond expectations. Some areas instructors need to take into account include, but are not limited to, information and media technology, learning innovation, and career guidance. This paper focuses on a sample plan built around the 21st-century learning and innovation skills: communication, critical thinking, collaboration, and creativity.
The redesigned lesson plan is extracted from ECE 642 Quality Curriculum in Early Childhood Education. The purpose of the lesson is the formulation of new vocabulary for describing emotions. The lesson plan supports 21st-century skills since it upholds critical thinking, communication, and creativity. Learners will be able to express themselves and communicate their feelings using the right vocabulary. Learners are expected to master three words and learn how they can be applied in expressing their emotions. The plan will incorporate games, reading journals, and crayons, encouraging the learners to become creative thinkers. The lesson introduction will entail reading a book to learners and giving them an overview of what the class will be covering.
In order to differentiate instruction correctly, the teacher will be required to read widely and consult additional materials so that he or she has a more comprehensive view of what needs to be covered. Lesson assessment will be done through guided practice using a think-and-pair activity. Learners will be given questions about the stories they read, time to reason them out, and a chance to respond. Learners will be allowed to discuss the problem with peers. Independent practice assessment will entail sending the vocabulary worksheets home, where the learners will trace and copy the words. Lesson closure will involve bringing paired learners to the front so that they can share their views and also answer some questions from the instructor. The activities take into consideration the 21st-century skills of independence and innovation.
“21st-century curriculum and lesson design are essential to creating educational experiences that lead to deeper learning that prepares students to navigate a complex, rapidly changing world” (“Framework 21st Century”, n.d). The lesson plan will create an educational experience that requires critical and deeper thinking from learners. The expertise they received will be significant beyond their current level of education.
Modifications are made to the lessons aimed at increasing learner independence and self-reliance over the content they are required to cover and what they are supposed to learn. The inclusion of a memory game in the lesson plan will trigger learners' thinking and make them more creative as they try to find solutions to the game. These successes contribute positively to learners' motivation, because success at one level increases their desire to succeed at the next level. Having exit slips in place also contributes to learners' critical thinking, because they tend to empathize with how a given situation can hurt a person's emotions.
The lesson structure was designed for preschool students aged between 3 and 5 years. The lesson incorporated the 21st-century skills of innovation and critical thinking. If instructors can successfully incorporate 21st-century skills into everyday learning, learners will be well prepared and well equipped to face life matters beyond the school level.
Newman, R. (2013). Teaching and Learning in the 21st Century: Connecting the Dots. San Diego, CA: Bridgepoint Education |
What is force multiplication?
Pascal’s law states that pressure set up in a confined body of fluid, acts equally in all directions, and always at right angles to the containing surface. When a pressure is applied to a fluid trapped in a confined space, that pressure acts on each square millimeter of that surface. The force output of a hydraulic actuator is the result of the pressure applied, and the area to which that pressure is applied.
Force = Pressure x Area
If a given pressure is applied to two identical cylinders, then they will have an equal output force.
How does force multiplication work?
When the same pressure is applied to different sized cylinders, then the larger cylinder will have a greater force output. This is because the area which sees the hydraulic pressure is greater.
Different cylinder sizes create different pressures
The cylinder on the right in this example is twice the diameter of the cylinder on the left.
The area of the circle multiplies the force
The area of a circle is calculated by this formula:
The area of a circle = Pi x the radius squared
When a given pressure is applied to two different sized areas, the larger area will see a larger force output than the smaller area. This force differential is often significant. For example, while a 200mm diameter circle is twenty times larger than a 10mm circle in diameter, the area differential between the two is 400 to 1. When pressure is applied, the output force increase would also be 400 to 1.
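To make the arithmetic concrete, here is a small Python sketch of Force = Pressure x Area for the two piston sizes just mentioned. The 100 bar working pressure is an assumed example value, not something stated above; the 400:1 ratio comes purely from the areas.

```python
import math

def piston_force(pressure_bar, diameter_mm):
    """Output force in newtons: Force = Pressure x Area."""
    area_m2 = math.pi * (diameter_mm / 2000) ** 2    # radius in metres, area = pi * r squared
    return pressure_bar * 1e5 * area_m2              # 1 bar = 100,000 Pa (N per square metre)

pressure = 100                                       # assumed working pressure, in bar
small = piston_force(pressure, 10)                   # 10 mm diameter piston
large = piston_force(pressure, 200)                  # 200 mm diameter piston
print(f"10 mm piston:  {small:9.0f} N")
print(f"200 mm piston: {large:9.0f} N")
print(f"force ratio:   {large / small:.0f} : 1")     # 400 : 1, matching the area ratio
```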
A bottle jack piston uses force multiplication
A hydraulic bottle jack is a perfect example of force multiplication at work.
A reciprocating piston is moved with a hand lever. This piston applies pressure to the fluid, which is then fed through and applied to a larger piston. The output force is magnified due to the area differential of the two pistons, thus making it possible to lift a car by hand. The bottle jack piston moves slowly in relation to the pump piston because of the volume differential between the two chambers.
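A rough sketch of the bottle jack numbers, under assumed values: the piston diameters, pump force and pump stroke below are hypothetical, and the extra mechanical advantage of the hand lever itself is ignored. It simply shows the same area ratio multiplying force while dividing movement, because the same volume of fluid is moved.

```python
import math

def area_mm2(diameter_mm):
    return math.pi * (diameter_mm / 2) ** 2

pump_dia, ram_dia = 10, 45                       # hypothetical pump and ram piston diameters (mm)
ratio = area_mm2(ram_dia) / area_mm2(pump_dia)   # area ratio, about 20 : 1 here

pump_force = 200                                 # assumed force delivered to the pump piston (N)
pump_stroke = 25                                 # assumed pump stroke per cycle (mm)

print(f"area ratio:   {ratio:.2f} : 1")
print(f"output force: {pump_force * ratio:.0f} N")               # force is multiplied by the ratio
print(f"ram movement: {pump_stroke / ratio:.2f} mm per stroke")  # movement is divided by it
```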
The weight of a load affects the hydraulic pressure
The hydraulic pressure required to lift a load is dictated by the mass of the load, and the area to which it is applied.
The weight of a load changes the hydraulic pressure required
The three cylinders on the left are all lifting the same weight, 20,000 kg (20 T). This mass applies a force of about 196,000 N to the piston area of the cylinder. The hydraulic pressure that is created is called the load-induced pressure, i.e. the pressure that is induced in the fluid by the load applied to it. When the same force is applied to different cylinder areas, the load-induced pressure will be highest in the smallest cylinder.
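As an illustration, the sketch below computes the load-induced pressure for the 20,000 kg load described above across a few cylinder bores. The bore sizes are assumed example values rather than the ones in the figure; the point is simply that the smallest bore sees the highest pressure.

```python
import math

load_kg = 20000                    # the 20 tonne load from the example above
force_n = load_kg * 9.8            # weight of the load, roughly 196,000 N

for bore_mm in (100, 150, 200):    # assumed cylinder bores, smallest to largest
    area_m2 = math.pi * (bore_mm / 2000) ** 2
    pressure_bar = force_n / area_m2 / 1e5        # load-induced pressure = force / area
    print(f"{bore_mm} mm bore: about {pressure_bar:.0f} bar load-induced pressure")
```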
Load induced pressure vs system pressure
When the pressure applied to a cylinder is greater than the load induced pressure, the cylinder will begin to move forward and lift the load. As the load is lifted, the pressure seen within that area of the circuit is generally that of the load induced pressure, as long as there is no resistance to movement.
As the cylinder mechanically reaches the end of its stroke, the pressure will rise to the maximum allowed within that area of the system. This maximum pressure allowed is limited by valving such as pressure relief valves, pressure reducing valves or similar hydraulic fittings.
Need more information?
To help you choose the right hydraulic hose for your project, download our hose selection guide, with 15 keys to selecting the right hose.
What does “Hydraulic” mean?
The term “Hydraulic” is derived from ancient Greek. The word “hydraulics” originates from the Greek word hydraulikos which in turn originates from hydraulos meaning water organ which in turn comes from hydor, Greek for water, and aulos, meaning pipe. It’s also defined as “operated by pressure transmitted through a pipe by a liquid, such as water or oil.” [Collins Dictionary]
How do hydraulic machines work?
Generally speaking, a hydraulic machine gains controlled motion through the use of a transmitted fluid. There is a common assumption that a machine operated by a fluid is a hydraulic machine, but this is not always the case. The water wheel above on the left turns with the weight of the water. But it is not a hydraulic machine.
A correct definition of a hydraulic system is one where motion and force output is created by the use of a pressurised fluid. In the case of the water wheel, the fluid is not entrapped, and therefore, pressure cannot be applied to it at any point.
Blaise Pascal defined The Fundamental Laws of Hydraulics
What is Pascal’s law?
Blaise Pascal was born in 1623 and died in 1662 at the age of 39. Pascal defined the first of two fundamentals of hydraulics – known as Pascal’s Law.
The first fundamental rule of hydraulics
Pascal’s law states that pressure set up in a confined body of fluid, acts equally in all directions, and always at right angles to the containing surface.
The second fundamental rule of hydraulics
A second important fundamental of hydraulics is that a fluid is considered to be virtually incompressible. It will compress in volume about one third of a percent for every 70 Bar or 1000 PSI of applied pressure. In this example, as pressure was manually applied, the plunger would not be able to move down the measuring flask unless the fluid was able to escape. The applied pressure would shatter the glass flask very quickly.
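As a back-of-the-envelope check of that figure, the sketch below treats the rule of thumb (about a third of a percent of volume per 70 bar) as a simple linear approximation. The 100-litre starting volume and the example pressures are assumed values.

```python
def compressed_volume(volume_litres, pressure_bar):
    """Approximate volume after compression, assuming roughly 0.33% volume
    reduction per 70 bar, as quoted above, and treating it as linear."""
    return volume_litres * (1 - 0.0033 * pressure_bar / 70)

start = 100.0                            # assumed 100 litres of trapped fluid
for bar in (70, 210, 350):               # example pressures
    print(f"{bar} bar: {compressed_volume(start, bar):.2f} litres remaining")
```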
Motion, force and power can be transmitted through a solid object. In cases of direct force transmission, the output force will be equal to the input force, minus some friction. For example, if the input force is 25 Newtons, the output force will be 25 Newtons minus an amount of friction.
How does fluid transmit force and movement?
Fluid can be used to transmit force and movement. This is possible because the fluid will act as a solid when it is held in a confined space.
Mechanical vs Fluid Transmission
Due to Pascal’s Law, any pressure that is applied to the fluid at piston 1 will be transmitted to piston 2. If the areas of the pistons are the same, there will be a direct force transmission.
The first advantage of fluid transmission
The first advantage of fluid transmission is that the direction of output force is not confined to the same axis of the input force. In the example above, the output movement and force is at 90 degrees to the direction of input.
The second advantage of fluid transmission
The second advantage of transmitting power through a fluid is that the fluid may be transmitted through a tube, pipe or high pressure hose. This makes it possible for the output force to be directed anywhere, and to be applied from a distance.
The forestry equipment shown above is a typical example of a hydraulic application. Hydraulics allows the application of controlled motion and power in limitless directions.
Need more information?
To help you choose the right hydraulic hose and fittings for your project, download our guide with 15 Keys to Selecting the Right Hose. |
The Schiehallion experiment was an 18th-century experiment to determine the mean density of the Earth. Funded by a grant from the Royal Society, it was conducted in the summer of 1774 around the Scottish mountain of Schiehallion, Perthshire. The experiment involved measuring the tiny deflection of the vertical due to the gravitational attraction of a nearby mountain. Schiehallion was considered the ideal location after a search for candidate mountains, thanks to its isolation and almost symmetrical shape.
Schiehallion's isolated position and symmetrical shape lent well to the experiment
The experiment had previously been considered, but rejected, by Isaac Newton as a practical demonstration of his theory of gravitation; however, a team of scientists, notably Nevil Maskelyne, the Astronomer Royal, was convinced that the effect would be detectable and undertook to conduct the experiment. The deflection angle depended on the relative densities and volumes of the Earth and the mountain: if the density and volume of Schiehallion could be ascertained, then so could the density of the Earth. Once this was known, it would in turn yield approximate values for those of the other planets, their moons, and the Sun, previously known only in terms of their relative ratios.
A pendulum hangs straight downwards in a symmetrical gravitational field. However, if a sufficiently large mass such as a mountain is nearby, its gravitational attraction should pull the pendulum's plumb-bob slightly out of true (in the sense that it doesn't point to the centre of mass of the Earth). The change in plumb-line angle against a known object—such as a star—could be carefully measured on opposite sides of the mountain. If the mass of the mountain could be independently established from a determination of its volume and an estimate of the mean density of its rocks, then these values could be extrapolated to provide the mean density of the Earth, and by extension, its mass.
Isaac Newton had considered the effect in the Principia but pessimistically thought that any real mountain would produce too small a deflection to measure. Gravitational effects, he wrote, were only discernible on the planetary scale. Newton's pessimism was unfounded: although his calculations had suggested a deviation of less than 2 minutes of arc (for an idealised three-mile high [5 km] mountain), this angle, though very slight, was within the theoretical capability of instruments of his day.
An experiment to test Newton's idea would both provide supporting evidence for his law of universal gravitation, and estimates of the mass and density of the Earth. Since the masses of astronomical objects were known only in terms of relative ratios, the mass of the Earth would provide reasonable values to the other planets, their moons, and the Sun. The data were also capable of determining the value of Newton's gravitational constant G, though this was not a goal of the experimenters; references to a value for G would not appear in the scientific literature until almost a hundred years later.
Finding the mountain
Chimborazo, the subject of the French 1738 experiment
A pair of French astronomers, Pierre Bouguer and Charles Marie de La Condamine, were the first to attempt the experiment, conducting their measurements on the 6,268-metre (20,564 ft) volcano Chimborazo in the Viceroyalty of Peru in 1738. Their expedition had left France for South America in 1735 to try to measure the meridian arc length of one degree of latitude near the equator, but they took advantage of the opportunity to attempt the deflection experiment. In December 1738, under very difficult conditions of terrain and climate, they conducted a pair of measurements at altitudes of 4,680 and 4,340 m. Bouguer wrote in a 1749 paper that they had been able to detect a deflection of 8 seconds of arc, but he downplayed the significance of their results, suggesting that the experiment would be better carried out under easier conditions in France or England. He added that the experiment had at least proved that the Earth could not be a hollow shell, as some thinkers of the day, including Edmond Halley, had suggested.
The symmetrical ridge of Schiehallion viewed across Loch Rannoch
That a further attempt should be made on the experiment was proposed to the Royal Society in 1772 by Nevil Maskelyne, Astronomer Royal. He suggested that the experiment would "do honour to the nation where it was made" and proposed Whernside in Yorkshire, or the Blencathra-Skiddaw massif in Cumberland as suitable targets. The Royal Society formed the Committee of Attraction to consider the matter, appointing Maskelyne, Joseph Banks and Benjamin Franklin amongst its members. The Committee despatched the astronomer and surveyor Charles Mason to find a suitable mountain.
After a lengthy search over the summer of 1773, Mason reported that the best candidate was Schiehallion (then spelled Schehallien), a 1,083 m (3,553 ft) peak lying between Loch Tay and Loch Rannoch in the central Scottish Highlands. The mountain stood in isolation from any nearby hills, which would reduce their gravitational influence, and its symmetrical east–west ridge would simplify the calculations. Its steep northern and southern slopes would allow the experiment to be sited close to its centre of mass, maximising the deflection effect. Coincidentally, the summit lies almost exactly at the latitudinal and longitudinal centre of Scotland.
Mason declined to conduct the work himself for the offered commission of one guinea per day. The task therefore fell to Maskelyne, for which he was granted a temporary leave of his duties as Astronomer Royal. He was aided in the task by mathematician and surveyor Charles Hutton, and Reuben Burrow who was a mathematician of the Royal Greenwich Observatory. A workforce of labourers was engaged to construct observatories for the astronomers and assist in the surveying. The science team was particularly well-equipped: its astronomical instruments included a 12 in (30 cm) brass quadrant from Cook's 1769 transit of Venus expedition, a 10 ft (3.0 m) zenith sector, and a regulator (precision pendulum clock) for timing the astronomical observations. They also acquired a theodolite and Gunter's chain for surveying the mountain, and a pair of barometers for measuring altitude. Generous funding for the experiment was available due to underspend on the transit of Venus expedition, which had been turned over to the Society by King George III of the United Kingdom.
Observatories were constructed to the north and south of the mountain, plus a bothy to accommodate equipment and the scientists. The ruins of these structures remain on the mountainside. Most of the workforce was housed in rough canvas tents. Maskelyne's astronomical measurements were the first to be conducted. It was necessary for him to determine the zenith distances with respect to the plumb line for a set of stars at the precise time that each passed due south (astronomic latitude). Weather conditions were frequently unfavourable due to mist and rain. However, from the south observatory, he was able to take 76 measurements on 34 stars in one direction, and then 93 observations on 39 stars in the other. From the north side, he then conducted a set of 68 observations on 32 stars and a set of 100 on 37 stars. By conducting sets of measurements with the plane of the zenith sector first facing east and then west, he successfully avoided any systematic errors arising from collimating the sector.
To determine the deflection due to the mountain, it was necessary to account for the curvature of the Earth: an observer moving north or south will see the local zenith shift by the same angle as any change in geodetic latitude. After accounting for observational effects such as precession, aberration of light and nutation, Maskelyne showed that the difference between the locally determined zenith for observers north and south of Schiehallion was 54.6 arc seconds. Once the surveying team had provided a difference of 42.94″ latitude between the two stations, he was able to subtract this, and after rounding to the accuracy of his observations, announce that the sum of the north and south deflections was 11.6″.
Maskelyne published his initial results in the Philosophical Transactions of the Royal Society in 1775, using preliminary data on the mountain's shape and hence the position of its center of gravity. This led him to expect a deflection of 20.9″ if the mean densities of Schiehallion and the Earth were equal. Since the deflection was about half this, he was able to make a preliminary announcement that the mean density of the Earth was approximately double that of Schiehallion. A more accurate value would have to await completion of the surveying process.
Maskelyne took the opportunity to note that Schiehallion exhibited a gravitational attraction, and thus all mountains did; and that Newton's inverse square law of gravitation had been confirmed. An appreciative Royal Society presented Maskelyne with the 1775 Copley Medal; the biographer Chalmers later noting that "If any doubts yet remained with respect to the truth of the Newtonian system, they were now totally removed".
The work of the surveying team was greatly hampered by the inclemency of the weather, and it took until 1776 to complete the task.[a] To find the volume of the mountain, it was necessary to divide it into a set of vertical prisms and compute the volume of each. The triangulation task falling to Charles Hutton was considerable: the surveyors had obtained thousands of bearing angles to more than a thousand points around the mountain. Moreover, the vertices of his prisms did not always conveniently coincide with the surveyed heights. To make sense of all his data, he hit upon the idea of interpolating a series of lines at set intervals between his measured values, marking points of equal height. In doing so, not only could he easily determine the heights of his prisms, but from the swirl of the lines one could get an instant impression of the form of the terrain. Hutton thus used contour lines, which have since come into common use for depicting cartographic relief.
Hutton's solar system density table
Hutton had to compute the individual attractions due to each of the many prisms that formed his grid, a process which was as laborious as the survey itself. The task occupied his time for a further two years before he could present his results, which he did in a hundred-page paper to the Royal Society in 1778. He found that the attraction of the plumb-bob to the Earth would be 9,933 times that of the sum of its attractions to the mountain at the north and south stations, if the density of the Earth and Schiehallion had been the same. Since the actual deflection of 11.6″ implied a ratio of 17,804:1 after accounting for the effect of latitude on gravity, he was able to state that the Earth had a mean density of 17,804/9,933, or about 9/5, that of the mountain. The lengthy process of surveying the mountain had not therefore greatly affected the outcome of Maskelyne's calculations. Hutton took a density of 2,500 kg·m−3 for Schiehallion, and announced that the density of the Earth was 9/5 × 2,500 = 4,500 kg·m−3.
That the mean density of the Earth should so greatly exceed that of its surface rocks naturally meant that there must be more dense material lying deeper. Hutton correctly surmised that the core material was likely metallic, and might have a density of 10,000 kg·m−3. He estimated this metallic portion to occupy some 65% of the diameter of the Earth. With a value for the mean density of the Earth, Hutton was able to set some values to Jérôme Lalande's planetary tables, which had previously only been able to express the densities of the major solar system objects in relative terms.
Main article: Cavendish experiment
A more accurate measurement of the mean density of the Earth was made 24 years after Schiehallion, when in 1798 Henry Cavendish used an exquisitely sensitive torsion balance to measure the attraction between large masses of lead. Cavendish's figure of 5,448 ± 33 kg·m−3 was only 1.2% from the currently accepted value of 5,515 kg·m−3, and his result would not be significantly improved upon until 1895 by Charles Boys.[c] The care with which Cavendish conducted the experiment and the accuracy of his result have since led his name to be associated with it.
John Playfair carried out a second survey of Schiehallion in 1811; on the basis of a rethink of its rock strata, he suggested a density of 4,560 to 4,870 kg·m−3, though the then elderly Hutton vigorously defended the original value in an 1821 paper to the Society. Playfair's calculations had raised the density closer towards its modern value, but the result was still too low and significantly poorer than Cavendish's computation of some years earlier.
Arthur's Seat, the site of Henry James's 1856 experiment
The Schiehallion experiment was repeated in 1856 by Henry James, director-general of the Ordnance Survey, who instead used the hill Arthur's Seat in central Edinburgh. With the resources of the Ordnance Survey at his disposal, James extended his topographical survey to a 21-kilometre radius, taking him as far as the borders of Midlothian. He obtained a density of about 5,300 kg·m−3.
An experiment in 2005 undertook a variation of the 1774 work: instead of computing local differences in the zenith, the experiment made a very accurate comparison of the period of a pendulum at the top and bottom of Schiehallion. The period of a pendulum is a function of g, the local gravitational acceleration. The pendulum is expected to run more slowly at altitude, but the mass of the mountain will act to reduce this difference. This experiment has the advantage of being considerably easier to conduct than the 1774 one, but to achieve the desired accuracy, it is necessary to measure the period of the pendulum to within one part in one million. This experiment yielded a value of the mass of the Earth of 8.1 ± 2.4 × 10²⁴ kg, corresponding to a mean density of 7,500 ± 1,900 kg·m−3.[d]
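As a quick sanity check of those figures, dividing the quoted mass by the volume of a sphere of the Earth's mean radius (about 6,371 km, a value assumed here rather than given in the text) does reproduce the quoted density.

```python
import math

earth_radius_m = 6.371e6                       # assumed mean radius of the Earth
earth_volume_m3 = (4 / 3) * math.pi * earth_radius_m ** 3

mass_kg = 8.1e24                               # central value from the 2005 pendulum experiment
density = mass_kg / earth_volume_m3
print(f"mean density: {density:.0f} kg/m^3")   # roughly 7,500 kg/m^3, as quoted above
```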
A modern re-examination of the geophysical data was able to take account of factors the 1774 team could not. With the benefit of a 120-km radius digital elevation model, greatly improved knowledge of the geology of Schiehallion, and in particular a computer, a 2007 report produced a mean Earth density of 5,480 ± 250 kg·m−3. When compared to the modern figure of 5,515 kg·m−3, it stood as a testament to the accuracy of Maskelyne's astronomical observations.
Schiehallion force diagram
Consider the force diagram to the right, in which the deflection has been greatly exaggerated. The analysis has been simplified by considering the attraction on only one side of the mountain. A plumb-bob of mass m is situated a distance d from P, the centre of mass of a mountain of mass MM and density ρM. It is deflected through a small angle θ due to its attraction F towards P and its weight W directed towards the Earth. The vector sum of W and F results in a tension T in the pendulum string. The Earth has a mass ME, radius rE and a density ρE.
The two gravitational forces on the plumb-bob are given by Newton's law of gravitation:

F = G · MM · m / d² and W = G · ME · m / rE²

where G is Newton's gravitational constant. G and m can be eliminated by taking the ratio of F to W:

F / W = (MM · rE²) / (ME · d²) = (ρM · VM · rE²) / (ρE · VE · d²)

where VM and VE are the volumes of the mountain and the Earth. Under static equilibrium, the horizontal and vertical components of the string tension T can be related to the gravitational forces and the deflection angle θ:

T · sin θ = F and T · cos θ = W

Substituting for T:

tan θ = F / W = (ρM · VM · rE²) / (ρE · VE · d²)

Since VE, VM and rE are all known, θ has been measured and d has been computed, then a value for the ratio ρE : ρM can be obtained:

ρE : ρM = (VM · rE²) : (VE · d² · tan θ) |
Introduction- The way humans communicate and share ideas and concepts in society is complex. How are ideas conceptualized? How are they explained? How does discourse relate, and how do humans understand messages? What is true about language, and what is not? These are just some of the issues surrounding theories of language acquisition and development. However, a full review of all current linguistic theories is out of the realm of this paper, thus we will concentrate on a single theory of language acquisition. First, though, it is useful to understand the basic themes of theoretical linguistics, a branch of the science of speech concerned with the way humans use core factors of language, and how those core precepts are developed within a particular culture. Regardless of the language grouping, human languages have three major commonalities: articulation (the production of speech sounds, sometimes including non-verbal cues); perception (the way human ears respond to speech and how the brain analyzes the messages); and acoustics (physical characteristics of sound like color, volume, amplitude, and frequency) (Ottenheimer 2006 34-47). As one might imagine, scholars and philosophers all have different ideas on the theoretical constructs of the way humans acquire, develop, and utilize language. Even ancient philosophers like Plato had thoughts on whether children were born with an innate sense of meaning already inside their brain, or whether it was social interaction that caused different skills to be forthcoming. For Plato, not knowing or understanding the various language families, much of learning was relearning: children were born with an innate sense of the world and just needed practice "remembering" how to communicate (Tomasello 2008). After the Renaissance, and into the Age of Enlightenment, philosophers like Hobbes and Locke argued that knowledge (of which language is an essential determiner for them) emerged from the senses (Harrison 2002).
Noam Chomsky, and other linguistic scholars, believe that human language is the sense of that language — and culture. French, for instance, is a historical, social and political notion that is expressed linguistically as well. Thus, commonalities in culture (e.g. the French, English, Italian, Swiss, etc.) are amended by language — in this case, the commonalities of linguistic structure, as opposed to the way Chinese would not be common to French, either in language or in human culture (Linguistic Relativity Hypothesis 2003). In the late 1950s, however, psychologist B.F. Skinner took past theories and formulated a newer approach — the behaviorist theory of language acquisition. In his 1957 book, Verbal Behavior, Skinner postulated that language was divisible into units and that it was acquired through both repetition and reinforcement; it was a later step to move from repeating a word, "tree" for example, to understanding that the spoken and written words form a shorthand for an object, and that the object need not be identical each time (e.g. the cognition that there is one general word for tree, but hundreds of examples) (Skinner 1992).
Some linguists embraced the theory, indicating that while it was incomplete, it did help explain some of the commonalities of linguistic behavior across cultures, and was at least a way to understand one of the aspects of language acquisition and development. Others, however, saw behaviorism as deconstruction in the worst sense: a way to look at only one small part of language, to ascribe only physical nuances and characteristics to something far more complex, and to simply take "old experimental psychology," dress it up with a new bit of frosting for theory, and supply the operative word "conditioning" in order to establish the veracity of linguistic culture (Carroll (ed.) 1956, 41). However, the very basis of this issue goes beyond just acquisition, and asks us to define the basis of usable linguistic theory in reference to robust discourse.
Definition of Discourse — Discourse analysis, or discourse studies, is a broad term for a rubric of approaches to written, spoken, or signed language and the way the participants interact. The objects of discourse analysis — discourse, writing, talking, conversation, really any communicative event — are typically defined much like basic linguistic phenomena: patterns of sentences, propositions, speech acts, etc. However, contrary to much of traditional linguistics, discourse analysis not only focuses on the study of language use beyond sentence structure, it also works with naturally occurring language, and has relevance in a variety of social science fields (Blommaert 2005).
Discourse analysis is not so much a single defining "noun," but more a way of approaching linguistics — a template, if you will, as a research method for thinking about a problem. It is neither completely quantitative nor completely qualitative, nor does it provide a tangible means of answering all the problems based on empirical research. Instead, as a method of research, it enables access to the ontological approach (proof via intuition and reason) combined with epistemological assumptions (how is knowledge acquired, how do we know what we know) when dealing with a project, statement, or even classification of text. In other words, by using the discourse analysis method, one can find the hidden motivations behind a text or behind a specific method of research, and then interpret that as a way to understand the author or conversation better. Since every text, every author, indeed every conversation has multiple levels at which it can be understood, discourse analysis allows for a more robust look at the entire picture, not just what we initially read, see or hear (Frohmann 1992).
Discourse analysis is a theoretical approach to what we are all trying to do in the classroom — that is, implement Bloom's hierarchy and teach beyond rote so that students understand the necessity for analysis and synthesis, and so that they know how to handle subjectivity within a text. While the term discourse analysis is relatively new, critical thinking about situations and text is not. What postmodern discourse theory does, though, is move from there being a single particular view of the world to one in which we can see the world as fragmented, and in which individual interpretation is subjective — an interpretation that is at least marginally conditioned by the social and cultural forces that surround us all. Somewhat akin to deconstruction, discourse analysis allows the community to actively participate in the interpretation of the conversation. It is interesting that almost a century ago, one of the pioneers of modern educational theory, John Dewey, defined critical thought as: "active, persistent, and careful consideration of any belief or supposed form of knowledge in the light of the grounds that support it and the further conclusion to which it tends" (Dewey in Lyons (ed.) 2010).
The Discourse Community — In simple terms, a discourse community is a group of people who share a body of knowledge, a group culture, or even something in common like language, interests, or environments, or something more particular — a club, a meeting about an issue, or a classroom. Bringing people together from divergent group structures (demographic, psychographic, geographic, etc.) is quite common within the classroom, which thus becomes its own unique learning environment. In any language class, too, there is often far more discussion and group sharing, simply because of the relationship of the individuals to the text and to the rest of the group (language learning tends to break down some social barriers) (Porter 1992). Within this context there are six important distinctions, first developed by Swales (1990), that help us define a context in which we may create a more robust basis for language learning. A discourse community:
1. Has a broadly agreed set of common, yet public, goals. For instance, in the classroom the obvious goal is to ensure that a proper learning environment is available for students to learn a language, that an effective curriculum is available, and other specifics based on the particular class in question.
2. Has mechanisms of intercommunication among the members of the group. Any classroom, particularly a language learning environment, has regular and clear channels of intercommunication between the instructor and the students, and between groups. Often, language communication is enhanced between members of the group.
3. Uses its participatory mechanisms primarily to provide information and feedback. In the language classroom, formative assessments (feedback) are both immediate and regular. Each time a student repeats a phrase, reads, or is asked to translate, feedback and information are transferred.
4. Utilizes and possesses one or more genres of communication so that its aims are met and moved forward. Each classroom is unique, but moreover, the very nature of language learning allows for a clear differentiation in communication (speaking, answering questions, dialog, writing, reading, role-playing, explaining drawings, etc.).
5. Has a specific lexis. Usually, language learning is broken into a series of steps so that there is a broader understanding of expectations and learning targets. Each step has its own lexis of vocabulary, each part foundational to the next.
6. Has a threshold level of members with a suitable degree of relevant content and expertise. Again, depending on the structure of the class during the semester, quarter, or year, the teacher is the expert and the students are the apprentices who will become experts; with each new cycle of students, novices progress forward (Swales 1990).
Discourse Analysis in Language Pedagogy — Specifically, discourse analysis in the classroom has important applications in the areas of phonology, grammar, and vocabulary development.
Phonology — Generally, phonology can be used to understand the ethnic basis of what the student brings into the classroom. Within the classroom structure, though, pronunciation and intonation are the most vibrant ways we can use discourse analysis. Traditional pronunciation pedagogy breaks down each part of the sound and works within that microstructure to understand dialect. From a discourse analysis method, though, the problem becomes far more complex: what sound precedes, what sound comes after — for when words and sounds follow each other in speech they may undergo considerable changes and modifications. What then happens depends on whether the word is said in isolation or in context, producing assimilations and elisions (where sounds from the citation form are dropped — for example, “most men” becomes “mos-men” in conversation). In the language classroom, then, discourse analysis can help us understand and correct these issues so that the learner acquires the correct pronunciation during the novice or learning stage (McCarthy 1991).
Grammar — At its core, the methodology of discourse analysis is a way of conceptualizing language “in use.” Grammar is the set of rules by which sounds and words are structured, describing a particular language or group of speakers. Grammar evolves through usage and through the way populations are separated, so that specific forms of meaning can be ascribed to a way of thinking. Generative grammar, by contrast, is a more modern, Chomskyan idea. This theory says that each sentence in a language has two levels of representation — a surface structure and a deep structure — in which the similarities between languages, culture and thought occur (Chomsky 1965). A discourse analysis method places more importance both on the texts through which grammatical concepts are presented and on the connecting role between grammatical forms. “Knowing” grammar no longer means that one memorizes declensions and grammar facts, but the reasons behind the grammar and the way grammatical rules are used to convey meaning — context dependence, mode, article use, etc. Once these basics are understood from an analytical point of view, discourse theory holds that there will be a greater understanding of the overall template of the grammar, and therefore a better understanding of the language in general. “Grammar … has a direct role in welding clauses, turns and sentences into discourse” (McCarthy 1991).
Vocabulary — It is in the teaching of vocabulary that the discourse method excels. Vocabulary cannot be effectively taught out of context; it is only within the more macro environment of discourse that any intended meaning becomes clear. One might argue that there is a basic “dictionary” definition of a word, but the intended meaning of such words is not found until one takes a contextual approach. The word “tree,” for example, has 18 dictionary definitions, the most common of which refers to a plant with a woody stem. If the context of the sentence was, “Mary saw a large, very old, apple tree,” this definition would work — in context. If, however, the sentence was, “Hank forgot that the complexities of this particular data set require a different set of rules for the programming tree,” the meaning is quite different.
Thus, the meaning of the word, in this case, depends not on rote vocabulary but on the actual phrase in which it is used. This is particularly true when dealing with specialized vocabulary for the sciences, where words carry specific uses, not just specific meanings. When teaching grammar through a discourse analysis method, sentence-level examples are not always rich enough to support language learning. “What are needed are many fully contextualized examples … to provide learners with the necessary exposure to and practice with ‘else,’ a function word that is semantically, grammatically, and textually complex” (The Handbook of Discourse Analysis 2003).
Discourse Analysis and Language Skills — There are two distinct processes when teaching language: 1) transmitting ideas and intentions to others and 2) interpreting and understanding the text/message produced by another speaker. Discourse analysis asks us to produce knowledge using strategies that help the learner speak or write, using formative assessments to check whether the audience is on track. When interpreting discourse we also combine strategies (listening, reading) while, at the same time, relying on our past experience to help put material in context, as well as our anticipation of what we think might occur within the context of the sentence or paragraph. It is important that language teachers use both productive skills and interpretive events so that the inclusion of discourse becomes part of the paradigm of that language (Anthony et al. 2007).
Additionally, there are two types of learning/knowledge that are aided by the use of discourse analysis theory. Prior and shared knowledge, for instance, including repetitive skills, involve the activation of schematic and contextual knowledge. Schematic knowledge is usually defined as patterned knowledge — something so innate that it comes naturally. A pattern is activated by certain expectations — a person sees a dog running and will classify it as a Boxer, based on past or prior knowledge (this is obviously a goal in teaching a language, the so-called “think in the new language” idea) (Davis, Shrobe & Szolovits 1993). The second type of knowledge assisted by discourse analysis is contextual — the overall perception of what the learner hears, sees, or infers from the situation. This is more complex because it takes into account both the past and the future, as well as subtle body-language signals and prior knowledge. Language teachers can use discourse analysis to provide learners with a number of activities that stimulate both these types of knowledge, and move the new language into a part of the brain that allows one to analyze just what is happening in that language. During that process, “it is important that learners have the opportunity to combine … phonological signals … lexicogrammatical signals … content organization … and contextual features” (Schiffrin: 717).
Activities to Bolster Discourse — For our purposes, there are three major tools that draw on the theoretical use of discourse analysis in a more multi-disciplinary sense, and are thus quite useful within the modern language classroom:
The Social Languages Tool — Humans build language through more than just words; they use vernacular phrases, idioms, less common terms, and structures that are not necessarily technical, but understood and engaging. How one uses language is then indicative of how one is perceived by peers (accepted within a group) or by the dominant culture (accepted use of language). This is constantly apparent in the contemporary world, in which phrases like “down with it” (I want to) or “That be hap gear” (Those are nice clothes) are an accepted part of language, but not of the dominant culture. Phrases from this more social language are not part of the strict lexicon in a language classroom, but discourse analysis of those phrases can allow for a greater understanding of the cultural viability of language (Stubbs 1984).
The Intertextuality Tool — A relatively new form of theory, intertextuality revolves around the shaping of a text’s meaning by other texts: what other texts are brought to the learner, and what parts of other materials then enter the language learner’s toolbox. It can also mean that within a text an author uses another text as a literary means of providing a more robust scenario, as in John Steinbeck’s retelling of the Genesis story in East of Eden, set instead in the Salinas Valley in Northern California. Using intertextuality in the language classroom also allows for interpretation and more discussion about the actual meaning of the text. Use of intertextuality in writing allows for greater depth within the writing assignment, and a push towards real understanding of the new language (Jesson 2010).
The Situated Meaning Tool — Meaning is quite complex in language. It requires interpretation as well as expectation. Psychologists still do not know how we construct meaning, but they do understand that when learning a language, meaning must precede understanding. Again, by focusing on context, by applying the entire picture of the phrase, routine guessing becomes more of a theoretical approach to dialog, and therefore holds greater meaning. Drilling in different contextual uses forces language learners to interrupt themselves cognitively, and to think about the manner in which shared meaning between people establishes a greater “true” meaning of the word or phrase (Gee 2010).
Conclusions – Traditional theorists, like Skinner, approached language learning as the manner in which verbal behavior is mediated by the same controlling variables as any other operant behavior (Michael 1984). For Skinner, language acquisition does incorporate verbal responses as dependent variables, but these hold a certain common structure when analyzed rigorously, and are indeed the factors most robust in helping students acquire a new language:
Emission — Responses emitted are usually interpreted to have sense.
Energy Level — Another term for response magnitude and the veracity of the experiment.
Speed — Implies high strength if the response is quick.
Repetition — “Mary, Mary, Mary!” indicates urgent and symbolic behavior.
Limitations — Individual differences may place mislabeled limitations on data that appear factual.
Overall Frequency — The overall frequency, fed into a statistical model of sufficient depth, provides validity to the approach (Skinner 1992).
Skinner’s language acquisition theory is, however, fairly straightforward to summarize, and certainly provides both definitions and a template for operational cognition and linguistic function.
Table 1 — Summary of Verbal Operants that are also part of Discourse Theory
Mand (controlled by a motivating condition) — A child comes into the kitchen while Mom, harried, is preparing supper. The child asks for a glass of milk; Mother stops, opens the refrigerator, and gives him the milk.
Tact (controlled by a nonverbal stimulus) — Mary looks out the window and comments, “Gee, despite it still being February, it’s so warm and Fall-like today.” Her friend comments back, “You’re right.”
Intraverbal (controlled by the verbal behavior of others) — A mother asks her son what score he received on today’s spelling bee. He replies, enthusiastically, “An A.” Mother says, “Very, very good!”
Echoic (controlled by the verbal behavior of others) — Teacher says, “Tree in German is Baum.” Tom repeats, “Tree is Baum?” The teacher indicates this is correct.
Autoclitic (self-critique of one’s own verbal behavior) — A child awakens his parents at 1 a.m. and says, “I think I’m sick, and I am lonely.” Father dons clothing without a second thought and rushes the child to the hospital.
(Source: Frost & Bondy 2006)
Chomsky, for one, finds the Skinnerian principles to be limiting and lacking in their explanation of robustness in language acquisition. Modeling behavior is uncontroversial, Chomsky says, but the actual models used are much more complex and intricate than those Skinner presents. Skinner underestimates the complexity of the problem of linguistic acquisition, and in particular the manner in which the complexity of the organism being studied interacts with the ecological universe itself. Thus, the basic criticisms are really in three parts:
Language Use: Even basic child grammar is significantly more complex than Skinner allows.
Generalizations: The cause and effect portion of the paradigm is not always clear and concise.
Instinct vs. Cognition: Because of the tremendous complexity of human language and behavior relations, it is sometimes difficult to construct an adequate model of complex behavior as non-instinctual and cognitive. There may be signal processing, in fact, that transcends both, resulting in mixed instructional issues (Chomsky n.d.)
Discourse analysis asks us to take these basic steps, grounded theories if you will, and amalgamate and enhance them with a broader approach to language learning. One of the values we have seen in discourse analysis is the way it gives us a tool for creating and understanding language use as part of everyday life. We imply personhood within language discourse, and that personhood is constructed. Particularly in the language classroom, people are constantly interacting with others — constantly negotiating a working consensus for how they define their parameters of learning and what characteristics are necessary for moving from novice learner to expert learner. Studies have shown that this continual defining of personhood also defines language and the way language is communicated (dialects, etc.), as well as placing language learning in the context of the mind (Brown & Yule 1983).
We see that the language classroom may also be a microcosm of the universe in which the students live. They are active agents within that world, with both a strategic and tactical view of their universe — this is also emulated within the classroom. People also locate themselves locally and globally by their physical presence in a community as well as through history. They engage in face-to-face encounters, but as a group also understand that in the broader context and dynamics they are expressing their own world view. In language learning, discourse analysis allows them a broader context of these dynamics, and a way to move beyond rote into text and actual linguistic understanding of the new language (Bloome 2005).
Anthony, L, Palius, M, Maher, C & Moghe, P 2007, ‘Using Discourse Analysis to Study a Cross-disciplinary Learning Community’, Journal of Engineering Education, vol 96, no. 2, pp. 141-52.
Blommaert, J 2005, Discourse, Cambridge University Press, Cambridge.
Bloome, D 2005, Discourse Analysis and the Study of Classroom Language and Literacy Events, Erlbaum, Mahwah, NJ.
Brown, G & Yule, G 1983, Discourse Analysis, Cambridge University Press, Cambridge.
Carroll, JB (ed.) 1956, Language, Thought and Reality: Selected Writings of Benjamin Lee Whorf, MIT Press, Cambridge.
Chomsky, N 1965, Aspects of the Theory of Syntax, M.I.T. Press, Cambridge, MA.
Chomsky, N n.d., Two Quotes from Chomsky’s Review of B.F. Skinner’s Verbal Behavior, viewed April 2011, http://cseweb.ucsd.edu/~yfreund/consciousness/collins.behaviorism.pdf.
Davis, R, Shrobe, H & Szolovits, P 1993, What is a Knowledge Representation?, viewed April 2011, http://groups.csail.mit.edu/medg/ftp/psz/k-rep.html.
Frost, L & Bondy, A 2006, ‘A Common Language: Using B.F. Skinner’s Verbal Behavior’, Journal of Speech and Language Pathology, vol 1, no. 2, pp. 103-10.
Frohmann, B 1992, ‘The Power of Images: A Discourse Analysis of the Cognitive Viewpoint’, Journal of Documentation, vol 48, no. 4, pp. 365-86.
Gee, J 2010, How to Do Discourse Analysis: A Toolkit, Taylor and Francis, New York.
Harrison, R 2002, Hobbes, Locke and Confusion’s Masterpiece, Cambridge University Press, Cambridge.
Jesson, R 2010, ‘Intertextuality as a Conceptual Tool for the Teaching of Writing’, unpublished PhD dissertation, University of Auckland.
Linguistic Relativity Hypothesis 2003, viewed March 2011, http://plato.stanford.edu/entries/relativism/supplement2.html.
Lyons, N (ed.) 2010, Handbook of Reflection and Reflective Inquiry, Springer, New York.
McCarthy, M 1991, Discourse Analysis for Language Teachers, Cambridge University Press, Cambridge.
Michael, J 1984, ‘Verbal Behavior’, Journal of the Experimental Analysis of Behavior, vol 42, no. 3, pp. 363-76.
Ottenheimer, H 2006, The Anthropology of Language: An Introduction to Linguistic Anthropology, Thomas Wadsworth, Toronto, Canada.
Porter, J 1992, Audience and Rhetoric: An Archaeological Composition of the Discourse Community, Prentice Hall, New Jersey.
Skinner, BF 1992, Verbal Behavior, Copley Publishing Company, New York.
Stubbs, M 1984, Discourse Analysis: The Sociolinguistic Analysis of Natural Language, University of Chicago Press, Chicago, IL.
Swales, J 1990, Genre Analysis: English in Academic and Research Settings, Cambridge University Press, Cambridge.
The Handbook of Discourse Analysis 2003, Blackwell Publishing, Malden, MA.
Tomasello, M 2008, Origins of Human Communication, MIT Press, Cambridge, MA.
GNSS systems were originally designed for Earth-based positioning and navigation. Despite this, real-time spacecraft navigation based on spaceborne GNSS receivers is becoming a common technique for low-Earth and geostationary orbits, allowing satellites to determine their own position and reducing dependence on ground-based stations.
The space community started experimenting with spaceborne receivers very early in the deployment of the GPS network. The first spaceborne GNSS receiver was deployed on Landsat 4 on July 16th, 1982. The GPSPAC receiver flown on Landsat 4 was also deployed on Landsat 5 and two other US Department of Defense missions, and despite the small number of GPS satellites then in orbit (only six Block I satellites at the time), the GPSPAC was able to demonstrate the feasibility of using GNSS for space navigation.
Operational Constraints of Space-borne Receivers
The space environment differs from the terrestrial environment in ways that don't allow us to assume that a receiver working flawlessly on the ground will work properly in space.
The first big difference is the spacecraft's velocity, and especially the larger relative velocity between the GNSS receiver and the GNSS satellites. Higher velocities induce larger Doppler shifts, so the receiver has to scan a wider range of frequencies to acquire the GNSS signal. At these velocities the receiver will also track each satellite for a shorter period, and the set of visible satellites will change faster. This may require a different selection logic and tracking loop design in order to cope with this more dynamic environment. In general this increases the time required for signal acquisition, and some terrestrial receivers are limited by design to work below certain velocities, since a terrestrial receiver is not expected to reach the velocities achieved by a spacecraft.
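To get a feel for the scale of the problem, here is a minimal sketch estimating the worst-case GPS L1 carrier Doppler shift for a terrestrial receiver versus a LEO receiver. The velocity figures are rough, assumed values for illustration, not mission data.

```python
# Rough estimate of worst-case GPS L1 carrier Doppler shift.
# The line-of-sight velocities below are ballpark assumptions.
C = 299_792_458.0      # speed of light, m/s
F_L1 = 1_575.42e6      # GPS L1 carrier frequency, Hz

def max_doppler(relative_velocity_ms: float) -> float:
    """First-order Doppler shift for a given line-of-sight velocity."""
    return F_L1 * relative_velocity_ms / C

# A ground receiver mostly sees the GPS satellite's own motion
# (~0.9 km/s line-of-sight at most); a LEO spacecraft adds its
# ~7.8 km/s orbital velocity on top of that.
for label, v in [("terrestrial", 0.9e3), ("LEO spacecraft", 8.7e3)]:
    print(f"{label:15s} max Doppler ~ +/-{max_doppler(v) / 1e3:.1f} kHz")
# terrestrial     max Doppler ~ +/-4.7 kHz
# LEO spacecraft  max Doppler ~ +/-45.7 kHz
```

The roughly tenfold wider frequency search window suggested by these numbers is one reason signal acquisition takes longer in orbit.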
Orbit altitude and geometry are also a constraining factor for space-borne receivers. GPS satellites orbit the Earth at an altitude of roughly 20 200 km, and the GNSS signal is transmitted by directional antennas pointed toward Earth, with a beamwidth sized to cover the Earth's disc. For Low Earth Orbit[footnotes 1] spacecraft, signal reception conditions are roughly the same as in terrestrial applications. As a spacecraft climbs through Medium Earth Orbit toward the altitude of the GPS satellites, signal acquisition becomes increasingly difficult, since the spacecraft is more likely to fall outside the coverage cone of the GPS satellite signal and may need to track weaker signals from the antenna sidelobes, or satellites on the opposite side of the Earth that are not obstructed by it. For Medium Earth Orbits above the GPS constellation, High Earth Orbits, or Highly Elliptical Orbits, the problem becomes even harsher, since only the weaker sidelobe signals and the signals from satellites on the opposite side of the Earth are available.
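This coverage geometry can be made concrete with a little trigonometry: from the GPS orbital altitude, the half-angle of the cone that just covers the Earth's disc follows directly. The sketch below uses approximate, commonly quoted values; the main-lobe half-angle in particular is an assumed figure for illustration.

```python
import math

R_EARTH = 6_378.0    # equatorial radius, km
GPS_ALT = 20_200.0   # approximate GPS orbital altitude, km

# Half-angle, seen from a GPS satellite, of the cone that just grazes
# the Earth's limb.
limb_half_angle = math.degrees(math.asin(R_EARTH / (R_EARTH + GPS_ALT)))
print(f"Earth-limb half-angle: {limb_half_angle:.1f} deg")   # ~13.9 deg

# The L1 main lobe is often quoted at roughly 21 deg half-angle, so a
# receiver above the constellation can only use the narrow ring of
# main-lobe signal spilling past the Earth's limb (plus the sidelobes).
MAIN_LOBE_HALF_ANGLE = 21.3   # assumed illustrative value, deg
spill = MAIN_LOBE_HALF_ANGLE - limb_half_angle
print(f"Main-lobe signal spilling past the limb: ~{spill:.1f} deg")
```

That narrow spill band, together with the much weaker sidelobe signals, is what high-altitude missions must rely on.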
Usually, terrestrial receiver antennas have a hemispherical reception pattern, since the GNSS signals come from the sky and ground-based vehicles, and even aircraft, rarely bank more than 45 degrees. For space-borne vehicles the reception needs to be omnidirectional, with the exception of particular cases where the spacecraft's attitude is somehow stabilized in relation to the GNSS constellation. To achieve omnidirectional reception the most common approach is the use of multiple antennas, although other solutions, such as modifying the antenna gain pattern, can also be used. While space doesn't generally present external sources of signal reflection causing multipath, multipath can be self-induced by reflection off the spacecraft's own surface. In docking maneuvers or formation flying there may also be multipath problems due to the reflection of the GNSS signal off the other spacecraft's surfaces. This may have an impact on antenna placement, and mitigation techniques could be required.
Like any spaceborne device, a space GNSS receiver has very strict constraints regarding size, shape, weight, power consumption, and overall robustness to the space environment. It should be as small and light as possible, with reduced power consumption. Shape constraints may differ from mission to mission, and the receiver should be robust enough to survive the vibration load at liftoff and, eventually, re-entry. Space receivers should also be able to withstand ionizing radiation doses of 9 to 100 or more kilorads.
Precise Orbit Determination
One of the first scientific applications of GNSS was to precisely determine the position of fixed ground antennas in order to study the dynamics of the Earth's surface. It was soon realised that in order to obtain the best results it would be necessary to compute very precise orbits of the GPS satellites. A number of groups started doing this and, as a result, the first orbits that were precisely obtained using GNSS were those of the GPS satellites themselves.
Detailed information about Precise Orbit Determination can be found here.
From the very beginning it was realized that GNSS systems could also be used for a wide range of scientific and other civil applications. New tracking methods that were not foreseen by the original developers of the systems, like carrier tracking, were proposed and, as soon as it was possible, successfully tested and used. One of the applications that was soon envisioned was the use of GPS for navigation of spacecraft. The first onboard receiver was installed and flown in a Landsat satellite even before the complete GPS constellation was deployed. Since that time, more receivers have been flown on satellites, at first as a demonstration of increasingly precise uses and now as the main operational means of navigation.
Detailed information about Satellite Realtime Navigation can be found here.
Satellite Formation Flying
All space missions are difficult. Docking a pair of spacecraft is tough, but flying multiple satellites together in formation is the real cutting edge. In formation flying, separate expensive pieces of hardware, each one zipping through space at several kilometres per second, may have to manoeuvre to within metres of each other to achieve their goals.
The relative positions of the satellites must be maintained precisely as they close in: lose control of one part of the formation, even momentarily, and the satellites risk destruction. And orbital dynamics dictate the satellites’ orbits will tend to cross as they circle Earth, another worrying factor for their controllers.
Detailed information about Satellite Formation Flying can be found here.
- ^ between 160 and 2,000 km altitude
- ^ Innovative satellite navigation receivers for space applications, ESA Portal, February 16th 2006
- ^ Landsat-4 and -5, eoPortal
- ^ GNSS Applications and Methods - Chapter 13 - Space Applications, E. Glenn Lightsey, Artech House
- ^ Satellite Navigation Using GPS, T.J. Martín Mur & J.M. Dow, ESA Bulletin Nr. 90, May 1997
- ^ Simulating the formation-flying future of space, ESA Portal, September 2010
Students will explore multi-digit numbers and the relationship between ones, tens and hundreds; a digit in one place is 10x the digit in the place to its right. Students will use their bodies to represent digits in multi-digit numbers up to the hundreds place and compare these numbers using <, =, >. Students will use their bodies as multi-digit numbers to add and subtract.
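As a quick sketch of the ten-to-one place value relationship the lesson targets, the following decomposes a number into digit/place pairs (the sample number is arbitrary):

```python
def place_values(n: int):
    """Decompose a non-negative integer into (digit, place value) pairs,
    showing that each place is worth 10x the place to its right."""
    pairs, place = [], 1
    while n:
        n, digit = divmod(n, 10)
        pairs.append((digit, place))
        place *= 10
    return pairs[::-1]

print(place_values(427))   # [(4, 100), (2, 10), (7, 1)]
```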
This Demonstration illustrates the concept of rotating a 2D polygon. The rotation matrix is displayed for the current angle. The default polygon is a square that you can modify.
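For reference, here is a minimal sketch of the rotation such a Demonstration visualizes; the square's vertices and the 90-degree angle are assumed example inputs, not values taken from the Demonstration itself.

```python
import math

def rotate(points, theta):
    """Rotate 2D points about the origin by theta radians using the
    standard rotation matrix [[cos, -sin], [sin, cos]]."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

# Rotate a square by 90 degrees; each vertex moves one quarter-turn
# counterclockwise.
square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
print(rotate(square, math.pi / 2))
# approximately [(-1, 1), (-1, -1), (1, -1), (1, 1)]
```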
This lesson is about trying to get students to make connections between ideas about equations, inequalities, and expressions. The lesson is designed to give students opportunities to use mathematical vocabulary for a purpose: to describe, discuss, and work with these symbol strings. The idea is for students to start gathering global information by looking at the whole number string rather than thinking only about individual procedures or steps. Hopefully students will begin to see the symbol strings as mathematical objects with their own unique set of attributes. (7th Grade Math)
This lesson is based on the results of a performance task in which we realized that students' understanding of area and perimeter was mostly procedural. Therefore the purpose of this re-engagement lesson was to address student misconceptions and deepen student understanding of area and perimeter. The standards addressed in this lesson involve finding perimeter and area of various shapes, finding the perimeter when given a fixed area, and using a formula in a practical context. Challenges for our students included decoding the language in the problem and proving their thinking. (7th Grade Math)
The foundation of this lesson is constructing, communicating, and evaluating student-generated tables while making comparisons between three different financial plans. Students are given three different DVD rental plans and asked to analyze each one to see if they could determine when the 3 different DVD plans cost the same amount of money, if ever. (7th/8th Grade Math)
A teacher's guide on teaching the connection between the definition and equation of a parabola, and how to get from one to the other.
This lesson unit is intended to help you assess how well students are able to: solve simple problems involving ratio and direct proportion; choose an appropriate sampling method; and collect discrete data and record them using a frequency table.
This page documents ISKME’s 2013-2014 Open Educational Resources (OER) Fellowship Program which mentors educational leaders to champion OER into classrooms, school districts, and communities. The program runs from September until October and includes eight math teachers from Doha, Qatar.
The Indian Ocean Basin is becoming an important topic in middle and high school world history and geography courses, but one for which there are few instructional resources. This web-based resource helps teachers incorporate the Indian Ocean into world history studies by illustrating a variety of interactions that took place in the Indian Ocean during each era. The material is assembled into an integrated and user-friendly teaching tool for students in upper elementary, middle and high school. It offers students the chance to investigate primary sources that illustrate historical interactions, helping them to become more adept at the analytical historical thinking skills that are required by virtually all state history standards today.
If two inscribed angles intercept the same arc, then the angles are equal. Drag the orange points to change the figure.
In this packet we look at works that span nearly a thousand years — from shortly after the foundation of Islam in the seventh century to the seventeenth century, when the last two great Islamic empires — the Ottoman and the Safavid — had reached their peak. Although the definition of Islamic art usually includes work made in Mughal India, it is beyond the scope of this packet. The works we will look at here come from as far west as Spain and as far east as Afghanistan.
A dynamically simplified solar system is constructed from online data to explore the real solar system on many different scales.
A realistically scaled solar system is surprising because almost nothing is visible: the sizes and distances involved span many different scales. That is why it is usually rescaled in animations or illustrations. This is convenient, but it gives us a wrong sense of distances and sizes. This Demonstration is intended to show the solar system's different scales in their full glory.
Since it is hardly possible to see anything when the real scales are used, controls have been added to modify the sizes of the celestial bodies.
This kit covers stereotyping of Arab people, the Arab/Israeli conflict, the war in Iraq and militant Muslim movements. Students will learn core information and vocabulary about the historical and contemporary Middle East issues that challenge stereotypical, simplistic and uninformed thinking, and political and ethical issues involving the role of media in constructing knowledge, evaluating historical truths, and objectivity and subjectivity in journalism.
A collaboration between the National Aeronautics and Space Administration (NASA) and the CK-12 Foundation, this book provides high school mathematics and physics teachers with an introduction to the main principles of modeling and simulation used in science and engineering. An appendix of lesson plans is included.
This lesson is a re-engagement lesson designed for learners to revisit a problem-solving task they have already experienced. Students will activate prior knowledge of graphical representations through the 'what's my rule' number talk; compare and contrast two different learners' interpretations of the growing pattern; use multiple representations to demonstrate how one of these learners would represent the numeric pattern; make connections between the different representations to more critically compare the two interpretations. (5th/6th Grade Math)
The goals of the International OER Exchange Pilot project are to: facilitate the development and use of Open Educational Resources (OER) by teachers and students globally, track the development and use of the science learning materials and data collection, especially around climate change study, created in the project through OER Commons, and highlight the process and results through workshops and conference presentations.The broader purpose of the project is to support the international exchange of information and understanding through freely available resources among teachers and students, especially in the area of environmental science and climate change investigation.
This lesson is about properties of quadrilaterals and learning to investigate, formulate, conjecture, justify, and ultimately prove mathematical theorems. Students will: Analyze characteristics and properties of two- and three-dimensional geometric shapes; develop mathematical arguments about geometric relationships; and apply appropriate techniques, tools, and formulas to determine measurements. Explore relationships among classes of two- and three-dimensional geometric objects, make and test conjectures about them, and solve problems involving them. Employ forms of mathematical reasoning and proof appropriate to the solution of the problem at hand, including deductive and inductive reasoning, making and testing conjectures, and using counterexamples and indirect proof. Identify, formulate and confirm conjectures. Establish the validity of geometric conjectures using deduction, prove theorems, and critique arguments made by others. (9th/10th Grade Math)
This lesson is about ratios and proportions using candy boxes as well as a recipe for making candy as situations to be considered. It addresses many Mathematical Reasoning standards and asks students to: Use models to understand fractions and to solve ratio problems; think about a ratio as part/part model and to think about the pattern growing in equal groups or a unit composed of the sum of the parts; find a scale factor and apply it to a ratio. (5th Grade Math)
This lesson focuses on students making decisions about what tools to apply to solve different problems related to quadratic expressions and equations. It is also intended to build awareness of the form an answer will take in order to help students make sense of the kind of problem they are solving. (9th/10th/11th Grade Math)
The Read Arabic! Internet lessons were developed at the National Foreign Language Center (NFLC) at the University of Maryland primarily with high school students of Arabic in mind; however, the materials can also be used for those in college at the basic and intermediate level as well. The website assumes knowledge of the Arabic alphabet and how to read. In addition to lessons, the website includes a basic overview of the Arabic language in English, from its history to modern usage, and learning suggestions.
This booklet is a collection of opinions of nearly 50 important poets from 25 countries in 5 continents on the best ways to present poetry to secondary school pupils. It is mainly intended for use in teacher training programmes, to bring to methods of teaching poetry two important dimensions: the creative perspective of poets themselves, as well as the perspective of different cultures regarding the reading and writing of poetry.
This is a summary and example of my experiences and successes "flipping" my Algebra I classroom in the 2012-13 school year. An example of the materials a student would encounter in a given 24 hour period are included.
Algebra students need practice determining equations of lines given a pair of points, or the line parallel or perpendicular to a given line through a given point. This Demonstration, along with guiding worksheets or a teacher presentation, gives students a chance to see the relationships between these lines and points.
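As a sketch of the relationships such practice involves, the following computes the line through two points and the slope of a perpendicular line; all point values are made-up examples.

```python
def line_through(p, q):
    """Slope m and intercept b of y = m*x + b through points p and q
    (assumes the line is not vertical)."""
    (x1, y1), (x2, y2) = p, q
    m = (y2 - y1) / (x2 - x1)
    return m, y1 - m * x1

m, b = line_through((1, 2), (3, 8))
print(f"through (1,2) and (3,8): y = {m:g}x {b:+g}")   # y = 3x -1

# A perpendicular line has the negative reciprocal slope; a parallel
# line keeps the same slope and only the intercept changes.
print(f"perpendicular slope: {-1 / m:g}")              # -0.333333
```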
The mission of Understanding Science is to provide a fun, accessible, and free resource that accurately communicates what science is and how it really works. The process of science is exciting, but standard explanations often miss its dynamic nature. Science affects us all everyday, but people often feel cut off from science. Science is an intensely human endeavor, but many portrayals gloss over the passion, curiosity, and even rivalries and pitfalls that characterize all human ventures. Understanding Science gives users an inside look at the general principles, methods, and motivations that underlie all of science. This project has at its heart a re-engagement with science that begins with teacher preparation and ends with broader public understanding. Its immediate goals are to (1) improve teacher understanding of the nature of the scientific enterprise, (2) provide resources and strategies that encourage and enable K-16 teachers to reinforce the nature of science throughout their science teaching, and (3) provide a clear and informative reference for students and the general public that accurately portrays the scientific endeavor. The Understanding Science site was produced by the UC Museum of Paleontology of the University of California at Berkeley, in collaboration with a diverse group of scientists and teachers, and was funded by the National Science Foundation. Understanding Science was informed and initially inspired by our work on the Understanding Evolution project, which highlighted the fact that many misconceptions regarding evolution spring from misunderstandings of the nature of science. Furthermore, research indicates that students and teachers at all grade levels have inadequate understandings of the nature and process of science, which may be traced to classrooms in which science is taught as a simple, linear, and non-generative process. This false and impoverished depiction disengages students, discourages public support, and may help explain current indications that the U.S. is losing its global edge in science. Even beyond the health of the U.S. economy, the public has a genuine need to critically assess conflicting representations of scientific evidence in the media. To do this, they need to understand the strengths, limitations, and basic methods of the enterprise that has produced those claims. Understanding Science takes an important step towards meeting these needs.
The VAVA is a collection of royalty-free audio and video files for teachers to use in their own creative exercises. We have also developed a small number of sample exercises that utilize material from the VAVA. The LCTL Project encourages teachers of all LCTLs to cooperate in developing new VAVA exercises using audio or video materials. Individual exercises might be very simple listening practice, or they might be more complex, integrating sounds and video clips into reading, writing, speaking and listening activities for students. The VAVA currently contains audio for the following languages: Arabic (Tunisia), Chinese (Mainland and Taiwan), Hebrew, Norwegian, Polish.
This lesson focuses on the music and poetry of Afghanistan, but teachers may conduct an analysis on global music in any given period of history, depending on what is pertinent to the grade level. Students will take into consideration important political events or conflicts, the ruling party of the area, the belief systems in place, and specific cultural features. Students will also learn to identify traditional musical instruments, consider the value of oral traditions, study the ghazal as a form of poetry and song, while creating their own musical works and poetry.
FreeReading is an open source instructional program that helps educators teach early literacy. Because it is open source, it represents the collective wisdom of a wide community of teachers and researchers. FreeReading contains Writing Activities, a page of activities to address important writing skills and strategies.
Students will explore place value concepts using their bodies as tools. They will perform timed movement tasks such as jumping and sit-ups, and will use the numbers they record from those activities in their exploration. While working in groups, they will practice adding, subtracting, and comparing numbers. They will also invent creative ways to represent numbers using the properties of arithmetic operations and place value rules.
Students will explore the concept of two-digit numbers by performing body movements that represent either the tens digits (places) or the ones digits (places). Students will use their bodies to compare two two-digit numbers, using the symbols <, =, > to compare the different values.
Crew members aboard the International Space Station will begin conducting research this week to improve the way we grow crystals on Earth. The information gained from the experiments could speed up the process for drug development, benefiting humans around the world.
Proteins serve an important role within the human body. Without them, the body wouldn’t be able to regulate, repair or protect itself. Many proteins are too small to be studied even under a microscope, and must be crystallized in order to determine their 3-D structures. These structures tell researchers how a single protein functions and its involvement in the development of disease. Once modeled, drug developers can use the structure to develop a specific drug to interact with the protein, a process called structure-based drug design.
Two investigations, The Effect of Macromolecular Transport on Microgravity Protein Crystallization (LMM Biophysics 1) and Growth Rate Dispersion as a Predictive Indicator for Biological Crystal Samples Where Quality Can be Improved with Microgravity Growth (LMM Biophysics 3), will study the formation of these crystals, looking at why microgravity-grown crystals often are of higher quality than Earth-grown, and which crystals may benefit from being grown in space.
Rate of Growth – LMM Biophysics 1
Researchers know that crystals grown in space often contain fewer imperfections than those grown on Earth, but the reasoning behind that phenomenon isn’t crystal clear. A widely accepted theory in the crystallography community is that the crystals are of higher quality because they grow slower in microgravity due to a lack of buoyancy-induced convection. The only way these protein molecules move in microgravity is by random diffusion, a process that is much slower than movement on Earth.
Another less-explored theory is that a higher level of purification can be achieved in microgravity. A pure crystal may contain thousands of copies of a single protein. Once crystals are returned to Earth and exposed to an X-ray beam, the X-ray diffraction pattern can be used to mathematically map a protein’s structure.
“When you purify proteins to grow crystals, the protein molecules tend to stick to each other in a random fashion,” said Lawrence DeLucas, LMM Biophysics 1 primary investigator. “These protein aggregates can then incorporate into the growing crystals causing defects, disturbing the protein alignment, which then reduces the crystal’s X-ray diffraction quality.”
The theory states that in microgravity, a dimer, or two proteins stuck together, will move much slower than a monomer, or a single protein, giving aggregates less opportunity to incorporate into the crystal.
“You’re selecting out for predominantly monomer growth, and minimizing the amount of aggregates that are incorporated into the crystal because they move so much more slowly,” said DeLucas.
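The intuition that aggregates move more slowly can be given rough numbers with the Stokes-Einstein relation, D = kT / (6πηr): a dimer's larger hydrodynamic radius implies a smaller diffusion coefficient. The sketch below uses assumed, order-of-magnitude values for a small protein in water; none of the numbers come from the LMM Biophysics experiments.

```python
import math

# Stokes-Einstein diffusion estimate; all inputs are assumed,
# order-of-magnitude values for illustration only.
K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 293.0            # temperature, K (~20 C)
ETA = 1.0e-3         # viscosity of water, Pa*s

def stokes_einstein(radius_m: float) -> float:
    """Diffusion coefficient of a sphere of given radius in water."""
    return K_B * T / (6 * math.pi * ETA * radius_m)

r_monomer = 2.0e-9                    # ~2 nm hydrodynamic radius
r_dimer = r_monomer * 2 ** (1 / 3)    # sphere of twice the volume

d_mono, d_dim = stokes_einstein(r_monomer), stokes_einstein(r_dimer)
print(f"monomer D ~ {d_mono:.1e} m^2/s, dimer D ~ {d_dim:.1e} m^2/s")
print(f"dimer diffuses ~{d_mono / d_dim:.2f}x slower")   # ~1.26x
```

Even this modest difference in diffusive transport, with convection largely absent in microgravity, could bias growth toward incorporating monomers.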
The LMM Biophysics 1 investigation will put these two theories to the test, to try to understand the reason(s) microgravity-grown crystals are often of superior quality and size compared to their Earth-grown counterparts. Improved X-ray diffraction data result in a more precise protein structure, thereby enhancing our understanding of the protein’s biological function and supporting future drug discovery.
Crystal Types – LMM Biophysics 3
As LMM Biophysics 1 studies why space-grown crystals are of higher quality than Earth-grown crystals, LMM Biophysics 3 will take a look at which crystals may benefit from crystallization in space. Research has found that only some proteins crystallized in space benefit from microgravity growth. The shape and surface of the protein that makes up a crystal define its potential for success in microgravity.
“Some proteins are like building blocks,” said Edward Snell, LMM Biophysics 3 primary investigator. “It’s very easy to stack them. Those are the ones that won’t benefit from microgravity. Others are like jelly beans. When you try and build a nice array of them on the ground, they want to roll away and not be ordered. Those are the ones that benefit from microgravity. What we’re trying to do is distinguish the blocks from the jelly beans.”
Understanding how different proteins crystallize in microgravity will give researchers a deeper view into how these proteins function, and help to determine which crystals should be transported to the space station for growth.
“We’re maximizing the use of a scarce resource, and making sure that every crystal we put up there benefits the scientists on the ground,” said Snell.
These crystals could be used in drug development and disease research around the world. |
Space weathering is a blanket term used for a number of processes that act on any body exposed to the harsh environment of outer space. Airless bodies (including the Moon, Mercury, the asteroids, comets, and some of the moons of other planets) incur many weathering processes:
- collisions of galactic cosmic rays and solar cosmic rays,
- irradiation, implantation, and sputtering from solar wind particles, and
- bombardment by different sizes of meteorites and micrometeorites.
Space weathering is important because these processes affect the physical and optical properties of the surface of many planetary bodies. Therefore, it is critical to understand the effects of space weathering in order to properly interpret remotely sensed data.
Much of our knowledge of the space weathering process comes from studies of the lunar samples returned by the Apollo program, particularly the lunar soils (or regolith). The constant flux of high energy particles and micrometeorites, along with larger meteorites, act to comminute, melt, sputter and vaporize components of the lunar soil, as well as to garden (or overturn) it.
The first products of space weathering that were recognized in lunar soils were "agglutinates". These are created when micrometeorites melt a small amount of material, which incorporates surrounding glass and mineral fragments into a glass-welded aggregate ranging in size from a few micrometers to a few millimeters. Agglutinates are very common in lunar soil, accounting for as much as 60 to 70% of mature soils. These complex and irregularly-shaped particles appear black to the human eye, largely due to the presence of nanophase iron.
Space weathering also produces surface-correlated products on individual soil grains, such as glass splashes; implanted hydrogen, helium and other gases; solar flare tracks; and accreted components, including nanophase iron. It wasn't until the 1990s that improved instruments, in particular transmission electron microscopes, and techniques allowed for the discovery of very thin (60-200 nm) patinas, or rims, which develop on individual lunar soil grains as a result of vapor redeposited from nearby micrometeorite impacts and of material sputtered from nearby grains.
These weathering processes have large effects on the spectral properties of lunar soil, particularly in the ultraviolet, visible, and near infrared (UV/Vis/NIR) wavelengths. These spectral changes have largely been attributed to the inclusions of "nanophase iron" which is a ubiquitous component of both agglutinates and soil rims. These very small (one to a few hundred nanometers in diameter) blebs of metallic iron are created when iron-bearing minerals (e.g. olivine and pyroxene) are vaporized and the iron is liberated and redeposited in its native form.
Effects on spectral properties
On the Moon, the spectral effects of space weathering are threefold: as the lunar surface matures it becomes darker (the albedo is reduced), redder (reflectance increases with increasing wavelength), and the depths of its diagnostic absorption bands are reduced. These effects are largely due to the presence of nanophase iron in both the agglutinates and in the accreted rims on individual grains. The darkening effects of space weathering are readily seen by studying lunar craters. Young, fresh craters have bright ray systems because they have exposed fresh, unweathered material, but over time those rays fade as the weathering process darkens the material.
Space weathering on asteroids
Space weathering is also thought to occur on asteroids, though the environment is quite different from the Moon's. Impacts in the asteroid belt are slower, and therefore create less melt and vapor. Also, fewer solar wind particles reach the asteroid belt. Finally, the higher rate of impacts and the lower gravity of these smaller bodies mean that there is more overturn, and surface exposure ages should be younger than those of the lunar surface. Therefore, space weathering should occur more slowly and to a lesser degree on the surfaces of asteroids.
However, we do see evidence for asteroidal space weathering. For years there was a so-called "conundrum" in the planetary science community because, in general, the spectra of asteroids do not match the spectra of our collection of meteorites. In particular, the spectra of S-type asteroids did not match the spectra of the most abundant type of meteorites, ordinary chondrites (OCs). The asteroid spectra tended to be redder, with a steep curvature in the visible wavelengths. However, Binzel et al. have identified near-Earth asteroids with spectral properties covering the range from S-type to spectra similar to those of OC meteorites, suggesting an ongoing process that can alter the spectra of OC material to look like S-type asteroids. There is also evidence of regolith alteration from Galileo's flybys of Gaspra and Ida, which showed spectral differences at fresh craters. With time, the spectra of Ida and Gaspra appear to redden and lose spectral contrast. Evidence from NEAR Shoemaker's X-ray measurements of Eros indicates an ordinary chondrite composition despite a red-sloped, S-type spectrum, again suggesting that some process has altered the optical properties of the surface. Results from the Hayabusa spacecraft at the asteroid Itokawa, also ordinary chondrite in composition, show spectral evidence of space weathering. In addition, definitive evidence of space weathering alteration has been identified in the grains of soil returned by the Hayabusa spacecraft. Because Itokawa is so small (550 m diameter), it was thought that its low gravity would not allow the development of a mature regolith; however, preliminary examination of the returned samples reveals the presence of nanophase iron and other space weathering effects on several grains. In addition, there is evidence that weathering patinas can and do develop on rock surfaces on the asteroid. Such coatings are likely similar to the patinas found on lunar rocks.
There is evidence to suggest that most of the color change due to weathering occurs rapidly, within the first hundred thousand years, limiting the usefulness of spectral measurements for determining the ages of asteroids.
Space weathering on Mercury
The environment at Mercury also differs substantially from that at the Moon. For one thing, it is significantly hotter during the day (daytime surface temperature is ~100 °C on the Moon and ~425 °C on Mercury) and colder at night, which may alter the products of space weathering. In addition, because of its location in the solar system, Mercury is also subjected to a slightly larger flux of micrometeorites that impact at much higher velocities than at the Moon. These factors combine to make Mercury much more efficient than the Moon at creating both melt and vapor. Per unit area, impacts on Mercury are expected to produce 13.5 times the melt and 19.5 times the vapor produced on the Moon. Agglutinitic glass-like deposits and vapor-deposited coatings should therefore be created significantly faster and more efficiently on Mercury than on the Moon.
The UV/Vis spectrum of Mercury, as observed telescopically from Earth, is roughly linear, with a red slope. There are no absorption bands related to Fe-bearing minerals, such as pyroxene. This means that either there is no iron on the surface of Mercury, or else the iron in the Fe-bearing minerals has been weathered to nanophase iron. A weathered surface would then explain the reddened slope.
- Heiken, Grant (1991). Lunar sourcebook : a user's guide to the moon (1. publ. ed.). Cambridge [u.a.]: Cambridge Univ. Press. ISBN 978-0-521-33444-0.
- Keller, L. P; McKay, D. S. (June 1997). "The nature and origin of rims on lunar soil grains". Geochimica et Cosmochimica Acta 61 (11): 2331–2341. Bibcode:1997GeCoA..61.2331K. doi:10.1016/S0016-7037(97)00085-9.
- Noble, Sarah; Pieters C. M.; Keller L. P. (September 2007). "An experimental approach to understanding the optical effects of space weathering". Icarus 192: 629–642. Bibcode:2007Icar..192..629N. doi:10.1016/j.icarus.2007.07.021.
- Pieters, C. M.; Fischer, E. M.; Rode, O.; Basu, A. (1993). "Optical Effects of Space Weathering: The Role of the Finest Fraction". Journal of Geophysical Research 98 (E11): 20,817–20,824. Bibcode:1993JGR....9820817P. doi:10.1029/93JE02467. ISSN 0148-0227.
- For a thorough review of the current state of understanding of space weathering on asteroids, see Chapman, Clark R. (May 2004). "Space Weathering of Asteroid Surfaces". Annual Review of Earth and Planetary Sciences 32: 539–567. Bibcode:2004AREPS..32..539C. doi:10.1146/annurev.earth.32.101802.120453.
- Binzel, R.P.; Bus, S.J.; Burbine, T.H.; Sunshine, J.M. (Aug 1996). "Spectral Properties of Near-Earth Asteroids: Evidence for Sources of Ordinary Chondrite Meteorites". Science 273 (5277): 946–948. Bibcode:1996Sci...273..946B. doi:10.1126/science.273.5277.946. PMID 8688076.
- Noguchi, T.; Nakamura, T.; Kimura, M.; Zolensky, M. E.; Tanaka, M.; Hashimoto, T.; Konno, M.; Nakato, A.; Ogami, T.; Fujimura, A.; Abe, M.; Yada, T.; Mukai, T.; Ueno, M.; Okada, T.; Shirai, K.; Ishibashi, Y.; Okazaki, R. (26 August 2011). "Incipient Space Weathering Observed on the Surface of Itokawa Dust Particles". Science 333: 1121–1125. Bibcode:2011Sci...333.1121N. doi:10.1126/science.1207794.
- Hiroi, Takahiro; Abe, M.; Kitazato, K.; Abe, S.; Clark, B.; Sasaki, S.; Ishiguro, M.; Barnouin-Jha, O. (7 September 2006). "Developing space weathering on the asteroid 25143 Itokawa". Nature 443 (7107): 56–58. Bibcode:2006Natur.443...56H. doi:10.1038/nature05073. PMID 16957724.
- Rachel Courtland (30 April 2009). "Sun damage conceals asteroids' true ages". New Scientist. Retrieved 27 February 2013.
- Cintala, Mark J. (Jan 1992). "Impact-Induced Thermal Effects in the Lunar and Mercurian Regoliths". Journal of Geophysical Research 97 (E1): 947–973. Bibcode:1992JGR....97..947C. doi:10.1029/91JE02207. ISSN 0148-0227.
- Hapke, Bruce (Feb 2001). "Space Weathering from Mercury to the asteroid belt". Journal of Geophysical Research 106 (E5): 10,039–10,073. Bibcode:2001JGR...10610039H. doi:10.1029/2000JE001338.
- Linda Martel (July 5, 2004). "New Mineral Proves an Old Idea about Space Weathering". Planetary Science Research Discoveries. |
Mathematics Grade 4
Strand: OPERATIONS AND ALGEBRAIC THINKING (4.OA)
Use the four operations with whole numbers (addition, subtraction, multiplication, and division) to solve problems (Standards 4.OA.1–3).
Gain familiarity with factors and multiples (Standard 4.OA.4).
Generate and analyze numeric and shape patterns (Standard 4.OA.5).
Multiply or divide to solve word problems involving multiplicative comparison, for example, by using drawings and equations with a symbol for the unknown number to represent the problem, distinguishing multiplicative comparison from additive comparison.
Chessboard Algebra and Function Machines
This Teaching Channel video shows how students can find the rule that determines the number of chessboard squares. (6 min.)
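The video's own solution is not reproduced here, but a common version of the chessboard-squares problem counts squares of every size on an n-by-n board: there are (n - k + 1)^2 squares of side k, and summing over k gives n(n + 1)(2n + 1)/6, which is 204 for a standard 8-by-8 board. A minimal Python sketch of that counting rule:

```python
# Count every square (of every size) on an n x n chessboard.
# A k x k square can start at any of (n - k + 1) positions along each edge,
# so there are (n - k + 1)**2 squares of side k; summing over k gives the total.
def count_squares(n):
    return sum((n - k + 1) ** 2 for k in range(1, n + 1))

# Equivalent closed form: n(n + 1)(2n + 1) / 6
def count_squares_formula(n):
    return n * (n + 1) * (2 * n + 1) // 6

print(count_squares(8))          # 204
print(count_squares_formula(8))  # 204
```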
Comparing Money Raised
The purpose of this task is for students to solve three comparison problems that are related by their context but are structurally different. In these multiplicative comparison problems, one factor and the product are amounts of money, and the other factor represents the number of times bigger one amount is than the other.
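As a rough illustration of that structure (with made-up numbers, not the task's actual values), the sketch below contrasts a multiplicative comparison, where one amount is a number of times as large as another, with an additive comparison, where one amount is a fixed amount more than another:

```python
# Multiplicative comparison (made-up example): "Helen raised $12, which is
# 3 times as much as Sam raised."  The equation is 3 * s = 12.
product = 12                # the larger amount of money, in dollars
times_as_much = 3           # how many times bigger it is
sam_multiplicative = product / times_as_much
print(sam_multiplicative)   # 4.0 -> Sam raised $4

# Additive comparison, for contrast: "Helen raised $3 more than Sam."
# The equation is s + 3 = 12.
difference = 3
sam_additive = product - difference
print(sam_additive)         # 9 -> Sam raised $9
```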
Grade 4 Mathematics Module 3: Multi-Digit Multiplication and Division
In this 43-day module, students use place value understanding and visual representations to solve multiplication and division problems with multi-digit numbers. As a key area of focus for Grade 4, this module moves slowly but comprehensively to develop students' ability to reason about the methods and models chosen to solve problems with multi-digit factors and dividends.
Grade 4 Mathematics Module 7: Exploring Measurement with Multiplication
In this 20-day module, students build their competencies in measurement as they relate multiplication to the conversion of measurement units. Throughout the module, students will explore multiple strategies for solving measurement problems involving unit conversion.
Grade 4 Unit 2: Multiplication and Division of Whole Numbers (Georgia Standards)
In this unit students will solve multi-step problems using the four operations, use estimation to solve multiplication and division problems, find factors and multiples, identify prime and composite numbers and generate patterns.
IXL Game: Mixed operations: Addition, subtraction, multiplication, and division word problems
This game helps fourth graders understand mixed operations: addition, subtraction, multiplication, and division word problems. This is just one of many online games that support the Utah Math core. Note: The IXL site requires a subscription for unlimited use.
Operations and Algebraic Thinking - Fourth Grade Core Guide
The Utah State Board of Education (USBE) and educators around the state of Utah developed these guides for Fourth Grade Mathematics - Operations and Algebraic Thinking (4.OA)
http://www.uen.org - in partnership with Utah State Board of Education (USBE) and Utah System of Higher Education (USHE). Send questions or comments to the USBE Specialist and see the Mathematics - Elementary website. For general questions about Utah's Core Standards, contact the Director, Jennifer Throndsen.
These materials have been produced by and for the teachers of the State of Utah. Copies of these materials may be freely reproduced for teacher and classroom use. When distributing these materials, credit should be given to the Utah State Board of Education. These materials may not be published, in whole or in part, or in any other format, without the written permission of the Utah State Board of Education, 250 East 500 South, PO Box 144200, Salt Lake City, Utah |
the Physics Education Technology Project
This simulation promotes understanding of isotopes by providing a simple way to model isotopes of the first 10 elements in the Periodic Table. In the most basic model, users click on an atomic symbol. The simulation displays a stable isotope for that atom. (For example, choose Helium and view a nucleus with two protons and two neutrons.) Now, drag neutrons into the nucleus and watch to see if the atom becomes unstable. Students may be surprised to see that Beryllium and Fluorine, for example, are unstable with equal numbers of protons and neutrons in the nucleus. Click on "Abundance in Nature" to see how common or rare a particular isotope is in nature. Mass number and Atomic Mass (amu) are displayed in real time.
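As a rough sketch of the calculation behind the displayed atomic mass, the average atomic mass shown on a periodic table is the abundance-weighted sum of the isotopic masses. The neon figures below are approximate natural abundances and isotopic masses, used only for illustration:

```python
# Average atomic mass = sum over isotopes of (fractional abundance x isotopic mass).
# Approximate values for neon's three stable isotopes; exact figures vary by source.
neon_isotopes = [
    # (mass number, isotopic mass in amu, natural abundance in percent)
    (20, 19.992, 90.48),
    (21, 20.994, 0.27),
    (22, 21.991, 9.25),
]

average_mass = sum(mass * (abundance / 100.0)
                   for _, mass, abundance in neon_isotopes)
print(round(average_mass, 2))  # ~20.18 amu, close to the value on a periodic table
```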
See Related Materials for a lesson plan developed specifically for use with the "Isotopes and Atomic Mass" simulation.
This item is part of a larger collection of simulations developed by the Physics Education Technology project (PhET).
Please note that this resource requires a Java Applet Plug-in.
Keywords: atom, atom simulation, atomic mass, elements, isotope, nuclear properties, radioactivity, stable element, unstable element
Metadata instance created July 18, 2011 by Caroline Hall; updated August 18, 2016 by Lyle Barbato
Last Update when Cataloged: July 8, 2011
AAAS Benchmark Alignments (2008 Version)
4. The Physical Setting
4D. The Structure of Matter
6-8: 4D/M1a. All matter is made up of atoms, which are far too small to see directly through a microscope.
6-8: 4D/M1b. The atoms of any element are like other atoms of the same element, but are different from the atoms of other elements.
9-12: 4D/H1. Atoms are made of a positively charged nucleus surrounded by negatively charged electrons. The nucleus is a tiny fraction of the volume of an atom but makes up almost all of its mass. The nucleus is composed of protons and neutrons which have roughly the same mass but differ in that protons are positively charged while neutrons have no electric charge.
9-12: 4D/H2. The number of protons in the nucleus determines what an atom's electron configuration can be and so defines the element. An atom's electron configuration, particularly the outermost electrons, determines how the atom can interact with other atoms. Atoms form bonds to other atoms by transferring or sharing electrons.
9-12: 4D/H3. Although neutrons have little effect on how an atom interacts with other atoms, the number of neutrons does affect the mass and stability of the nucleus. Isotopes of the same element have the same number of protons (and therefore of electrons) but differ in the number of neutrons.
9-12: 4D/H4. The nucleus of radioactive isotopes is unstable and spontaneously decays, emitting particles and/or wavelike radiation. It cannot be predicted exactly when, if ever, an unstable nucleus will decay, but a large group of identical nuclei decay at a predictable rate. This predictability of decay rate allows radioactivity to be used for estimating the age of materials that contain radioactive substances.
9-12: 4D/H5. Scientists continue to investigate atoms and have discovered even smaller constituents of which neutrons and protons are made.
11. Common Themes
6-8: 11B/M1. Models are often used to think about processes that happen too slowly, too quickly, or on too small a scale to observe directly. They are also used for processes that are too vast, too complex, or too dangerous to study.
6-8: 11B/M4. Simulations are often useful in modeling events and processes.
6-8: 11D/M3. Natural phenomena often involve sizes, durations, and speeds that are extremely small or extremely large. These phenomena may be difficult to appreciate because they involve magnitudes far outside human experience.
This resource is part of a Physics Front Topical Unit.
Topic: Particles and Interactions and the Standard Model
Unit Title: Molecular Structures and Bonding
PhET Simulation: Isotopes and Atomic Mass. Physics Education Technology Project, July 8, 2011. https://phet.colorado.edu/en/simulation/isotopes-and-atomic-mass (retrieved 23 May 2017).
This is a lesson plan appropriate for Grades 7-10, created specifically to accompany the "Isotopes and Atomic Mass" simulation. It is very effective at guiding beginning students through an exploration of the atomic nucleus and factors that affect the stability of an atom. |
The structure of an atom consists of a nucleus composed of protons and neutrons with electrons orbiting around this nucleus.
Electrons are much smaller than protons and neutrons and are attracted to the nucleus by the electromagnetic force. The nucleus itself is held together by the strong nuclear force, despite the fact that the protons are all positively charged and would otherwise repel each other. The electrons orbiting the nucleus occupy shells of different energy levels; the closer an electron is to the nucleus, the lower its energy.
Atoms bond together to form molecules by sharing electrons in order to achieve a more stable configuration.
Are electrons in covalent bonds always shared equally?
In a covalent bond, atoms share electrons in order to achieve stability. How strongly each atom pulls on those shared electrons is described by its electronegativity, or electron-pulling capacity.
If the atoms involved in the bond have different electronegativities, then the electrons will be pulled more strongly toward the atom with the greater electron-pulling capacity.
The electron pulling capacity of an atom is determined by its position on the periodic table.
Atoms with a higher electron-pulling capacity are more likely to pull electrons away from other atoms, resulting in an uneven distribution of electrons within the bond. This happens when one atom has a higher electronegativity than the other, so the electrons in a covalent bond are not always shared equally. For example, in a carbon-oxygen bond, the oxygen atom has the higher electronegativity and pulls electrons toward itself. As a result, the electrons in this bond are not shared equally.
Unequal sharing of electrons
Covalent bonds form when atoms share electrons with each other in order to fill their valence shells.
In a single covalent bond, each atom contributes one electron to the bond.
However, not all covalent bonds are equal. If the two atoms involved in the bond have different electronegativities, then the sharing will be unequal, with the shared electrons spending more time around the atom with the higher electronegativity.
This creates a dipole within the bond, where one end is slightly negative and the other is slightly positive. As a result, unequal covalent bonds are more reactive than equal covalent bonds, and they are often found in molecules that participate in chemical reactions.
Single and multiple covalent bonds
Atoms are held together by different types of bonds in order to form molecules.
When atoms share electrons to form covalent bonds, they are held together by the shared electron cloud. This cloud is negatively charged and is attracted to the positively charged nucleus of each atom. Generally, the more electron pairs that are shared, the stronger the bond will be.
The first type of bond is a single covalent bond. This occurs when two atoms share one pair of electrons. For example, hydrogen gas is made up of H2 molecules, each with a single covalent bond between the two hydrogen atoms.
The second type of bond is a double covalent bond. This occurs when two atoms share two pairs of electrons. For example, carbon dioxide is made up of CO2 molecules, each with a double covalent bond between the carbon atom and each oxygen atom.
The third type of bond is a triple covalent bond. This occurs when two atoms share three pairs of electrons. For example, nitrogen gas is made up of N2 molecules, each with a triple covalent bond between the two nitrogen atoms. These are the three main types of covalent bonds that occur in molecules.
The increased strength of double and triple covalent bonds also makes them more resistant to being broken by heat and other external stresses. This makes them valuable in materials where reliability and durability are important, such as many of those used in the construction of bridges and buildings.
Electronegativity Effect on Covalent Compounds
Electronegativity is a term used to describe the tendency of an atom to attract electrons to itself. The higher the electronegativity of an atom, the more it will pull electrons away from other atoms. Electronegativity is measured on a scale from 0.7 to 4.0, with fluorine being the most electronegative element and cesium being the least electronegative.
Electronegativity values are affected by a number of factors, including the size of the atom and the number of valence electrons. Generally speaking, smaller atoms whose nuclei exert a stronger pull on the valence electrons are more electronegative than larger atoms; on the periodic table, electronegativity tends to increase across a period and decrease down a group.
Atoms with high electronegativities will tend to steal electrons from atoms with low electronegativities. This can result in the formation of ions, which are atoms that have a net charge. Ions can be either positive or negative, depending on whether they have gained or lost electrons. Electronegativity can also affect the shape of molecules.
For example, water molecules are held together by the electrical attraction between the oxygen atom and the hydrogen atoms. The oxygen atom is much more electronegative than the hydrogen atoms, so it pulls the shared electrons closer to itself. Together with the repulsion from oxygen's lone pairs, this gives the water molecule its bent shape, with the oxygen atom at the center and a hydrogen atom at each end. Electronegativity is thus a critical property that determines how atoms and molecules interact with each other.
While electronegativity is a relatively simple concept, it has a wide range of applications.
It can also be used to explain why some substances are good conductors of electricity while others are not. In short, understanding electronegativity is essential for understanding the behavior of atoms and molecules.
Polar Vs Non-Polar Bond
When atoms share electrons in a chemical bond, they do so in order to achieve a more stable, lower energy state. The polarity of a bond is determined by the difference in electronegativity between the atoms involved.
When the difference is great, a polar covalent bond is formed; when it is small, a nonpolar covalent bond results.
Polar covalent bonds occur when the electronegativity difference between the atoms is between 0.5 and 1.7. In this case, the shared electron pair is pulled more strongly towards the atom with the higher electronegativity.
This creates a dipole, or opposite charges, within the molecule.
For example, water molecules are held together by polar covalent bonds. The electronegativity of oxygen (3.44) is much higher than that of hydrogen (2.2), so the shared electrons spend more time around the oxygen nucleus.
As a result, the oxygen end of the molecule has a partial negative charge, while the hydrogen end has a partial positive charge.
Nonpolar covalent bonds occur when the difference in electronegativity between atoms is less than 0.5. In this case, there is no significant difference in electron sharing, and the atoms share the electrons equally. These types of bonds are often seen in molecules made up of identical atoms, such as H2 or Cl2.
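A small sketch can make these cutoffs concrete. It uses approximate Pauling electronegativity values and the rough boundaries given above (a difference below about 0.5 is treated as nonpolar covalent, 0.5 to 1.7 as polar covalent); treating differences above about 1.7 as largely ionic is a common textbook convention added here for completeness, and the exact boundaries vary between sources:

```python
# Classify a bond from the electronegativity difference of its two atoms,
# using the rough cutoffs discussed above (boundaries vary between textbooks).
PAULING = {"H": 2.20, "C": 2.55, "N": 3.04, "O": 3.44, "F": 3.98,
           "Na": 0.93, "Cl": 3.16}  # approximate Pauling electronegativities

def classify_bond(a, b):
    diff = abs(PAULING[a] - PAULING[b])
    if diff < 0.5:
        kind = "nonpolar covalent"
    elif diff <= 1.7:
        kind = "polar covalent"
    else:
        kind = "largely ionic"
    return diff, kind

for pair in [("H", "H"), ("C", "H"), ("O", "H"), ("Na", "Cl")]:
    diff, kind = classify_bond(*pair)
    print(f"{pair[0]}-{pair[1]}: difference {diff:.2f} -> {kind}")
# H-H:   0.00 -> nonpolar covalent
# C-H:   0.35 -> nonpolar covalent
# O-H:   1.24 -> polar covalent
# Na-Cl: 2.23 -> largely ionic
```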
Polarity can also be determined by looking at the structure of a molecule.
Molecules can also be polar even if they have no polar bonds. This happens when the shape of the molecule causes the dipoles created by the polar bonds to cancel each other out. For example, CO2 is a linear molecule with two polar bonds, but the dipoles cancel each other out so the molecule is nonpolar.
Polar molecules must have at least one polar bond, but they can also have nonpolar bonds. In order for a molecule to be polar, the sum of all the bond dipoles must not equal zero. For example, NH3 is a polar molecule because the three N-H bond dipoles do not cancel each other out.
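One way to see why some bond dipoles cancel while others do not is to treat each bond dipole as a vector and add them. The sketch below is a simplified geometric illustration only: every bond dipole is given unit magnitude, and the bond angles are approximate:

```python
import math

# Add bond-dipole vectors of unit length pointing in the given directions
# (angles in degrees).  A zero resultant means the molecule is nonpolar overall.
def net_dipole(bond_angles_deg):
    x = sum(math.cos(math.radians(a)) for a in bond_angles_deg)
    y = sum(math.sin(math.radians(a)) for a in bond_angles_deg)
    return math.hypot(x, y)

# CO2 is linear: its two C->O bond dipoles point in opposite directions.
print(round(net_dipole([0, 180]), 2))  # 0.0 -> the dipoles cancel, nonpolar

# Water is bent (H-O-H angle of about 104.5 degrees): its dipoles do not cancel.
half_angle = 104.5 / 2
print(round(net_dipole([90 - half_angle, 90 + half_angle]), 2))  # ~1.22 -> polar
```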
Polarity is an important factor in determining the solubility of a molecule. Polar molecules tend to dissolve in polar solvents, while nonpolar molecules dissolve in nonpolar solvents.
Dipole Moment in Covalent Bond
A dipole moment is a measure of the polarity of a molecule. In a purely covalent bond between identical atoms, the electrons are shared equally. However, the sharing is often not equal; this happens when the electronegativities of the atoms are different.
The more electronegative atom will have a greater pull on the shared electrons, resulting in a dipole moment. A dipole moment can also be created by an asymmetrical distribution of charge within a molecule.
For example, a water molecule has a dipole moment because it has more negative charge on the oxygen atom than on the hydrogen atoms.
Dipole moments are important because they help to determine the physical and chemical properties of molecules.
For instance, molecules with dipole moments are often attracted to each other, leading to interactions between molecules. As a result, dipole moments play a significant role in many biochemical processes.
One way to think about this is to imagine two people sharing a pizza. In some cases, the two people may share the pizza equally, but in other cases one person may end up with a larger piece than the other. The pizza is still being shared, just not equally.
This is an example of how some covalent bonds may be more equal than others. Another key thing to remember is that some atoms readily form covalent bonds while others do not. For example, carbon readily forms covalent bonds, while the noble gas neon does not, because neon already has a full octet of valence electrons and carbon does not.
So while the electrons in a covalent bond are not always shared equally, it is important to remember that the degree of sharing varies from molecule to molecule. In addition, different atoms participate in covalent bonding to different degrees.
|
When scientists are looking into space, the more they can see, the easier it is to piece together the puzzle of the cosmos. The James Webb Space Telescope's mirror blanks have now been constructed. When polished and assembled, together they will form a mirror whose area is over seven times larger than the Hubble Telescope's mirror.
A telescope’s sensitivity, or how much detail it can see, is directly related to the size of the mirror area that collects light from the cosmos. A larger area collects more light to see deeper into space, just like a larger bucket collects more water in a rain shower than a small one. The larger mirror also means the James Webb Space Telescope (JWST) will have excellent resolution. That's why the telescope's mirror is made up of 18 mirror segments that form a total area of 25 square-meters (almost 30 square yards) when they all come together.
The challenge was to make the mirrors lightweight for launch, but nearly distortion-free for excellent image quality. That challenge has been met by AXSYS Technologies, Inc., Cullman, Ala. "From the start, AXSYS Technologies has been a key player in the mirror technology development effort," said Kevin Russell, mirror development lead at NASA's Marshall Space Flight Center, Huntsville, Ala.
If the mirror were assembled completely and fully opened on the ground, there would be no way to fit it into a rocket. Therefore, the Webb Telescope's 18 mirror segments must be set into place when the telescope is in space. Engineers solved this problem by allowing the segmented mirror to fold, like the leaves of a drop-leaf table.
Each of the 18 mirrors will have the ability to be moved individually, so that they can be aligned together to act as a single large mirror. Scientists and engineers can also correct for any imperfections after the telescope opens in space, or if any changes occur in the mirror during the life of the mission. Each segment is made of beryllium, one of the lightest of all metals known to man. Beryllium has been used in other space telescopes and has worked well at the super-frigid temperatures of space in which the telescope will operate.
Each of the hexagonal-shaped mirror segments is 1.3 meters (4.26 feet) in diameter, and weighs approximately 20 kilograms or 46 pounds. The completed primary mirror will be over 2.5 times larger than the diameter of the Hubble Space Telescope's primary mirror, which is 2.4 meters in diameter, but will weigh roughly half as much.
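As a rough cross-check of these figures, one can estimate the geometric areas involved. The sketch below ignores the gaps between segments, the central obstruction, and the difference between geometric and effective collecting area (the "seven times" and "approximately 9 times faster" comparisons quoted in this article account for such design details), so the numbers are only ballpark estimates:

```python
import math

# Rough geometric estimate of the mirror areas quoted above.
segment_width = 1.3      # flat-to-flat width of one hexagonal segment, in meters
num_segments = 18
hubble_diameter = 2.4    # diameter of Hubble's circular primary mirror, in meters

# Area of a regular hexagon with flat-to-flat width w is (sqrt(3) / 2) * w**2.
segment_area = (math.sqrt(3) / 2) * segment_width ** 2
total_area = num_segments * segment_area
hubble_area = math.pi * (hubble_diameter / 2) ** 2

print(round(segment_area, 2))  # ~1.46 square meters per segment
print(round(total_area, 1))    # ~26 square meters, in line with the ~25 quoted above
print(round(hubble_area, 1))   # ~4.5 square meters for Hubble's primary
```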
"The James Webb Space Telescope will collect light approximately 9 times faster than the Hubble Space Telescope when one takes into account the details of the relative mirror sizes, shapes, and features in each design," said Eric Smith, JWST program scientist at NASA Headquarters, Washington. The increased sensitivity will allow scientists to see back to when the first galaxies formed just after the Big Bang. The larger telescope will have advantages for all aspects of astronomy and will revolutionize studies of how stars and planetary systems form and evolve.
The 18 mirrors have now been shipped to L-3 Communications SSG-Tinsley, Richmond, Calif., where they will be ground and polished.
After the grinding and polishing, the mirror segments will be delivered to Ball Aerospace in small groups where they will be assembled. Once the mirrors are completed, they will go to NASA's Goddard Space Flight Center, Greenbelt, Md., for final assembly on the telescope.
Upon successful launch in 2013, JWST will study the first stars and galaxies following the Big Bang.
Source: Rob Gutro, Goddard Space Flight Center
|
Informed consent is a process for getting permission before conducting a healthcare intervention on a person, or for disclosing personal information. A health care provider may ask a patient to consent to receive therapy before providing it, or a clinical researcher may ask a research participant before enrolling that person into a clinical trial. Informed consent is collected according to guidelines from the fields of medical ethics and research ethics.
Informed consent can be said to have been given based upon a clear appreciation and understanding of the facts, implications, and consequences of an action. Adequate informed consent is rooted in respecting a person's dignity. To give informed consent, the individual concerned must have adequate reasoning faculties and be in possession of all relevant facts. Impairments to reasoning and judgment that may prevent informed consent include basic intellectual or emotional immaturity, high levels of stress such as posttraumatic stress disorder (PTSD), severe intellectual disability, severe mental disorder, intoxication, severe sleep deprivation, Alzheimer's disease, or being in a coma.
Obtaining informed consent is not always required. If an individual is considered unable to give informed consent, another person is generally authorized to give consent on his behalf, e.g., parents or legal guardians of a child (though in this circumstance the child may be required to provide informed assent) and conservators for the mentally disordered, or consent can be assumed through the doctrine of implied consent, e.g., when an unconscious person will die without immediate medical treatment.
In cases where an individual is provided insufficient information to form a reasoned decision, serious ethical issues arise. Such cases in a clinical trial in medical research are anticipated and prevented by an ethics committee or Institutional Review Board.
Informed Consent Form Templates can be found on the World Health Organization Website for practical use.
Informed consent can be complex to evaluate, because neither expressions of consent nor expressions of understanding of implications necessarily mean that full adult consent was in fact given, nor that full comprehension of the relevant issues has been internally digested. Consent may be implied within the usual subtleties of human communication, rather than explicitly negotiated verbally or in writing. In some cases consent cannot legally be given, even if the person protests that he does indeed understand and wish to give it. There are also structured instruments for evaluating capacity to give informed consent, although no ideal instrument presently exists.
Thus, there is always a degree to which informed consent must be assumed or inferred based upon observation, knowledge, or legal reliance. This is especially the case in sexual or relational issues. In medical or formal circumstances, explicit agreement by means of signature, which is normally relied on legally regardless of actual consent, is the norm. This is the case with certain procedures, such as a "do not resuscitate" directive that a patient signed before the onset of their illness.
Brief examples of each of the above:
- A person may verbally agree to something from fear, perceived social pressure, or psychological difficulty in asserting true feelings. The person requesting the action may honestly be unaware of this and believe the consent is genuine, and rely on it. Consent is expressed, but not internally given.
- A person may claim to understand the implications of some action, as part of consent, but in fact has failed to appreciate the possible consequences fully and may later deny the validity of the consent for this reason. The understanding needed for informed consent appears to be present but is, in fact (through ignorance), not.
- A person signs a legal release form for a medical procedure, and later feels he did not really consent. Unless he can show actual misinformation, the release is usually persuasive or conclusive in law, in that the clinician may rely legally upon it for consent. In formal circumstances, a written consent usually legally overrides later denial of informed consent (unless obtained by misrepresentation).
- Informed consent in the U.S. can be overridden in emergency medical situations pursuant to 21CFR50.24, which was first brought to the general public's attention via the controversy surrounding the study of Polyheme.
- Disclosure requires the researcher to supply each prospective subject with the information necessary to make an autonomous decision and also to ensure that the subject adequately understands the information provided. This latter requirement implies that the written consent form be written in lay language suited to the comprehension skills of the subject population, and that the subject's level of understanding be assessed through conversation.
- Capacity pertains to the ability of the subject to both understand the information provided and form a reasonable judgment based on the potential consequences of his/her decision.
- Voluntariness refers to the subject’s right to freely exercise his/her decision making without being subjected to external pressure such as coercion, manipulation, or undue influence.
Waiver of requirement
Waiver of the consent requirement may be applied in certain circumstances where no foreseeable harm is expected to result from the study or when permitted by law, federal regulations, or if an ethical review committee has approved the non-disclosure of certain information.
Besides studies with minimal risk, waivers of consent may be obtained in a military setting. According to 10 USC 980, the United States Code for the Armed Forces, Limitations on the Use of Humans as Experimental Subjects, a waiver of advanced informed consent may be granted by the Secretary of Defense if a research project would:
- Directly benefit subjects.
- Advance the development of a medical product necessary to the military.
- Be carried out under all laws and regulations (i.e., Emergency Research Consent Waiver) including those pertinent to the FDA.
While informed consent is a basic right and should be carried out effectively, if a patient is incapacitated due to injury or illness, it is still important that patients benefit from emergency experimentation. The Food and Drug Administration (FDA) and the Department of Health and Human Services (DHHS) joined together to create federal guidelines to permit emergency research, without informed consent. However, they can only proceed with the research if they obtain a waiver of informed consent (WIC) or an emergency exception from informed consent (EFIC).
21st Century Cures Act
The 21st Century Cures Act enacted by the 114th United States Congress in December 2016 allows researchers to waive the requirement for informed consent when clinical testing "poses no more than minimal risk" and "includes appropriate safeguards to protect the rights, safety, and welfare of the human subject."
Informed consent is a technical term first used by the attorney Paul G. Gebhard in a medical malpractice court case in the United States in 1957. In tracing its history, some scholars have suggested checking for the presence of any of these practices::54
- A patient agrees to a health intervention based on an understanding of it.
- The patient has multiple choices and is not compelled to choose a particular one.
- The consent includes giving permission.
These practices are part of what constitutes informed consent, and their history is the history of informed consent.:60 They combine to form the modern concept of informed consent—which rose in response to particular incidents in modern research.:60 Whereas various cultures in various places practiced informed consent, the modern concept of informed consent was developed by people who drew influence from Western tradition.:60
Historians cite a series of medical guidelines to trace the history of informed consent in medical practice.
The Hippocratic Oath, a 500 BC Greek text, was the first set of Western writings giving guidelines for the conduct of medical professionals. It advises that physicians conceal most information from patients to give the patients the best care.:61 The rationale is a beneficence model for care—the doctor knows better than the patient, and therefore should direct the patient's care, because the patient is not likely to have better ideas than the doctor.:61
Henri de Mondeville, a French surgeon of the 14th century, wrote about medical practice. He traced his ideas to the Hippocratic Oath.:63 Among his recommendations was that doctors "promise a cure to every patient" in hopes that the good prognosis would inspire a good outcome to treatment.:63 Mondeville never mentioned getting consent, but did emphasize the need for the patient to have confidence in the doctor.:63 He also advised that when deciding therapeutically unimportant details, the doctor should meet the patients' requests "so far as they do not interfere with treatment".
Benjamin Rush was an 18th-century United States physician who was influenced by the Age of Enlightenment cultural movement.:65 Because of this, he advised that doctors ought to share as much information as possible with patients. He recommended that doctors educate the public and respect a patient's informed decision to accept therapy.:65 There is no evidence that he supported seeking a consent from patients.:65 In a lecture titled "On the duties of patients to their physicians", he stated that patients should be strictly obedient to the physician's orders; this was representative of much of his writings.:65 John Gregory, Rush's teacher, wrote similar views that a doctor could best practice beneficence by making decisions for the patients without their consent.:66
Thomas Percival was a British physician who published a book called Medical Ethics in 1803.:68 Percival was a student of the works of Gregory and various earlier Hippocratic physicians.:68 Like all previous works, Percival's Medical Ethics makes no mention of soliciting for the consent of patients or respecting their decisions.:68 Percival said that patients have a right to truth, but when the physician could provide better treatment by lying or withholding information, he advised that the physician do as he thought best.:68
When the American Medical Association was founded in 1847, it produced the first edition of the American Medical Association Code of Medical Ethics.:69 Many sections of this book are verbatim copies of passages from Percival's Medical Ethics.:69 A new concept in this book was the idea that physicians should fully disclose all patient details truthfully when talking to other physicians, but the text does not apply this idea to disclosing information to patients.:70 Through this text, Percival's ideas became pervasive guidelines throughout the United States as other texts were derived from them.:70
Worthington Hooker was an American physician who in 1849 published Physician and Patient.:70 This medical ethics book was radical for its time: it demonstrated an understanding of the AMA's guidelines and Percival's philosophy while soundly rejecting all directives that a doctor should lie to patients.:70 In Hooker's view, benevolent deception is not fair to the patient, and he lectured widely on this topic.:70 Hooker's ideas were not broadly influential.:70
Historians cite a series of human subject research experiments to trace the history of informed consent in research.
The U.S. Army Yellow Fever Commission “is considered the first research group in history to use consent forms.” In 1900, Major Walter Reed was appointed head of the four man U.S. Army Yellow Fever Commission in Cuba that determined mosquitoes were the vector for yellow fever transmission. His earliest experiments were probably done without formal documentation of informed consent. In later experiments he obtained support from appropriate military and administrative authorities. He then drafted what is now “one of the oldest series of extant informed consent documents.” The three surviving examples are in Spanish with English translations; two have an individual’s signature and one is marked with an X.
Tearoom Trade is the name of a book by American psychologist Laud Humphreys. In it he describes his research into male homosexual acts. In conducting this research he never sought consent from his research subjects and other researchers raised concerns that he violated the right to privacy for research participants.
The Milgram experiment is the name of a 1961 experiment conducted by American psychologist Stanley Milgram. In the experiment Milgram had an authority figure order research participants to commit a disturbing act of harming another person. After the experiment he would reveal that he had deceived the participants and that they had not hurt anyone, but the research participants were upset at the experience of having participated in the research. The experiment raised broad discussion on the ethics of recruiting participants for research without giving them full information about the nature of the research.
Chester M. Southam injected HeLa cells into cancer patients and Ohio State Penitentiary inmates without informed consent to determine whether people could become immune to cancer and whether cancer could be transmitted.
The doctrine of informed consent relates to professional negligence and establishes a breach of the duty of care owed to the patient (see duty of care, breach of the duty, and respect for persons). The doctrine of informed consent also has significant implications for medical trials of medications, devices, or procedures.
Requirements of the professional
Until 2015 in the United Kingdom, and in countries such as Malaysia and Singapore, informed consent in medical procedures required proof as to the standard of care expected as a recognised standard of acceptable professional practice (the Bolam Test), that is, what risks a medical professional would usually disclose in the circumstances (see Loss of right in English law). Arguably, this is "sufficient consent" rather than "informed consent." The UK has since departed from the Bolam test for judging standards of informed consent, following the landmark ruling in Montgomery v Lanarkshire Health Board. This moves away from the concept of a reasonable physician and instead uses the standard of a reasonable patient and what risks an individual would attach significance to.
Medicine in the United States, Australia, and Canada also takes this patient-centric approach to "informed consent." Informed consent in these jurisdictions requires healthcare providers to disclose significant risks, as well as risks of particular importance to that patient. This approach combines an objective (a hypothetical reasonable patient) and subjective (this particular patient) approach.
The doctrine of informed consent should be contrasted with the general doctrine of medical consent, which applies to assault or battery. The consent standard here is only that the person understands, in general terms, the nature of and purpose of the intended intervention. As the higher standard of informed consent applies to negligence, not battery, the other elements of negligence must be made out. Significantly, causation must be shown: That had the individual been made aware of the risk he would not have proceeded with the operation (or perhaps with that surgeon).
Optimal establishment of informed consent requires adaptation to cultural or other individual factors of the patient. For example, people from Mediterranean and Arab cultures appear to rely more on the context of the delivery of the information, with the information being carried more by who is saying it and where, when, and how it is said, rather than what is said, which is of relatively more importance in typical "Western" countries.
The informed consent doctrine is generally implemented through good healthcare practice: pre-operation discussions with patients and the use of medical consent forms in hospitals. However, reliance on a signed form should not undermine the basis of the doctrine in giving the patient an opportunity to weigh and respond to the risk. In one British case, a doctor performing routine surgery on a woman noticed that she had cancerous tissue in her womb. He took the initiative to remove the woman's womb; however, as she had not given informed consent for this operation, the doctor was judged by the General Medical Council to have acted negligently. The council stated that the woman should have been informed of her condition, and allowed to make her own decision.
Obtaining informed consents
To capture and manage informed consents, hospital management systems typically use paper-based consent forms, which are scanned and stored in a document handling system after the necessary signatures have been obtained. Hospital systems and research organizations are adopting electronic ways of capturing informed consents to enable indexing and to improve the comprehension, search, and retrieval of consent data, thus enhancing the ability to honor patient intent and identify willing research participants. More recently, Health Sciences South Carolina, a statewide research collaborative focused on transforming healthcare quality, health information systems and patient outcomes, developed an open-source system called Research Permissions Management System (RPMS).
Competency of the patient
The ability to give informed consent is governed by a general requirement of competency. In common law jurisdictions, adults are presumed competent to consent. This presumption can be rebutted, for instance, in circumstances of mental illness or other incompetence. This may be prescribed in legislation or based on a common-law standard of inability to understand the nature of the procedure. In cases of incompetent adults, a health care proxy makes medical decisions. In the absence of a proxy, the medical practitioner is expected to act in the patient's best interests until a proxy can be found.
By contrast, 'minors' (which may be defined differently in different jurisdictions) are generally presumed incompetent to consent, but depending on their age and other factors may be required to provide Informed assent. In some jurisdictions (e.g. much of the U.S.), this is a strict standard. In other jurisdictions (e.g. England, Australia, Canada), this presumption may be rebutted through proof that the minor is ‘mature’ (the ‘Gillick standard’). In cases of incompetent minors, informed consent is usually required from the parent (rather than the 'best interests standard') although a parens patriae order may apply, allowing the court to dispense with parental consent in cases of refusal.
Research involving deception is controversial given the requirement for informed consent. Deception typically arises in social psychology, when researching a particular psychological process requires that investigators deceive subjects. For example, in the Milgram experiment, researchers wanted to determine the willingness of participants to obey authority figures despite their personal conscientious objections. They had authority figures demand that participants deliver what they thought was an electric shock to another research participant. For the study to succeed, it was necessary to deceive the participants so they believed that the subject was a peer and that their electric shocks caused the peer actual pain.
Nonetheless, research involving deception prevents subjects from exercising their basic right of autonomous informed decision-making and conflicts with the ethical principle of respect for persons.
The Ethical Principles of Psychologists and Code of Conduct set by the American Psychological Association says that psychologists may conduct research that includes a deceptive component only if they can both justify the deception by the value and importance of the study's results and show that they could not obtain the results in any other way. Moreover, the research should bear no potential harm to the subject as an outcome of the deception, whether physical pain or emotional distress. Finally, the code requires a debriefing session in which the experimenter both tells the subject about the deception and gives the subject the option of withdrawing the data.
In some U.S. states, informed consent laws (sometimes called "right to know" laws) require that a woman seeking an elective abortion receive information from the abortion provider about her legal rights, alternatives to abortion (such as adoption), available public and private assistance, and other information specified in the law, before the abortion is performed. Other countries with such laws (e.g. Germany) require that the information giver be properly certified to make sure that no abortion is carried out for the financial gain of the abortion provider and to ensure that the decision to have an abortion is not swayed by any form of incentive.
Some informed consent laws have been criticized for allegedly using "loaded language in an apparently deliberate attempt to 'personify' the fetus," but those critics acknowledge that "most of the information in the [legally mandated] materials about abortion comports with recent scientific findings and the principles of informed consent", although "some content is either misleading or altogether incorrect."
As children often lack the decision making ability or legal power (competence) to provide true informed consent for medical decisions, it often falls on parents or legal guardians to provide informed permission for medical decisions. This "consent by proxy" usually works reasonably well, but can lead to ethical dilemmas when the judgment of the parents or guardians and the medical professional differ with regard to what constitutes appropriate decisions "in the best interest of the child". Children who are legally emancipated, and certain situations such as decisions regarding sexually transmitted diseases or pregnancy, or for unemancipated minors who are deemed to have medical decision making capacity, may be able to provide consent without the need for parental permission depending on the laws of the jurisdiction the child lives in. The American Academy of Pediatrics encourages medical professionals also to seek the assent of older children and adolescents by providing age appropriate information to these children to help empower them in the decision making process.
Research on children has benefited society in many ways. The only effective way to establish normal patterns of growth and metabolism is to do research on infants and young children. When addressing the issue of informed consent with children, the primary response is parental consent. This is valid, although only legal guardians, not adult siblings, are able to consent for a child. Additionally, parents may not order the termination of a treatment that is required to keep a child alive, even if they feel it is in the child's best interest. Guardians are typically involved in the consent of children; however, a number of doctrines have developed that allow children to receive health treatments without parental consent. For example, emancipated minors may consent to medical treatment, and minors can also consent in an emergency.
Informed consent is part of ethical clinical research as well, in which a human subject voluntarily confirms his or her willingness to participate in a particular clinical trial, after having been informed of all aspects of the trial that are relevant to the subject's decision to participate. Informed consent is documented by means of a written, signed, and dated informed consent form. In medical research, the Nuremberg Code set a base international standard in 1947, which has continued to develop, for example in response to the ethical violations of the Holocaust. Nowadays, medical research is overseen by an ethics committee that also oversees the informed consent process.
As the medical guidelines established in the Nuremberg Code were imported into the ethical guidelines for the social sciences, informed consent became a common part of the research procedure. However, while informed consent is the default in medical settings, it is not always required in the social sciences. For one thing, such research often involves low or no risk for participants, unlike many medical experiments. For another, the mere knowledge that they are participating in a study can cause people to alter their behavior, as in the Hawthorne Effect: "In the typical lab experiment, subjects enter an environment in which they are keenly aware that their behavior is being monitored, recorded, and subsequently scrutinized.":168 In such cases, seeking informed consent directly interferes with the ability to conduct the research, because the very act of revealing that a study is being conducted is likely to alter the behavior studied. List exemplifies the potential dilemma that can result: "if one were interested in exploring whether, and to what extent, race or gender influences the prices that buyers pay for used cars, it would be difficult to measure accurately the degree of discrimination among used car dealers who know that they are taking part in an experiment." In cases where such interference is likely, and after careful consideration, a researcher may forgo the informed consent process. This is commonly done after weighing the risk to study participants against the benefit to society, and considering whether participants are present in the study of their own volition and treated fairly. Researchers often consult with an ethics committee or institutional review board to render a decision.
The Facebook study controversy raises numerous questions about informed consent and the differences in the ethical review process between publicly and privately funded research. Some say Facebook was within its limits and others see the need for more informed consent and/or the establishment of in-house private review boards.
Conflicts of interest
Other, long-standing controversies underscore the role for conflicts of interest among medical school faculty and researchers. For example, coverage of University of California (UC) medical school faculty members has included news of ongoing corporate payments to researchers and practitioners from companies that market and produce the very devices and treatments they recommend to patients. Robert Pedowitz, the former chairman of UCLA’s orthopedic surgery department, reported concern that his colleague’s financial conflicts of interest could negatively affect patient care or research into new treatments. In a subsequent lawsuit about whistleblower retaliation, the University provided a $10 million settlement to Pedowitz while acknowledging no wrongdoing. Consumer Watchdog, an oversight group, observed that University of CA policies were “either inadequate or unenforced…Patients in UC hospitals deserve the most reliable surgical devices and medication…and they shouldn’t be treated as subjects in expensive experiments.” Other UC incidents include taking the eggs of women for implantation into other women without consent and injecting live bacteria into human brains, resulting in potentially premature deaths.
- Belmont Report
- Consent (BDSM)
- Consent (criminal law)
- Consensual crime
- Declaration of Geneva
- Declaration of Helsinki
- Doe ex. rel. Tarlow v. District of Columbia
- Free, prior and informed consent
- Human experimentation
- Human experimentation in the United States
- Informed assent
- Informed consent in sociocratic decision-making
- Informed refusal
- International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use
- Mature minor doctrine
- Minors and abortion
- Parental consent
- Patient safety
- Safe, sane and consensual
- Schloendorff v. Society of New York Hospital
- World Medical Association
- Therapeutic misconception
- Executive Order 13139
- Elsayyad, Ahmed (2014). "Informed Consent for Comparative Effectiveness Trials". New England Journal of Medicine. 370 (20): 1958–1960. doi:10.1056/NEJMc1403310.
- "WHO | Informed Consent Form Templates". who.int. Retrieved 14 September 2014.
- Faden, R. R.; Beauchamp, T. L. (1986). A History and Theory of Informed Consent. New York: Oxford University Press. ISBN 978-0-19-503686-2.
- Beauchamp, Tom L.; Childress, James F. (1994). Principles of Biomedical Ethics (Fourth ed.). New York: Oxford University Press. ISBN 978-0-19-508536-5.
- Council for International Organization of Medical Sciences (CIOMS) and World Health Organization (WHO), Geneva, Switzerland, 2002. "International Ethical Guidelines for Biomedical Research Involving Human Subjects" (PDF). Archived from the original (PDF) on 2010-08-23.
- McManus, J.; Mehta, S. G.; et al. (2005). "Informed consent and ethical issues in military medical research". Academic Emergency Medicine. 12 (11): 1120–1126. doi:10.1111/j.1553-2712.2005.tb00839.x. PMID 16264083.
- Baren, Jill. "Informed Consent to Human Experimentation". Springer Publishing Company. Retrieved 26 September 2013.
- Pace, Eric (26 August 1997). "P. G. Gebhard, 69, Developer Of the Term 'Informed Consent' - New York Times". The New York Times. New York: NYTC. ISSN 0362-4331. Retrieved 5 March 2014.
- Faden, Ruth R.; Beauchamp, Tom L.; King, Nancy M.P. (1986). A History and Theory of Informed Consent (Online ed.). New York: Oxford University Press. ISBN 978-0-19-503686-2.
- Burns, Chester R. (1977). Legacies in ethics and medicine. New York: Science History Publications. ISBN 9780882021669. In this book see Mary Catherine Welborn's excerpts from her 1966 The long tradition: A study in fourteenth-century medical deontology
- Katz, Jay; Alexander Morgan Capron (2002). The silent world of doctor and patient (Johns Hopkins Paperbacks ed.). Baltimore: Johns Hopkins University Press. pp. 7–9. ISBN 978-0801857805.
- Burns, Chester R. (1977). Legacies in ethics and medicine. New York: Science History Publications. ISBN 9780882021669. In this book see De Mondeville's "On the morals and ethics of medicine" from Ethics in Medicine
- Gregory, John (1772). Lectures on the Duties and Qualifications of a Physician.
- Cutter, Laura (2016). "Walter Reed, Yellow Fever, and Informed Consent". Military Medicine. 181 (1): 90–91. doi:10.7205/milmed-d-15-00430. PMID 26741482.
- "The U.S. Army Yellow Fever Commission in Cuba - U.S. Army Yellow Fever Commission". U.S. Army Yellow Fever Commission. Retrieved 2017-08-01.
- "The U.S. Army Yellow Fever Commission in Cuba - U.S. Army Yellow Fever Commission". U.S. Army Yellow Fever Commission. Retrieved 2017-08-01.
- Babbie, Earl (2010). The practice of social research (12th ed.). Belmont, Calif: Wadsworth Cengage. ISBN 978-0495598411.
- Baumrind, D. (1964). "Some thoughts on ethics of research: After reading Milgram's "Behavioral Study of Obedience."". American Psychologist. 19 (6): 421–423. doi:10.1037/h0040128.
- Skloot, Rebecca (2010). The Immortal Life of Henrietta Lacks. New York: Broadway Paperbacks. p. 130.
- Too Much Information: Informed Consent in Cultural Context. By Joseph J. Fins and Pablo Rodriguez del Pozo. Medscape 07/18/2011
- "Health Sciences South Carolina". healthsciencessc.org. Archived from the original on 11 October 2014. Retrieved 14 September 2014.
- Chalil Madathil, K.; Koikkara, R.; Gramopadhye, A. K.; Greenstein, J. S. (2011). "An Empirical Study of the Usability of Consenting Systems: IPad, Touchscreen and Paper-based Systems". Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 55: 813–817. doi:10.1177/1071181311551168.
- Chalil Madathil, K.; Koikkara, R.; Obeid, J.; Greenstein, J. S.; Sanderson, I. C.; Fryar, K.; Moskowitz, J.; Gramopadhye, A. K. (2013). "An investigation of the efficacy of electronic consenting interfaces of research permissions management system in a hospital setting". International Journal of Medical Informatics. 82 (9): 854–863. doi:10.1016/j.ijmedinf.2013.04.008. PMC 3779682. PMID 23757370.
- Sanderson, I. C.; Obeid, J. S.; Madathil, K. C.; Gerken, K.; Fryar, K.; Rugg, D.; Alstad, C. E.; Alexander, R.; Brady, K. T.; Gramopadhye, A. K.; Moskowitz, J. (2013). "Managing clinical research permissions electronically: A novel approach to enhancing recruitment and managing consents". Clinical Trials. 10 (4): 604–611. doi:10.1177/1740774513491338. PMC 4213063. PMID 23785065.
- "HSSC/RPMS · GitHub". github.com. Retrieved 14 September 2014.
- "Health Sciences South Carolina". healthsciencessc.org. Archived from the original on 11 October 2014. Retrieved 14 September 2014.
- American Psychological Association (2002). "2010 Amendments to the American Psychological Association ethical principles of psychologists and code of conduct". Retrieved 30 April 2012.
- Taupitz, Jochen; Weschka, Marion (2009). CHIMBRIDS - Chimeras and Hybrids in Comparative European and International Research: Scientific, Ethical, Philosophical and Legal Aspects. Volume 34 of Veröffentlichungen des Instituts für Deutsches, Europäisches und Internationales Medizinrecht, Gesundheitsrecht und Bioethik der Universitäten Heidelberg und Mannheim. Springer Science & Business Media. p. 298. ISBN 9783540938699.
- Dieper, Susanne (23 February 2012). "The Legal Framework of Abortions in Germany". American Institute for Contemporary German Studies. Johns Hopkins University. Retrieved 3 April 2015.
- Gold, Rachel and Nash, Elizabeth.State Abortion Counseling Policies and the Fundamental Principles of Informed Consent, Guttmacher Policy Review, Fall 2007, Volume 10, Number 4.
- Richardson, Chinue and Nash, Elizabeth. "Misinformed Consent: The Medical Accuracy of State-Developed Abortion Counseling Materials", Guttmacher Policy Review Fall 2006, Volume 9, Number 4.
- Committee on Bioethics (1995). "Informed consent, parental permission, and assent in pediatric practice" (PDF). Pediatrics. 95 (2): 314–7. PMID 7838658.
- Annas, George; Glantz, Leonard; Katz, Barbara (1977). Informed Consent to Human Experimentation. Cambridge, Massachusetts: Ballinger Publishing Company. pp. 63–93. ISBN 978-0-88410-147-5.
- "Guideline For Good Clinical Practice" (PDF). Retrieved 2018-09-24.
- Homan, R. (1991). The Ethics of Social Research. London; New York: Longman. ISBN 978-0-582-05879-8.
- Levitt, S. D.; List, J. A. (2007). "What Do Laboratory Experiments Measuring Social Preferences Reveal about the Real World?". Journal of Economic Perspectives. 21 (2): 153–174. doi:10.1257/jep.21.2.153. JSTOR 30033722.
- List, J. A.; List, J. A. (2008). "Informed Consent in Social Science". Science. 322 (5902): 672. CiteSeerX 10.1.1.418.1731. doi:10.1126/science.322.5902.672a. PMID 18974330.
- Levitt, S. D.; List, J. A. (2009). "Field experiments in economics: The past, the present, and the future". European Economic Review. 53 (1): 1–18. doi:10.1016/j.euroecorev.2008.12.001.
- Kramer, Adam; Guillory, Jaime; Jeffrey, Hancock (2014). "Experimental evidence of massive-scale emotional contagion through social networks". PNAS. 111 (24): 8788–90. doi:10.1073/pnas.1320040111. PMC 4066473. PMID 24889601.
- LANIER, Jaron. "Should Facebook Manipulate Users?". The New York Times. Retrieved April 26, 2015.
- Boyd, Danah. "What does the Facebook experiment teach us?". Social Media Collective Research Blog. Retrieved April 26, 2015.
- Watts, Duncan. "Stop complaining about the Facebook study. It's a golden age for research". The Guardian. Retrieved April 26, 2015.
- Grimmelmann, James. "Illegal, Immoral, and Mood-Altering How Facebook and OkCupid Broke the Law When They Experimented on Users". Medium. Retrieved April 26, 2015.
- Salganik, Matt. "After the Facebook emotional contagion experiment: A proposal for a positive path forward". Freedom to Tinker. Retrieved April 26, 2015.
- Petersen, Melody. (2014, May 25). UC system struggles with professors’ outside earnings. Orange County Register. Retrieved from http://www.ocregister.com/articles/university-615629-ucla-corporate.html
- Terhune, Chad (2014, April 25). More scrutiny for UCLA’s School of Medicine. Los Angeles Times. Retrieved from http://www.latimes.com/business/la-fi-ucla-outside-money-20140426-story.html#ixzz30BvcCJIV
- Yoshino, Kimi. (2006, January 20). UC Irvine Fertility Scandal Isn’t Over: While seeking to limit its liability, college admits it failed to inform many patients of wrongdoing. Los Angeles Times. Retrieved from http://articles.latimes.com/2006/jan/20/local/me-uci20
- The Sacramento Bee. (2013, August 25). UC Davis surgeons resign after bacteria-in-brain dispute. Retrieved 2016-01-26 from an archived copy (archived from the original on 2016-02-02). |
This Excel tutorial explains how to use the Excel SIN function with syntax and examples.
The Microsoft Excel SIN function returns the sine of an angle.
The syntax for the Microsoft Excel SIN function is:
SIN( number )
The SIN function can be used in Microsoft Excel both as a worksheet function and as a VBA function, as the examples below show.
Let's look at some Excel SIN function examples and explore how to use the SIN function as a worksheet function in Microsoft Excel:
Based on the Excel spreadsheet above, the following SIN examples would return:
=SIN(A1)    Result: 0.141120008
=SIN(A2)    Result: -0.883454656
=SIN(A3)    Result: 0.883454656
=SIN(2)     Result: 0.909297427
The SIN function can also be used in VBA code in Microsoft Excel.
Let's look at some Excel SIN function examples and explore how to use the SIN function in Excel VBA code:
Dim LNumber As Double
LNumber = Sin(2)   ' Sin takes the angle in radians
In this example, the variable called LNumber would now contain the value of 0.909297427.
|
Before you read this post, I suggest you read posts 17.13, 17.16 and 17.17.
The picture above shows a skier making a right turn by leaning to the right and transferring his/her weight on to the right-hand ski.
The pictures above show a motorcyclist turning right by leaning right and a plane turning left by leaning left. How can a plane lean over? This is achieved by flaps near the ends of the wings called ailerons.
The first picture (immediately above) shows an aileron in its neutral position; the middle picture shows it raised and the third picture shows it lowered. When the aileron is raised, the drag over the upper wing surface increases because M, defined in post 17.17, is increased (the upper surface is less streamlined – see post 17.18). So, relative to the air, the wing moves more slowly (see post 17.17). Relative to the wing, the air moves more slowly (post 16.4) over its upper surface, so the lift is reduced (post 17.16). When the aileron is lowered, the drag over the lower wing surface is increased, so the air moves more slowly over the lower wing surface – increasing the lift (post 17.16).
A plane can be made to lean to the left by raising its right wing (lowering its right-hand aileron) while simultaneously lowering its left wing (raising its left-hand aileron).
So why does leaning to the left make a plane turn left? The picture above shows the lift force, F, acting upwards perpendicular to the wings. If the wings are tilted by an angle θ to the horizontal, this force has a component (post 16.50) Fsinθ in the horizontal direction, as shown. This horizontal component of the lift acts as the centripetal force (post 17.13) to turn the plane.
The magnitude of the centripetal force is given by mω²r, where m is the mass of the object (in this case the plane), ω is the angular speed and r is the radius of the turning circle (post 17.13). Since ω is related to linear speed by v = ωr (post 17.12), mω²r = mv²/r. If it is the horizontal component of the lift that provides this centripetal force, we can write that
Fsinθ = mv²/r.
Multiplying both sides of this equation by r, and then dividing both sides by Fsinθ, shows that the radius of the turning circle provided by tilting the plane through an angle θ is
r = mv²/(Fsinθ).
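As a rough illustration, the short Python sketch below evaluates r = mv²/(Fsinθ); the mass, speed, lift and bank angle are made-up values chosen only for the example and do not come from this post.

```python
import math

# Illustrative values only (not taken from the post)
m = 5000.0                 # mass of the plane, kg
v = 100.0                  # speed, m/s
F = 60000.0                # lift force, N
theta = math.radians(20)   # bank (tilt) angle

# Horizontal component of the lift supplies the centripetal force:
# F*sin(theta) = m*v**2/r, so r = m*v**2 / (F*sin(theta))
r = m * v**2 / (F * math.sin(theta))
print(f"radius of the turning circle ≈ {r:.0f} m")
```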
The same ideas apply to an object (like the skier or the motor-cyclist) that is moving along the ground. Gravity exerts a force of magnitude mg, where m is the mass of the object and g is the magnitude of the earth’s gravitational field, that tends to pull the object into the ground (post 16.16). According to Newton’s third law of motion (post 17.26), the ground then exerts an equal and opposite force (sometimes called the ground reaction force). When the object leans, the horizontal component provides the centripetal force to turn it.
However, if our object is a road vehicle, a well-designed road is not flat when it turns a corner. In the picture above, A is higher than B to assist a cornering vehicle. This leads to a tilt of the road surface called camber.
In the picture above, the vehicle rests on a slope with a tilt angle of θ. Gravity exerts a downward vertical force of mg on the vehicle (post 16.16). Since the road surface is tilted by an angle θ with respect to the horizontal, a line perpendicular to this surface is tilted by θ with respect to the vertical, as shown in the picture. So, the component of the gravitational force acting perpendicular to the road surface is mgcosθ (post 16.50). The picture shows the ground pushing up on the vehicle with an equal but opposite force. This force has the same effect as F in the picture of the forces acting on a plane that is turning.
There are two potential problems for a vehicle on a camber. One: if θ is too big, the vehicle will fall over (post 17.22). Fortunately, the road can be designed (using the equation given above) so that this is unlikely to happen for well-designed vehicles. Two: what stops the vehicle from sliding down the slope? The force that opposes this motion is friction (post 16.19). In the next post, we will see how to ensure that sliding doesn’t happen.
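The same bookkeeping can be done for the cambered road, following this post’s simplified treatment (friction ignored, and the ground reaction taken to have magnitude mgcosθ, as described above); conveniently, the mass cancels. A sketch with made-up numbers:

```python
import math

g = 9.81                   # magnitude of the Earth's gravitational field, m/s^2
v = 25.0                   # vehicle speed, m/s (illustrative)
theta = math.radians(10)   # camber (tilt) angle of the road (illustrative)

# Post's simplified model: ground reaction ≈ m*g*cos(theta); its horizontal
# component m*g*cos(theta)*sin(theta) supplies the centripetal force m*v**2/r.
# The mass m cancels, leaving:
r = v**2 / (g * math.sin(theta) * math.cos(theta))
print(f"turning radius provided by the camber alone ≈ {r:.0f} m")
```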
17.32 How do birds fly?
17.31 Gliding and soaring
17.16 Why does a plane fly?
17.13 Centripetal force |
Students learn geometry at almost every grade level. Elementary students learn the basics of geometry: shapes and counting the number of sides. Middle school students begin learning how to find the area of circles and squares and the volume of simple solids. High school students jump into geometry with Euclidean/Plane Geometry and Symmetry & Tessellations. College students can take their knowledge from high school geometry to the next level and learn about Spherical Geometry, Hyperbolic Geometry, as well as Riemannian Geometry and Fourth Dimensional Geometry. No matter the grade level your student is in, we have expert tutors who will help students understand and conceptualize what they need to know in their geometry class.
What is it?
Euclidean/Plane Geometry is the study of flat space. Between every pair of points there is a unique line segment which is the shortest curve between those two points. These line segments can be extended to lines. Lines are infinitely long in both directions and for every pair of points on the line the segment of the line between them is the shortest curve that can be drawn between them. All of these ideas can be described by drawing on a flat piece of paper. From the laws of Euclidean Geometry, we get the famous Pythagorean Theorem.
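As a quick illustration of the Pythagorean Theorem mentioned above, here is a short Python check using the familiar 3-4-5 right triangle (an added example, not part of any particular curriculum):

```python
import math

# Pythagorean Theorem: for a right triangle with legs a and b,
# the hypotenuse c satisfies a**2 + b**2 = c**2.
a, b = 3.0, 4.0
c = math.hypot(a, b)                      # sqrt(a**2 + b**2)
print(c)                                  # 5.0
print(math.isclose(a**2 + b**2, c**2))    # True
```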
Non-Euclidean Geometry is any geometry that is different from Euclidean geometry. It is a consistent system of definitions, assumptions, and proofs that describe such objects as points, lines and planes. The two most common non-Euclidean geometries are spherical geometry and hyperbolic geometry. The essential difference between Euclidean geometry and these two non-Euclidean geometries is the nature of parallel lines: In Euclidean geometry, given a point and a line, there is exactly one line through the point that is in the same plane as the given line and never intersects it. In spherical geometry there are no such lines. In hyperbolic geometry there are at least two distinct lines that pass through the point and are parallel to (in the same plane as and do not intersect) the given line.
Riemannian Geometry is the study of curved surfaces and higher dimensional spaces. For example, you might have a cylinder, or a sphere and your goal is to find the shortest curve between any pair of points on such a curved surface, also known as a minimal geodesic. Or you may look at the universe as a three dimensional space and attempt to find the distance between/around several planets.
Students can succeed in any geometry class. From elementary school to college, math can be a difficult subject for many students. We make it easier and more understandable for them by providing expert tutors in every mathematics class including geometry. We will be happy to provide you with all the information you need to choose the tutor that is best suited for the geometry class you or your student is taking. You will review their educational background and experience to know that the geometry tutors we offer are experts in their field.
The basic learning blocks of every child's education are built in elementary school. It is here that a life long love for learning is fostered. The fundamental reading, writing, and math skills are introduced to your child for the first time in elementary school. In order to be successful, elementary students need to master basic listening and study skills. Elementary school should be a positive, nurturing environment where children are introduced to learning.
Charlotte – Not just a large financial center, Charlotte is a great place to raise a family. When parents in Charlotte are looking for a tutor, they call Advanced Learners. We have been providing outstanding service and a premium selection of tutors to Charlotte families for many, many successful years. Our exemplary tutoring professionals are teachers and subject specialists that live and work in Charlotte and are ready to help your child achieve his or her full academic potential
Our Tutoring Service
We believe that one-on-one, personalized, in-home instruction is the most effective way for students to focus on academic improvement and build confidence. We know that finding you the best tutor means more than just sending a qualified teacher into your home. We provide our clients access to the largest selection of highly qualified and fully screened professional tutors in the country. We believe that tutoring is most effective when the academic needs of the student are clearly defined. Our purpose is to help you clarify those needs, set academic goals, and meet those goals as quickly and effectively as possible. Using a tutor should be a positive experience that results in higher achievement and higher self-confidence for every learner. |
Pronunciation of English ⟨wh⟩
The pronunciation of the digraph ⟨wh⟩ in English has changed over time, and still varies today between different regions and accents. It is now most commonly pronounced /w/, the same as a plain initial ⟨w⟩, although some dialects, particularly those of Scotland, Ireland, and the Southern United States, retain the traditional pronunciation /hw/, generally realized as [ʍ], a voiceless "w" sound. The process by which the historical /hw/ has become /w/ in most modern varieties of English is called the wine–whine merger. It is also referred to as glide cluster reduction.
Before rounded vowels, a different reduction process took place in Middle English, as a result of which the ⟨wh⟩ in words like who and whom is now pronounced /h/. (A similar sound change occurred earlier in the word how.)
What is now English ⟨wh⟩ originated as the Proto-Indo-European consonant *kʷ (whose reflexes came to be written ⟨qu⟩ in Latin and the Romance languages). In the Germanic languages, in accordance with Grimm's Law, Indo-European voiceless stops became voiceless fricatives in most environments. Thus the labialized velar stop *kʷ initially became presumably a labialized velar fricative *xʷ in pre-Proto-Germanic, then probably becoming *[ʍ] – a voiceless labio-velar approximant – in Proto-Germanic proper. The sound was used in Gothic and represented by the symbol known as hwair; in Old English it was spelled as ⟨hw⟩. The spelling was changed to ⟨wh⟩ in Middle English, but the pronunciation remained [ʍ].
Because Proto-Indo-European interrogative words typically began with *kʷ, English interrogative words (such as who, which, what, when, where) typically begin with ⟨wh⟩ (for the word how, see below). As a result, such words are often called wh-words, and questions formed from them are called wh-questions. In reference to this English order, a common cross-lingual grammatical phenomenon affecting interrogative words is called wh-movement.
Developments before rounded vowels
Before rounded vowels, such as /uː/ or /oː/, there was a tendency, beginning in the Old English period, for the sound /h/ to become labialized, causing it to sound like /hw/. Therefore, words with an established /hw/ in that position came to be perceived (and spelt) as beginning with plain /h/. This occurred with the interrogative word how (Proto-Germanic *hwō, Old English hū).
A similar process of labialization of /h/ before rounded vowels occurred in the Middle English period, around the 15th century, in some dialects. Some words which historically began with /h/ came to be written ⟨wh⟩ (whole, whore). Later in many dialects /hw/ was delabialized to /h/ in the same environment, regardless of whether the historic pronunciation was /h/ or /hw/ (in some other dialects the labialized /h/ was reduced instead to /w/, leading to such pronunciations as the traditional Kentish /woʊm/ for home). This process affected the pronoun who and its inflected forms. These had escaped the earlier reduction to /h/ because they had unrounded vowels in Old English, but by Middle English the vowel had become rounded, and so the /hw/ of these words was now subject to delabialization:
- who – Old English hwā, Modern English /huː/
- whom – Old English hwǣm, Modern English /huːm/
- whose – Old English hwās, Modern English /huːz/
By contrast with how, these words changed after their spelling with ⟨wh⟩ had become established, and thus continue to be written with ⟨wh⟩ like the other interrogative words which, what, etc. (which were not affected by the above changes since they had unrounded vowels – the vowel of what became rounded at a later time).
The wine–whine merger is the phonological merger by which /hw/, historically realized as a voiceless labio-velar approximant [ʍ], comes to be pronounced the same as plain /w/, that is, as a voiced labio-velar approximant [w]. John C. Wells refers to this process as Glide Cluster Reduction. It causes the distinction to be lost between the pronunciation of ⟨wh⟩ and that of ⟨w⟩, so pairs of words like wine/whine, wet/whet, weather/whether, wail/whale, Wales/whales, wear/where, witch/which become homophones. This merger has taken place in the dialects of the great majority of English speakers.
The merger is essentially complete in England, Wales, the West Indies, South Africa, Australia, and in the speech of young speakers in New Zealand. The merger is not found, however, in Scotland, in most of Ireland (although the distinction is usually lost in Belfast and some other urban areas of Northern Ireland), and in the speech of older speakers in New Zealand.
Most speakers in the United States and Canada have the merger. According to Labov, Ash, and Boberg (2006: 49), using data collected in the 1990s, there are regions of the U.S. (particularly in the Southeast) in which speakers keeping the distinction are about as numerous as those having the merger, but there are no regions in which the preservation of the distinction is predominant (see map). Throughout the U.S. and Canada, about 83% of respondents in the survey had the merger completely, while about 17% preserved at least some trace of the distinction.
The merger seems to have been present in the south of England as early as the 13th century. It was unacceptable in educated speech until the late 18th century, but there is no longer generally any stigma attached to either pronunciation. Some RP speakers may use /hw/ for ⟨wh⟩, a usage widely considered "correct, careful and beautiful", but that is usually a conscious choice rather than a natural part of the speaker's accent.
A portrayal of the regional retention of the distinct wh- sound is found in the speech of the character Frank Underwood, a South Carolina politician, in the American television series House of Cards. The show King of the Hill, set in Texas, pokes fun at the issue through character Hank Hill's use of the hypercorrected [hʍ] pronunciation. A similar gag can be found in several episodes of Family Guy, with Brian becoming annoyed by Stewie's over-emphasis of the /hw/ sound in his pronunciation of "Cool hWhip", "hWheat Thins", and "Will hWheaton".
The distribution of the wh- sound in words does not always exactly match the standard spelling; for example, Scots pronounce whelk with plain /w/, while in many regions weasel has the wh- sound.
Below is a list of word pairs which are liable to be pronounced as homophones by speakers having the wine–whine merger.
| Word with /w/ | Word with /hw/ (wh-) | Pronunciation when merged | Notes |
|---|---|---|---|
| wail | whale | ˈweɪl | With pane-pain merger |
| weigh | whey | ˈweɪ | With wait–weight merger |
| were (man) | where | ˈwɛː(r), ˈweːr | |
| were (to be) | whir | ˈwɜː(r) | |
| word | whirred | ˈwɜː(r)d | With nurse merger |
| world | whirled | ˈwɜː(r)ld | With nurse merger |
Pronunciations and phonological analysis of the distinct wh sound
As mentioned above, the sound of initial ⟨wh⟩, when distinguished from plain ⟨w⟩, is often pronounced as a voiceless labio-velar approximant [ʍ], a voiceless version of the ordinary [w] sound. In some accents, however, the pronunciation is more like [hʍ], and in some Scottish dialects it may be closer to [xʍ] or [kʍ] – the [ʍ] sound preceded by a voiceless velar fricative or stop. (In other places the /kw/ of qu- words is reduced to [ʍ].) In the Black Isle, the /hw/ (like /h/ generally) is traditionally not pronounced at all. Pronunciations of the [xʍ] or [kʍ] type are reflected in the former Scots spelling quh- (as in quhen for when, etc.).
In some dialects of Scots, the sequence /hw/ has merged with the voiceless labiodental fricative /f/. Thus whit ("what") is pronounced /fɪt/, whan ("when") becomes /fan/, and whine becomes /fain/ (a homophone of fine). This is also found in some Irish English with an Irish Gaelic substrate influence (something which has led to an interesting re-borrowing of whisk(e)y as Irish Gaelic fuisce, the word having originally entered English from Scottish Gaelic).
Phonologically, the distinct sound of ⟨wh⟩ is often analyzed as the consonant cluster /hw/, and it is transcribed so in most dictionaries. When it has the pronunciation [ʍ], however, it may also be analyzed as a single phoneme, /ʍ/.
- Based on www.ling.upenn.edu and the map at Labov, Ash, and Boberg (2006: 50).
- Labov, William; Sharon Ash; Charles Boberg (2006). The Atlas of North American English. Berlin: Mouton-de Gruyter. ISBN 3-11-016746-8.
- Wells, J.C., Accents of English, CUP 1982, pp. 228–229.
- Wells, 1982, p. 408.
- Minkova, Donka (2004). "Philology, linguistics, and the history of /hw/~/w/". In Anne Curzan; Kimberly Emmons (eds.). Studies in the History of the English language II: Unfolding Conversations. Berlin: Mouton de Gruyter. pp. 7–46. ISBN 3-11-018097-9.
- See for example the YouTube video Fox Broadcasting Company (February 10, 2012), Family Guy: Wheat Thins Commercial (HD), retrieved April 29, 2019
- Robert McColl Millar, Northern and Insular Scots, Edinburgh University Press (2007), p. 62.
- Barber, C.L., Early Modern English, Edinburgh University Press 1997, p. 18.
- A similar phenomenon to this has occurred in most varieties of the Maori language. |
The number system, or numeral system, is the system of naming or representing numbers. There are various types of number systems in maths, such as binary and decimal. This lesson covers the concepts of the numeral system, including its types, conversions and practice questions.
What is Number System in Maths?
A number system is defined as a system of writing to express numbers. It is the mathematical notation for representing numbers of a given set by using digits or other symbols in a consistent manner. It provides a unique representation of every number and represents the arithmetic and algebraic structure of the figures. It also allows us to perform arithmetic operations such as addition, subtraction, multiplication and division.
The value of any digit in a number can be determined by:
- The digit
- Its position in the number
- The base of the number system
Types of Number System
There are various types of number system in mathematics. The four most common number system types are:
- Decimal number system (Base- 10)
- Binary number system (Base- 2)
- Octal number system (Base-8)
- Hexadecimal number system (Base- 16)
Decimal Number System (Base 10 Number System)
Decimal number system has base 10 because it uses ten digits from 0 to 9. In the decimal number system, the positions successive to the left of the decimal point represent units, tens, hundreds, thousands and so on. This system is expressed in decimal numbers.
Every position shows a particular power of the base (10). For example, the decimal number 1457 consists of the digit 7 in the units position, 5 in the tens place, 4 in the hundreds position, and 1 in the thousands place whose value can be written as
(1×10³) + (4×10²) + (5×10¹) + (7×10⁰)
= (1×1000) + (4×100) + (5×10) + (7×1)
= 1000 + 400 + 50 + 7
= 1457
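The same place-value expansion can be computed programmatically; a minimal Python sketch mirroring the 1457 example above:

```python
# Place-value expansion of 1457, mirroring the worked example above
n = 1457
digits = [int(d) for d in str(n)]            # [1, 4, 5, 7]
places = range(len(digits) - 1, -1, -1)      # powers 3, 2, 1, 0
terms = [d * 10**p for d, p in zip(digits, places)]
print(terms)        # [1000, 400, 50, 7]
print(sum(terms))   # 1457
```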
Binary Number System (Base 2 Number System)
The base 2 number system is also known as the binary number system, in which only two digits exist: 0 and 1. Its radix (base) is 2. Numbers in this system are written as combinations of 0s and 1s; for example, 110101 is a binary number.
We can convert any system into binary and vice versa.
Example: Write (14)₁₀ as a binary number.
Divide by 2 repeatedly and record the remainders: 14 ÷ 2 = 7 remainder 0; 7 ÷ 2 = 3 remainder 1; 3 ÷ 2 = 1 remainder 1; 1 ÷ 2 = 0 remainder 1. Reading the remainders from last to first gives 1110.
∴ (14)₁₀ = (1110)₂
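The repeated-division procedure above can be written as a short Python function (a sketch of ours; Python's built-in bin() gives the same result):

```python
def to_binary(n: int) -> str:
    """Convert a non-negative integer to binary by repeated division by 2."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))   # the remainder is the next binary digit
        n //= 2
    return "".join(reversed(bits))

print(to_binary(14))   # '1110'
print(bin(14))         # '0b1110' (built-in check)
```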
Octal Number System (Base 8 Number System)
In the octal number system, the base is 8 and the digits 0 to 7 are used to represent numbers. Octal numbers are commonly used in computer applications. Converting an octal number to decimal uses the same positional method as the decimal expansion above, as the following example shows.
Example: Convert (215)₈ into decimal.
(215)₈ = 2 × 8² + 1 × 8¹ + 5 × 8⁰
= 2 × 64 + 1 × 8 + 5 × 1
= 128 + 8 + 5
= 141
So, (215)₈ = (141)₁₀.
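The same expansion can be checked in a few lines of Python (the built-in int() with an explicit base does the conversion directly):

```python
# Expanding (215) in base 8 positionally, as in the worked example above
digits = [2, 1, 5]
value = sum(d * 8**p for p, d in enumerate(reversed(digits)))
print(value)          # 141
print(int("215", 8))  # 141 (built-in check)
```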
Hexadecimal Number System (Base 16 Number System)
In the hexadecimal system, numbers are written or represented with base 16. The digits 0 to 9 are used just as in the decimal system, and the values ten to fifteen are represented by the letters A to F (A = 10, B = 11, C = 12, D = 13, E = 14, F = 15).
Number System Chart
The chart below lists the base value and the digits used in each of the common number systems.

| Number system | Base | Digits used |
|---|---|---|
| Binary | 2 | 0, 1 |
| Octal | 8 | 0 to 7 |
| Decimal | 10 | 0 to 9 |
| Hexadecimal | 16 | 0 to 9 and A to F |
Number System Conversion
Numbers can be represented in any of the number system categories like binary, decimal, hex, etc. Moreover, a number represented in one number system can easily be converted to any other. Check the detailed lesson on the conversions of number systems to learn how to convert numbers between decimal and binary, hexadecimal and binary, and octal and binary, using various examples.
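For readers who want to experiment, Python's built-in int(), bin(), oct() and hex() cover these conversions; a brief sketch with arbitrary example values:

```python
# Converting between number systems with Python built-ins
print(int("1110", 2))   # binary -> decimal: 14
print(bin(242))         # decimal -> binary:      '0b11110010'
print(oct(242))         # decimal -> octal:       '0o362'
print(hex(242))         # decimal -> hexadecimal: '0xf2'
print(int("F2", 16))    # hexadecimal -> decimal: 242
```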
Number System Questions
- Convert (242)₁₀ into hexadecimal. [Answer: (F2)₁₆]
- Convert (0.52)₁₀ into an octal number. [Answer: approximately (0.4121)₈]
- Subtract (1010)₂ from (1101)₂. [Answer: (0011)₂]
- Represent (5C6)₁₆ in decimal. [Answer: 1478]
- Represent the binary number (1.1)₂ in decimal. [Answer: 1.5]
Also Check: Binary Operations
Computer Numeral System (Number System in Computers)
When we type any letter or word, the computer translates them into numbers since computers can understand only numbers. A computer can understand only a few symbols called digits and these symbols describe different values depending on the position they hold in the number. In general, the binary number system is used in computers. However, the octal, decimal and hexadecimal systems are also used sometimes.
More Topics Related to Number Systems
- Number System for Class 9
- NCERT Solutions for Class 9 Maths Chapter 1 – Number Systems
- Important Questions Class 9 Maths Chapter 1 Number System
- Number System Class 9 Notes – Chapter 1
Frequently Asked Questions
What is Number System and its Types?
The number system is simply a system to represent or express numbers. There are various types of number systems and the most commonly used ones are decimal number system, binary number system, octal number system, and hexadecimal number system.
Why is the Number System Important?
The number system helps us represent numbers using a small set of symbols. Computers, in general, use the binary digits 0 and 1 to keep calculations simple and to minimise the amount of circuitry required, which reduces space, energy consumption and cost.
What is Base 1 Number System Called?
Base 1 number system is called the unary numeral system and is the simplest numeral system to represent natural numbers. |
From the 1880s until World War II (1939-1945), France governed Vietnam as part of French Indochina, which also included Cambodia and Laos. The country was under the nominal control of an emperor, Bao Dai. In 1940 Japanese troops invaded and occupied French Indochina. In December of that year, Vietnamese nationalists established the League for the Independence of Vietnam, or Viet Minh, seeing the turmoil of the war as an opportunity for resistance to French colonial rule.
The United States demanded that Japan leave Indochina, warning of military action. The Viet Minh began guerrilla warfare against Japan and entered an effective alliance with the United States. Viet Minh troops rescued downed U.S. pilots, located Japanese prison camps, helped U.S. prisoners to escape, and provided valuable intelligence to the Office of Strategic Services (OSS), the forerunner of the Central Intelligence Agency (CIA). Ho Chi Minh, the principal leader of the Viet Minh, was even made a special OSS agent.
When the Japanese signed their formal surrender on September 2, 1945, Ho used the occasion to declare the independence of Vietnam, which he called the Democratic Republic of Vietnam (DRV). Emperor Bao Dai had abdicated the throne a week earlier. The French, however, refused to acknowledge Vietnam’s independence, and later that year drove the Viet Minh into the north of the country.
Ho wrote eight letters to U.S. president Harry Truman, imploring him to recognize Vietnam’s independence. Many OSS agents informed the U.S. administration that despite being a Communist, Ho Chi Minh was not a puppet of the Communist Union of Soviet Socialist Republics (USSR) and that he could potentially become a valued ally in Asia. Tensions between the United States and the USSR had mounted after World War II, resulting in the Cold War.
The foreign policy of the United States during the Cold War was driven by a fear of the spread of Communism. Eastern Europe had fallen under the domination of the Communist USSR, and China was ruled by Communists. United States policymakers felt they could not afford to lose Southeast Asia as well to the Communists. The United States therefore condemned Ho Chi Minh as an agent of international Communism and offered to assist the French in recapturing Vietnam.
In 1946 United States warships ferried elite French troops to Vietnam where they quickly regained control of the major cities, including Hanoi, Haiphong, Da Nang, Hue, and Saigon (now Ho Chi Minh City), while the Viet Minh controlled the countryside. The Viet Minh had only 2000 troops at the time Vietnam’s independence was declared, but recruiting increased after the arrival of French troops. By the late 1940s, the Viet Minh had hundreds of thousands of soldiers and were fighting the French to a draw. In 1949 the French set up a government to rival Ho Chi Minh’s, installing Bao Dai as head of state.
In May 1954 the Viet Minh mounted a massive assault on the French fortress at Dien Bien Phu, in northwestern Vietnam. The Battle of Dien Bien Phu resulted in perhaps the most humiliating defeat in French military history. Already tired of the war, the French public forced their government to reach a peace agreement at the Geneva Conference.
France asked the other world powers to help draw up a plan for French withdrawal from the region and for the future of Vietnam. Meeting in Geneva, Switzerland, from May 8 to July 21, 1954, diplomats from France, the United Kingdom, the USSR, China, and the United States, as well as representatives from Vietnam, Laos, and Cambodia, drafted a set of agreements called the Geneva Accords. These agreements provided for the withdrawal of French troops to the south of Vietnam until they could be safely removed from the country. Viet Minh forces moved into the north. Vietnam was temporarily divided at the 17th parallel to allow for a cooling-off period and for warring factions among the Vietnamese to return to their native regions. Ho Chi Minh maintained control of North Vietnam, or the DRV, while Emperor Bao Dai remained head of South Vietnam.
Elections were to be held in 1956 throughout the north and south and to be supervised by an International Control Commission that had been appointed at Geneva and was made up of representatives from Canada, Poland, and India. Following these elections, Vietnam was to be reunited under the government chosen by popular vote. The United States refused to sign the accords, because it did not want to allow the possibility of Communist control over Vietnam. The U.S. government moved to establish the Southeast Asia Treaty Organization (SEATO), a regional alliance that extended protection to South Vietnam, Cambodia, and Laos in case of Communist “subversion.” SEATO, which came into force in 1955, became the mechanism by which Washington justified its support for South Vietnam; this support eventually became direct involvement of U.S. troops.
Also in 1955, the United States picked Ngo Dinh Diem to replace Bao Dai as head of the anti-Communist regime in South Vietnam. With U.S. encouragement, Diem refused to participate in the planned national elections, which Ho Chi Minh and the Lao Dong, or Workers’ Party, were favored to win. Instead, Diem held elections only in South Vietnam, an action that violated the Geneva Accords.
Diem won the elections with 98.2 percent of the vote, but many historians believe these elections were rigged, since 200,000 more people voted in Saigon than were registered. Diem then declared South Vietnam to be an independent nation called the Republic of Vietnam (RVN), with Saigon as its capital. Vietnamese Communists and many non-Communist Vietnamese nationalists saw the creation of the RVN as an effort by the United States to interfere with the independence promised at Geneva.
III THE BEGINNING OF THE WAR:
The repressive measures of the Diem government eventually led to increasingly organized opposition within South Vietnam. Diem’s government represented a minority of Vietnamese who were mostly businessmen, Roman Catholics, large landowners, and others who had fought with the French against the Viet Minh. The United States initially backed the South Vietnamese government with military advisers and financial assistance, but more involvement was needed to keep it from collapsing. The Gulf of Tonkin Resolution eventually gave President Lyndon B. Johnson permission to escalate the war in Vietnam.
A Opposition in South Vietnam
When Vietnam was divided in 1954, many Viet Minh who had been born in the southern part of the country returned to their native villages to await the 1956 elections and the reunification of their nation. When the elections did not take place as planned, these Viet Minh immediately formed the core of opposition to Diem’s government and sought its overthrow. The Viet Minh were greatly aided in their efforts to organize resistance in the countryside by Diem’s own policies, which alienated many peasants.
Beginning in 1955, the United States created the Army of the Republic of Vietnam (ARVN) in South Vietnam. Using these troops, Diem took land away from peasants and returned it to former landlords, reversing the land redistribution program implemented by the Viet Minh. He also forcibly moved many villagers from their ancestral lands to controlled settlements in an attempt to prevent Communist activity, and he drafted their sons into the ARVN.
Diem sought to discredit the Viet Minh by contemptuously referring to them as “Viet Cong” (the Vietnamese equivalent of calling them “Commies”), yet their influence continued to grow. Most southern Viet Minh were members of the Lao Dong and were still committed to its program of national liberation, reunification of Vietnam, and reconstruction of society along socialist principles. By the late 1950s they were anxious to begin full-scale armed struggle against Diem but were held in check by the northern branch of the party, which feared that this would invite the entry of U.S. armed forces. By 1959, however, opposition to Diem was so widespread in rural areas that the southern Communists formed the National Liberation Front (NLF), and in 1960 the North Vietnamese government gave its formal sanction to the organization. The NLF began to train and equip guerrillas, known as the People’s Liberation Armed Forces (PLAF).
Diem’s support was concentrated mainly in the cities. Although he had been a nationalist opposed to French rule, he welcomed into his government those Vietnamese who had collaborated with the French, and many of these became ARVN officers. Catholics were a minority throughout Vietnam, amounting to no more than 10 percent of the population, but they predominated in government positions because Diem himself was Catholic. Between 1954 and 1955, operatives paid by the CIA spread rumors in northern Vietnam that Communists were going to launch a persecution of Catholics, which caused nearly 1 million Catholics to flee to the south. Their resettlement uprooted Buddhists who already deeply resented Diem’s rule because of his severe discrimination against them.
In May 1963 Buddhists began a series of demonstrations against Diem, and the demonstrators were fired on by police. At least seven Buddhist monks set themselves on fire to protest the repression. Diem dismissed these suicides as publicity stunts and promptly arrested 1400 monks. He then arrested thousands of high school and grade school students who were involved in protests against the government. After this, Diem was viewed as an embarrassment both by the United States and by many of his own generals.
The Saigon government’s war against the NLF was also going badly. In January 1963 an ARVN force of 2000 encountered a group of 350 NLF soldiers at Ap Bac, a village south of Saigon in the Mekong River Delta. The ARVN troops were equipped with jet fighters, helicopters, and armored personnel carriers, while the NLF forces had only small arms. Nonetheless, 61 ARVN soldiers were killed, as were three U.S. military advisers. By contrast, the NLF forces lost only 12 men. Some U.S. military advisers began to report that Saigon was losing the war, but the official military and embassy press officers reported Ap Bac as a significant ARVN victory. Despite this official account, a handful of U.S. journalists began to report pessimistically about the future of U.S. involvement in South Vietnam, which led to increasing public concern.
President John F. Kennedy still believed that the ARVN could become effective. Some of his advisers advocated the commitment of U.S. combat forces, but Kennedy decided to try to increase support for the ARVN among the people of Vietnam through counterinsurgency. United States Special Forces (Green Berets) would work with ARVN troops directly in the villages in an effort to match NLF political organizing and to win over the South Vietnamese people.
To support the U.S. effort, the Diem government developed a “strategic hamlet” program that was essentially an extension of Diem’s earlier relocation practices. Aimed at cutting the links between villagers and the NLF, the program removed peasants from their traditional villages, often at gunpoint, and resettled them in new hamlets fortified to keep the NLF out. Administration was left up to Diem’s brother Nhu, a corrupt official who charged villagers for building materials that had been donated by the United States. In many cases peasants were forbidden to leave the hamlets, but many of the young men quickly left anyway and joined the NLF. Young men who were drafted into the ARVN often also worked secretly for the NLF. The Kennedy administration concluded that Diem’s policies were alienating the peasantry and contributing significantly to NLF recruitment.
The number of U.S. advisers assigned to the ARVN rose steadily. In January 1961, when Kennedy took office, there were 800 U.S. advisers in Vietnam; by November 1963 there were 16,700. American air power was assigned to support ARVN operations; this included the aerial spraying of herbicides such as Agent Orange, which was intended to deprive the NLF of food and jungle cover. Despite these measures, the ARVN continued to lose ground.
As the military situation deteriorated in South Vietnam, the United States sought to blame it on Diem’s incompetence and hoped that changes in his administration would improve the situation. Nhu’s corruption became a principal focus, and Diem was urged to remove his brother. Many in Diem’s military were especially dissatisfied and hoped for increased U.S. aid. General Duong Van Minh informed the CIA and U.S. ambassador Henry Cabot Lodge of a plot to conduct a coup d’état against Diem. After much discussion, Kennedy approved support for the coup. He was reportedly dismayed, however, when the coup resulted in the murder of both Diem and Nhu on November 1, 1963. Far from stabilizing South Vietnam, the assassination of Diem ushered in ten successive governments within 18 months. Meanwhile, the CIA was forced to admit that the strength of the NLF was continuing to grow.
B The Gulf of Tonkin Resolution
Succeeding to the presidency after Kennedy’s assassination on November 22, 1963, Lyndon B. Johnson felt he had to take a forceful stance on Vietnam so that other Communist countries would not think that the United States lacked resolve. Kennedy had begun to consider the possibility of withdrawal from Vietnam and had even ordered the removal of 1000 advisers shortly before he was assassinated, but Johnson increased the number of U.S. advisers to 27,000 by mid-1964. Even though intelligence reports clearly stated that most of the support for the NLF came from the south, Johnson, like his predecessors, continued to insist that North Vietnam was orchestrating the southern rebellion. He was determined that he would not be held responsible for allowing Vietnam to fall to the Communists.
Johnson believed that the key to success in the war in South Vietnam was to frighten North Vietnam’s leaders with the possibility of full-scale U.S. military intervention. In January 1964 he approved top-secret, covert attacks against North Vietnamese territory, including commando raids against bridges, railways, and coastal installations. Johnson also ordered the U.S. Navy to conduct surveillance missions along the North Vietnamese coast. He increased the secret bombing of territory in Laos along the Ho Chi Minh Trail, a growing network of paths and roads used by the NLF and the North Vietnamese to transport supplies into South Vietnam. Hanoi concluded that the United States was preparing to occupy South Vietnam and indicated that it, too, was preparing for full-scale war.
On August 2, 1964, North Vietnamese coastal gunboats fired on the destroyer USS Maddox, which had penetrated North Vietnam’s territorial boundaries in the Gulf of Tonkin. Johnson ordered more ships to the area, and on August 4 both the Maddox and the USS Turner Joy reported that North Vietnamese patrol boats had fired on them. Johnson then ordered the first air strikes against North Vietnamese territory and went on television to seek approval from the U.S. public. (Subsequent congressional investigations would conclude that the August 4 attack almost certainly had never occurred.) The U.S. Congress overwhelmingly passed the Gulf of Tonkin Resolution, which effectively handed over war-making powers to Johnson until such time as "peace and security" had returned to Vietnam.
After the Gulf of Tonkin incident Johnson steadily escalated U.S. bombing of North Vietnam, which began to dispatch well-trained units of its People’s Army of Vietnam (PAVN) into the south. The NLF guerrillas coordinated their attacks with PAVN forces. Between February 7 and February 10, 1965, the NLF launched surprise attacks on the U.S. air base at Pleiku, killing 8 Americans, wounding 126, and destroying 10 aircraft; they struck again at Qui Nhon, killing 23 U.S. servicemen and wounding 21.
Johnson responded by bombing Hanoi at a time when Soviet premier Aleksey Kosygin was visiting, thus pushing the USSR closer to North Vietnam and ensuring future Soviet arms deliveries to Southeast Asia. Johnson’s advisers, chiefly Defense Secretary Robert McNamara and National Security Adviser McGeorge Bundy, declared that a full-scale air war against North Vietnam would depress the morale of the NLF. The bombing did just the opposite, however. The inability of the ARVN to protect U.S. air bases led Johnson’s senior planners to the consensus that U.S. combat forces would be required. On March 8, 1965, 3500 U.S. Marines landed at Da Nang. By the end of April, 56,000 other combat troops had joined them; by June the number had risen to 74,000.
IV ESCALATED UNITED STATES INVOLVEMENT
When some of the soldiers of the U.S. 9th Marine Regiment landed in Da Nang in March 1965, their orders were to protect the U.S. air base, but the mission was quickly escalated to include search-and-destroy patrols of the area around the base. This corresponded in miniature to the larger strategy of General William Westmoreland. Westmoreland, who took over the Military Assistance Command in Vietnam (MACV) in 1964, advocated establishing a large American force and then unleashing it in big sweeps. His strategy was that of attrition: eliminating or wearing down the enemy by inflicting the highest death toll possible. There were 80,000 U.S. troops in Vietnam by the end of 1965; by 1969 a peak of 543,000 troops would be reached.
Having easily pushed aside the ARVN, both the North Vietnamese and the NLF had anticipated the U.S. escalation. With full-scale movement of U.S. troops onto South Vietnamese territory, the Communists claimed that the Saigon regime had become a puppet, not unlike the colonial collaborators with the French. Both the North Vietnamese and NLF appealed to the nationalism of the Vietnamese to rise up and drive this new foreign army from their land.
A DRV and NLF Strategy
The strategy developed against the United States was the result of intense debate both within the Lao Dong in the north, and between the northerners and the NLF. Truong Chinh, the leading southern military figure, argued that the southern Vietnamese must liberate themselves; Le Duan, secretary general of the Lao Dong, insisted that Vietnam was one nation and therefore dependent on all Vietnamese for its independence and reunification. Ho Chi Minh, revered widely throughout Vietnam as the father of independence, successfully appealed for unity. The Central Committee Directorate for the South (also known as the Central Office for South Vietnam, or COSVN), which was composed of DRV and NLF representatives, was then able to coordinate a unified strategy.
After the United States initiated large-scale bombing against the DRV in 1964, in the wake of the Gulf of Tonkin incident, Hanoi dispatched the first unit of northern-born regular soldiers to the south. Previously, southern-born Viet Minh, known as regroupees, had returned to their native regions and joined NLF guerrilla units. Now PAVN regulars, commanded by generals who had been born in the south, began to set up bases in the Central Highlands of South Vietnam in order to gain strategic position.
Unable to cross the Demilitarized Zone (DMZ) at the 17th parallel separating North from South Vietnam, PAVN regulars moved into South Vietnam along the Ho Chi Minh Trail through Laos and Cambodia. In use since 1957, the trail was originally a series of footpaths; by the late 1960s it would become a network of paved highways that enabled the motor transport of people and equipment. The NLF guerrillas and North Vietnamese troops were poorly armed compared to the Americans, so once they were in South Vietnam they avoided open combat. Instead they developed hit-and-run tactics designed to cause steady casualties among the U.S. troops and to wear down popular support for the war in the United States.
B United States Strategy
In June 1964 retired general Maxwell Taylor replaced Henry Cabot Lodge as ambassador to South Vietnam. A former chairman of the Joint Chiefs of Staff, the military advisory group to the president, Taylor at first opposed the introduction of American combat troops, believing that this would make the ARVN quit fighting altogether. By 1965 he agreed to the request of General Westmoreland for combat forces. Taylor initially advocated an enclave strategy, where U.S. forces would seek to preserve areas already considered to be under Saigon’s control. This quickly proved impossible, since NLF strength was considerable virtually everywhere in South Vietnam.
In October 1965 the newly arrived 1st Cavalry Division of the U.S. Army fought one of the largest battles of the Vietnam War in the Ia Drang Valley, inflicting a serious defeat on North Vietnamese forces. The North Vietnamese and NLF forces changed their tactics as a result of the battle. From then on both would fight at times of their choosing, hitting rapidly, with surprise if possible, and then withdrawing just as quickly to avoid the impact of American firepower. The success of the American campaign in the Ia Drang Valley convinced Westmoreland that his strategy of attrition was the key to U.S. victory. He ordered the largest search-and-destroy operations of the war in the “Iron Triangle,” the Communist stronghold northeast of Saigon. This operation was intended to find and destroy North Vietnam and NLF military headquarters, but the campaign failed to wipe out Communist forces from the area.
By 1967 the ground war had reached a stalemate, which led Johnson and McNamara to increase the ferocity of the air war. The Joint Chiefs of Staff had been pressing for this for some time, but there was already some indication that intensified bombing would not produce the desired results. In 1966 the bombing of North Vietnam’s oil facilities had destroyed 70 percent of their fuel reserves, but the DRV’s ability to wage the war had not been affected.
Planners wished to avoid populated areas, but when 150,000 sorties per year were being flown by U.S. warplanes, civilian casualties were inevitable. These casualties provoked revulsion both in the United States and internationally. In 1967 the chairman of the Joint Chiefs of Staff, General Earle Wheeler, declared that no more “major military targets” were left. Unable to widen the bombing to population centers for fear of Chinese and Soviet reactions in support of North Vietnam, the U.S. Department of Defense had to admit stalemate in the air war as well. The damage that had already been inflicted on Vietnam’s population was enormous.
C The Tet Offensive and Beyond
In 1967 North Vietnam and the NLF decided the time had come to mount an all-out offensive aimed at inflicting serious losses on both the ARVN and U.S. forces. They planned the Tet Offensive with the hope that this would significantly affect the public mood in the United States. In December 1967 North Vietnamese troops attacked and surrounded the U.S. Marine base at Khe Sanh, placing it under siege. Westmoreland ordered the outpost held at all costs. To prevent the Communists from overrunning the base, about 50,000 U.S. Marines and Army troops were called into the area, thus weakening positions further south.
This concentration of American troops in one spot was exactly what the COSVN strategists had hoped would happen. The main thrust of the Tet Offensive then began on January 31, 1968, at the start of Tet, or the Vietnamese lunar new year celebration, when a lull in fighting traditionally took place. Most ARVN troops had gone home on leave, and U.S. troops were on stand-down in many areas. Over 85,000 NLF soldiers simultaneously struck at almost every major city and provincial capital across South Vietnam, sending their defenders reeling. The U.S. Embassy in Saigon, previously thought to be invulnerable, was taken over by the NLF, and held for eight hours before U.S. forces could retake the complex. It took three weeks for U.S. troops to dislodge 1000 NLF fighters from Saigon.
During the Tet Offensive, the imperial capital of Hue witnessed the bloodiest fighting of the entire war. South Vietnamese were assassinated by Communists for collaborating with Americans; then when the ARVN returned, NLF sympathizers were murdered. United States Marines and paratroopers were ordered to go from house to house to find North Vietnamese and NLF soldiers. Virtually indiscriminate shelling was what killed most civilians, however, and the architectural treasures of Hue were laid to waste. More than 100,000 residents of the city were left homeless.
The Tet Offensive as a whole lasted into the fall of 1968, and when it was over the North Vietnamese and the NLF had suffered acute losses. The U.S. Department of Defense estimated that a total of 45,000 North Vietnamese and NLF soldiers had been killed, most of them NLF fighters. Although it was covered up for more than a year, one horrifying event during the Tet Offensive would indelibly affect America’s psyche. In March 1968 elements of the U.S. Army’s Americal Division wiped out an entire hamlet called My Lai, killing 500 unarmed civilians, mostly women and children.
After Tet, Westmoreland said that the enemy was almost conquered and requested 206,000 more troops to finish the job. Told by succeeding administrations since 1955 that there was “light at the end of the tunnel,” that victory in Vietnam was near, the American public had reached a psychological breaking point. The success of the NLF in coordinating the Tet Offensive demonstrated both how deeply rooted the Communist resistance was and how costly it would be for the United States to remain in Vietnam. After Tet a majority of Americans wanted some closure to the war, with some favoring an immediate withdrawal while others held out for a negotiated peace. President Johnson rejected Westmoreland’s request for more troops and replaced him as the commander of U.S. forces in Vietnam with Westmoreland’s deputy, General Creighton Abrams. Johnson himself decided not to seek reelection in 1968. Republican Richard Nixon ran for the presidency declaring that he would bring “peace with honor” if elected.
V ENDING THE WAR: 1969-1975
Promising an end to the war in Vietnam, Richard Nixon won a narrow victory in the election of 1968. Slightly more than 30,000 young Americans had been killed in the war when Nixon took office in January 1969. The new president retained his predecessor’s goal of a non-Communist South Vietnam, however, and this could not be ensured without continuing the war. Nixon’s most pressing problem was how to make peace and war at the same time. His answer was a policy called “Vietnamization.” Under this policy, he would withdraw American troops and the South Vietnamese army would take over the fighting.
A Nixon’s Vietnamization
During his campaign for the presidency, Nixon announced that he had a secret plan to end the war. In July 1969, after he had become president, he issued what came to be known as the Nixon Doctrine, which stated that U.S. troops would no longer be directly involved in Asian wars. He ordered the withdrawal of 25,000 troops, to be followed by more, and he lowered draft calls. On the other hand, Nixon also stepped up the Phoenix Program, a secret CIA operation that resulted in the assassination of 20,000 suspected NLF guerrillas, many of whom were innocent civilians. The administration increased funding for the ARVN and intensified the bombing of North Vietnam. Nixon reasoned that to keep the Communists at bay during the U.S. withdrawal, it was also necessary to bomb their sanctuaries in Cambodia and to increase air strikes against Laos.
The DRV leadership, however, remained committed to the expulsion of all U.S. troops from Vietnam and to the overthrow of the Saigon government. As U.S. troop strength diminished, Hanoi’s leaders planned their final offensive. While the ARVN had increased in size and was better armed than it had been in 1965, it could not hold its own without the help of heavy U.S. air power.
B Failed Peace Negotiations
Johnson had initiated peace negotiations after the first phase of the Tet Offensive. Beginning in Paris on May 13, 1968, the talks rapidly broke down over disagreements about the status of the NLF, which the Saigon government refused to recognize. In October 1968, just before the U.S. presidential elections, candidate Hubert Humphrey called for a negotiated settlement, but Nixon secretly persuaded South Vietnam’s President Nguyen Van Thieu to hold out for better terms under a Nixon administration. Stating that he would never negotiate with Communists, Thieu caused the Paris talks to collapse and contributed to Humphrey’s defeat as well.
Nixon thus inherited the Paris peace talks, but they continued to remain stalled as each faction refused to alter its position. Hanoi insisted on the withdrawal of all U.S. forces, the removal of the Saigon government, and its replacement through free elections that would include the Provisional Revolutionary Government (PRG), which the NLF created in June 1969 to take over its governmental role in the south and serve as a counterpart to the Saigon government. The United States, on the other hand, insisted that all North Vietnamese troops be withdrawn.
C Invasion of Cambodia
In March 1969 Nixon ordered the secret bombing of Cambodia. Intended to wipe out North Vietnamese and NLF base camps along the border with South Vietnam in order to provide time for the buildup of the ARVN, the campaign failed utterly. The secret bombing lasted four years and caused great destruction and upheaval in Cambodia, a land of farmers that had not known war in centuries. Code-named Operation Menu, the bombing was more intense than that carried out over Vietnam. An estimated 100,000 peasants died in the bombing, while 2 million people were left homeless.
In April 1970 Nixon ordered U.S. troops into Cambodia. He argued that this was necessary to protect the security of American units then in the process of withdrawing from Vietnam, but he also wanted to buy security for the Saigon regime. When Nixon announced the invasion, U.S. college campuses erupted in protest, and one-third of them shut down due to student walkouts. At Kent State University in Ohio four students were killed by panicky national guardsmen who had been called up to prevent rioting. Two days later, two students were killed at Jackson State College in Mississippi. Congress proceeded to repeal the Gulf of Tonkin Resolution. Congress also passed the Cooper-Church Amendment, which specifically forbade the use of U.S. troops outside South Vietnam. The measure did not expressly forbid bombing, however, so Nixon continued the air strikes on Cambodia until 1973.
Three months after committing U.S. forces, Nixon ordered them to withdraw from Cambodia. The combined effects of the bombing and the invasion, however, had completely disrupted Cambodian life, driving millions of peasants from their ancestral lands. The right-wing government then in power in Cambodia was supported by the United States, and the government was blamed for allowing the bombing to occur. Farmers who had never concerned themselves with politics now flooded to the Communist opposition group, the Khmer Rouge. After a gruesome civil war, the Khmer Rouge took power in 1975 and became one of the bloodiest regimes of the 20th century.
D Campaign in Laos
The United States began conducting secret bombing of Laos in 1964, targeting both the North Vietnamese forces along sections of the Ho Chi Minh Trail and the Communist Pathet Lao guerrillas, who controlled the northern part of the country. Roughly 150,000 tons of bombs were dropped on the Plain of Jars in northern Laos between 1964 and 1969. By 1970 at least one-quarter of the entire population of Laos were refugees, and about 750,000 Lao had been killed.
Prohibited by the Cooper-Church Amendment from deploying U.S. troops and anxious to demonstrate the fighting prowess of the improved ARVN, Nixon took the advice of General Creighton Abrams and attempted to cut vital Communist supply lines along the Ho Chi Minh Trail. On February 8, 1971, 21,000 ARVN troops, supported by American B-52 bombers, invaded Laos. Intended to disrupt any North Vietnamese and NLF plans for offensives and to test the strength of the ARVN, this operation was as much a failure as the Cambodian invasion. Abrams claimed 14,000 North Vietnamese casualties, but over 9000 ARVN soldiers were killed or wounded, while the rest were routed and expelled from Laos.
The success of Vietnamization seemed highly doubtful, since the Communist forces showed that the new ARVN could be defeated. Instead of inhibiting the Communist Pathet Lao, the U.S. attacks on Laos promoted their rise. In 1958 the Pathet Lao had the support of one-third of the population; by 1973 a majority denied the legitimacy of the U.S.-supported Royal Lao Government. By 1975 a Communist government was established in Laos.
E Bombing of North Vietnam
In the spring of 1972, with only 6000 U.S. combat troops remaining in South Vietnam, the DRV leadership decided the time had come to crush the ARVN. On March 30 over 30,000 North Vietnamese troops crossed the Demilitarized Zone, along with another 150,000 PRG fighters, and attacked Quang Trí Province, easily scattering ARVN defenders. The attack, known as the Easter Offensive, could not have come at a worse time for Nixon and his National Security Adviser Henry Kissinger. A military defeat of the ARVN would leave the United States in a weak position at the Paris peace talks and would compromise its strategic position globally.
Risking the success of the upcoming Moscow summit, Nixon unleashed the first sustained bombing of North Vietnam since 1969 and moved quickly to mine the harbor of Haiphong. Between April and October 1972 the United States conducted 41,000 sorties over North Vietnam, especially targeting Quang Trí. North Vietnam’s Easter Offensive was crushed. At least 100,000 Communist troops were killed. The ailing Vo Nguyen Giap, founder of North Vietnam’s army, was forced into retirement and succeeded by Van Tien Dung, who counseled the renewal of negotiations with the United States.
Further negotiations were held in Paris between Kissinger and Le Duc Tho, who represented North Vietnam. Seeking an end to the war before the U.S. presidential elections in November, Kissinger made remarkable concessions. The United States would withdraw completely, while accepting the presence of 14 North Vietnamese divisions in South Vietnam and recognizing the political legitimacy of the PRG. Hanoi would drop its insistence on the resignation of Nguyen Van Thieu, who had become president of South Vietnam in 1967. Kissinger announced on October 27 that “peace was at hand.” Thieu, however, accused the United States of selling him out, and Nixon refused to sign the agreement.
After the 1972 elections, Kissinger attempted to revise the agreements he had already made. North Vietnam refused to consider these revisions, and Kissinger threatened to renew air assaults against North Vietnam unless the new conditions were met. Nixon then unleashed the final and most intense bombing of the war, the Christmas bombing of Hanoi and Haiphong.
F United States Withdrawal
While many U.S. officials were convinced that Hanoi was bombed back to the negotiating table, the final treaty changed nothing significant from what had already been agreed to by Kissinger and Tho in October. Nixon’s Christmas Bombing was intended to warn Hanoi that American air power remained a threat, and he secretly promised Thieu that the United States would punish North Vietnam should they violate the terms of the final settlement. Nixon’s political fortunes were about to decline, however. Although he had won reelection by a landslide in November 1972, he was suffering from revelations about the Watergate scandal. The president’s campaign officials had orchestrated a burglary at the Democratic National Committee headquarters, and Nixon had attempted to cover it up by lying to the American people about his role.
The president made new enemies when the secret bombing of Cambodia was revealed at last. Congress was threatening a bill of impeachment and in early January 1973 indicated it would cut off all funding for operations in Indochina once U.S. forces had withdrawn. In mid-January Nixon halted all military actions against North Vietnam.
On January 27, 1973, all four parties to the Vietnam conflict—the United States, South Vietnam, the PRG, and North Vietnam—signed the Treaty of Paris. The final terms provided for the release of all American prisoners of war from North Vietnam; the withdrawal of all U.S. forces from South Vietnam; the end of all foreign military operations in Laos and Cambodia; a cease-fire between North and South Vietnam; the formation of a National Council of Reconciliation to help South Vietnam form a new government; and continued U.S. military and economic aid to South Vietnam. In a secret addition to the treaty Nixon also promised $3.25 billion in reparations for the reconstruction of ravaged North Vietnam, an agreement that Congress ultimately refused to uphold.
G Cease-fire Aftermath
On March 29, 1973, the last U.S. troops left Vietnam. Thieu quickly showed that he had no desire to honor the terms of the Paris peace treaty, which he had signed under duress. He issued an order to the ARVN: “If Communists come into your village, shoot them in the head.” Thieu immediately began offensives against PRG villages, in open violation of the treaty. Thieu believed the continued presence of North Vietnamese soldiers on South Vietnamese soil threatened South Vietnam’s existence.
North Vietnam and the PRG refrained from taking any action against the ARVN’s provocation, keeping carefully to the treaty terms (except for maintaining troops in Laos and Cambodia). They insisted that both Saigon and the United States also abide by the treaty. Not wishing to be caught unprepared by treaty violations, the Communists concentrated on logistics and infrastructure by building roads to accommodate the movement of troops.
Meanwhile, the withdrawal of U.S. personnel had resulted in a collapsing economy throughout South Vietnam. Millions had depended on the money spent by Americans in Vietnam. Thieu’s government was ill-equipped to treat the mass unemployment and deepening poverty that resulted from the U.S. withdrawal. The ARVN still received $700 million from the U.S. Congress and was twice the size of the Communist forces, but morale was collapsing. Over 200,000 ARVN soldiers deserted in 1974 in order to be with their families.
Having no faith that the Paris treaty would be implemented, the North Vietnamese set 1975 as the year to mount their final offensive. They believed it would take at least two years; the rapid collapse of the ARVN was therefore a surprise even to them. After the initial attack by the North Vietnamese in the Central Highlands northeast of Saigon on January 7, the ARVN immediately began to fall apart. On March 25 the ancient imperial city of Hue fell; then on March 29, Da Nang, the former U.S. Marine headquarters, was overtaken. On April 20 Thieu resigned, accusing the United States of betrayal. His successor was Duong Van Minh, who had been among those who overthrew Diem in 1963. On April 30 Minh issued his unconditional surrender to the PRG. Almost 30 years after Ho Chi Minh’s declaration of independence, Vietnam was finally unified.
VI THE TROOPS
In the United States, military conscription, or the draft, had been in place virtually without interruption since the end of World War II, but volunteers generally predominated in combat units. When the first U.S. combat troops arrived in Vietnam in 1965 they were composed mainly of volunteers. The Air Force, Navy, and Marines were volunteer units. The escalating war, however, required more draftees. In 1965 about 20,000 men per month were inducted into the military, most into the Army; by 1968 about 40,000 young men were drafted each month to meet increased troop levels ordered for Vietnam. The conscript army was largely composed of teenagers; the average age of a U.S. soldier in Vietnam was 19.
Those conscripted were mostly youths from the poorer section of American society, who did not have access to the exemptions that were available to their more privileged fellow citizens. Of the numerous exemptions from military service that Congress had written into law, the most far-reaching were student deferments. The draft laws effectively enabled most upper- and middle-class youngsters to avoid military service. By 1968 it was increasingly evident that the draft system was deeply unfair and discriminatory. Responding to popular pressures, the Selective Service, the agency that administered the draft, instituted a lottery system, which might have produced an army more representative of society at large. Student deferments were kept by Nixon until 1971, however, so as not to alienate middle-class voters. By then his Vietnamization policy had lowered monthly draft calls, and physical exemptions were still easily obtained by the privileged, especially from draft boards in affluent communities.
Both North and South Vietnam also conscripted troops. Revolutionary nationalist ideology was quite strong in the north, and the DRV was able to create an army with well-disciplined, highly motivated troops. It became the fourth-largest army in the world and one of the most experienced. South Vietnam also drafted soldiers, beginning in 1955 when the ARVN was created. Most ARVN conscripts, however, had little personal motivation to fight other than a paycheck. In 1965, 113,000 deserted from the ARVN; by 1972, 20,000 per month were slipping away from the war.
Although equipped with high-tech weaponry that far exceeded the fire power available to its enemies, the ARVN was poorly led and failed most of the time to check its opponents’ actions. United States troops came to dislike and mistrust many ARVN units, accusing them of abandoning the battlefield. The ARVN also suffered from internal corruption. Numerous commanders would claim nonexistent troopers and then pocket the pay intended for those troopers; this practice made some units dangerously understaffed. Many ARVN soldiers were secretly working for the NLF, providing information that undermined the U.S. effort. At various times, battles verging on civil war broke out between troops within the ARVN. Internal disunity on this scale was never an issue among the North Vietnamese troops or the NLF guerrillas.
The armed forces of the United States serving in Vietnam began to suffer from internal dissension and low morale as well. Racism against the Vietnamese troubled many soldiers, particularly those who had experienced racism directed against themselves in the United States. In Vietnam, Americans routinely referred to all Vietnamese, both friend and foe, as “gooks.” This process of dehumanizing the Vietnamese led to many atrocities, including the massacre at My Lai, and it provoked profound misgivings among U.S. troops. The injustice of the Selective Service system also turned soldiers against the war. By 1968 coffeehouses run by soldiers had sprung up at 26 U.S. bases, serving as forums for antiwar activities. At least 250 underground antiwar newspapers were published by active-duty soldiers.
Soldiers sometimes took out their frustrations and resentments on those officers who put their lives at risk. The term “fragging” came to be used to describe soldiers attacking their officers, often tossing fragmentation grenades into the officers’ sleeping quarters. According to one official account, 382 such fragging incidents occurred between 1969 and 1971. Other sources estimate a higher number of fraggings, since many went unreported.
By 1971, as Vietnamization proceeded with U.S. troop withdrawals, no soldier wished to be the last one killed in Vietnam. Consequently, entire units refused to go out on combat patrols, disobeying direct orders. The desertion rate in the Army peaked at 73.5 per 1000 soldiers in 1971, noticeably higher than the peak desertion rates reached during the Korean War and World War II. Another half million men received less than honorable discharges. Vietnam Veterans Against the War was organized in the United States in 1967. By the 1970s the participation of Vietnam veterans in protests against the war in the United States had an important influence on the antiwar movement.
VII RESPONSE TO THE WAR IN THE UNITED STATES
Opposition to the war in the United States developed immediately after the Gulf of Tonkin Resolution, chiefly among traditional pacifists, such as the American Friends Service Committee and antinuclear activists. Early protests were organized around questions about the morality of U.S. military involvement in Vietnam. Virtually every key event of the war, including the Tet Offensive and the invasion of Cambodia, contributed to a steady rise in antiwar sentiment. The revelation of the My Lai Massacre in 1969 caused a dramatic turn against the war in national polls.
Students and professors began to organize “teach-ins” on the war in early 1965 at the University of Michigan, the University of Wisconsin, and the University of California at Berkeley. The teach-ins were large forums for discussion of the war between students and faculty members. Eventually, virtually no college or university was without an organized student movement, often spearheaded by Students for a Democratic Society (SDS). The first major student-led demonstration against the war was organized by SDS in April 1965 and stunned observers by mobilizing about 20,000 participants. Another important organization was the Student Non-Violent Coordinating Committee (SNCC), which denounced the war as racist as early as 1965. Students also joined The Resistance, an organization that urged its student members to refuse to register for the draft, or if drafted to refuse to serve.
While law enforcement authorities usually blamed student radicals for the violence that took place on campuses, often it was police themselves who initiated bloodshed as they cleared out students occupying campus buildings during “sit-ins” or street demonstrations. As antiwar sentiment mounted in intensity from 1965 to 1970 so did violence, culminating in the killings of four students at Kent State in Ohio and of two at Jackson State College in Mississippi.
Stokely Carmichael, Malcolm X, and other black leaders denounced the U.S. presence in Vietnam as evidence of American imperialism. Martin Luther King, Jr., had grown increasingly concerned about the racist nature of the war, toward both the Vietnamese and the disproportionately large numbers of young blacks who were sent to fight for the United States in Vietnam. In 1967 King delivered a major address at New York’s Riverside Church in which he condemned the war, calling the United States “the world’s greatest purveyor of violence.”
On October 15, 1969, citizens across the United States participated in The Moratorium, the largest one-day demonstration against the war. Millions of people stayed home from work to mark their opposition to the war; college and high school students demonstrated on hundreds of campuses. A Baltimore judge even interrupted court proceedings for a moment of reflection on the war. In Vietnam, troops wore black armbands in honor of the home-front protest. Nixon claimed there was a “great silent majority” who supported the war and he called on them to back his policies. Polls showed, however, that at that time half of all Americans felt that the war was “morally indefensible,” while 60 percent admitted that it was a mistake. In November 1969 students from all over the country headed for Washington, D.C., for the Mobilization Against the War. Over 40,000 participated in a March Against Death from Arlington National Cemetery to the White House, each carrying a placard with the name of a young person killed in Vietnam.
Opposition existed even among conservatives and business leaders, for primarily economic reasons. The government was spending more than $2 billion per month on the war by 1967. Some U.S. corporations, ranging from beer distributors to manufacturers of jet aircraft, benefited greatly from this money initially, but the high expense of the war began to cause serious inflation and rising tax rates. Some corporate critics warned of future costs to care for the wounded. Labor unions were also becoming increasingly militant in opposition to the war, as they were forced to respond to the concerns of their members that the draft was imposing an unfair burden on working-class people.
Another factor that turned public opinion against the war was the publication of the Pentagon Papers on June 13, 1971, by the New York Times. Compiled secretly by the U.S. Department of Defense, the papers were a complete history of the involvement of numerous government agencies in the Vietnam War. They showed a clear pattern of deception toward the public. One of the senior analysts compiling this history, Daniel Ellsberg, secretly photocopied key documents and gave them to the New York Times. Subsequently, support for Nixon’s war policies plummeted, and polls showed that 60 percent of the public now considered the war “immoral,” while 70 percent demanded an immediate withdrawal from Vietnam.
The Vietnam War cost the United States $130 billion directly, and at least that amount in indirect costs, such as veterans’ and widows’ benefits and the search for Americans Missing-in-Action (MIAs). The war also spurred serious inflation, contributing to a substantially increased cost of living in the United States between 1965 and 1975, with continued repercussions thereafter. More than 58,000 Americans lost their lives in Vietnam. Over 300,000 U.S. soldiers were wounded, half of them very seriously. No accurate accounting has ever been made of U.S. civilians (U.S. government agents, religious missionaries, Red Cross nurses) killed throughout Indochina.
After returning from the war, many Vietnam veterans suffered from Post-Traumatic Stress Disorder, which is characterized by persistent emotional problems including anxiety and depression. The Department of Veterans Affairs estimates that 20,000 Vietnam veterans have committed suicide in the war’s aftermath. Throughout the 1970s and 1980s, unemployment and rates of prison incarceration for Vietnam veterans, especially those having seen heavy combat, were significantly higher than in the general population.
Having felt ignored or disrespected both by the Veterans Administration (now the Department of Veterans Affairs) and by traditional organizations such as the Veterans of Foreign Wars and the American Legion, Vietnam veterans have formed their own self-help groups. Collectively, they forced the Veterans Administration to establish storefront counseling centers, staffed by veterans, in every major city. The national organization, Vietnam Veterans of America (VVA), has become one of the most important service organizations lobbying in Washington, D.C.
Also in the capital, the Vietnam Veterans Memorial was dedicated in 1982 to commemorate the U.S. personnel who died or were declared missing in action in Vietnam. The memorial, which consists of a V-shaped black granite wall etched with more than 58,000 names, was at first a source of controversy because it does not glorify the military but invites somber reflection. The Asian ancestry of its prizewinning designer, Maya Lin, was also an issue for some veterans. In 1983 a bronze cast was added, depicting one white, one black, and one Hispanic American soldier. This led to additional controversy since some argued that the sculpture muted the original memorial’s solemn message. In 1993 a statue of three women cradling a wounded soldier was also added to the site to commemorate the service of the 11,000 military nurses who treated soldiers in Vietnam. Despite all of the controversies, the Vietnam Veterans Memorial has become a site of pilgrimage for veterans and civilians alike.
While the United States has been involved in a number of armed interventions worldwide since it withdrew from Vietnam in 1973, defense planners have taken pains to persuade the public that goals were limited and troops would be committed only for a specified duration. The war in Vietnam created an ongoing debate about the right of the United States to intervene in the affairs of other nations.
VIII EFFECTS AND RECOVERY IN VIETNAM
Although South Vietnam was ostensibly the U.S. ally in the conflict, far more firepower was unleashed on South Vietnamese civilians than on northerners. About 10 percent of all bombs and shells went unexploded and continued to kill and maim throughout the region long after the war, as did buried land mines. Vietnam developed the highest rate of birth defects in the world, probably due to the use of Agent Orange and other chemical defoliants. The defoliants used during the war also destroyed about 15 percent of South Vietnam’s valuable timber resources and contributed to a serious decline in rice and fish production, the major sources of food for Vietnam.
There were 800,000 orphans created in South Vietnam alone. At least 10 million people became homeless refugees in the south. Vietnam’s government punished those Vietnamese who had been allied with the United States by sending them to “re-education camps” and depriving their families of employment. These measures combined with economic hardships throughout Vietnam led to the exodus of about 1.5 million people, most of them to the United States as refugees. The children of U.S. soldiers and Vietnamese women, often called “AmerAsians,” were looked down upon by the Vietnamese, and many of them immigrated to the United States.
Nixon promised $3.25 billion in reconstruction aid to Vietnam, but the aid was never granted. Neither Gerald Ford, who became president after Nixon’s resignation, nor Congress would assume any responsibility for the devastation of Vietnam. Instead, in 1975 Ford extended the embargo already in effect against North Vietnam to all of newly unified Vietnam. In the Foreign Assistance Appropriation Act of 1976, Congress forbade any assistance for Vietnam or Cambodia.
President Jimmy Carter attempted to resume relations with Vietnam in 1977, declaring that “the destruction was mutual.” Talks broke down, however, over the issue of American MIAs and over the promised reparations, especially after the Vietnamese released a copy of Nixon’s secret letter of 1973, which promised aid “without any preconditions.” Fearing that reparations would amount to an admission of wrongdoing, Congress added amendments to trade bills that also cut Vietnam off from international lending agencies like the International Bank for Reconstruction and Development (World Bank) and the International Monetary Fund (IMF). Normalization was suspended, deepening the economic crisis facing Vietnam in the aftermath of the war’s destruction. The crisis was worsened by new wars with China and Cambodia in 1978 and 1979.
Cut off from all other sources of aid, the SRV turned to the Soviet Union for loans and technical advisers. The SRV reasoned that, faced with widespread hunger and enormous health problems, restoring agricultural production was paramount. The government therefore seized private property, collectivized plantations, and nationalized businesses. About 1 million civilians were forcibly moved from cities to new economic zones. Mismanagement and corruption became common, and popular disillusion with the regime grew. At the Sixth Party Congress in 1986, the SRV leadership declared Communism a failed experiment and vowed radical change. Calling the reforms doi moi (economic renovation), the SRV opened Vietnam to capitalism. After the collapse of the USSR in 1991, the SRV leadership was forced to move further in this direction.
Stepping up efforts to find American MIAs and cooperating with World Bank and IMF guidelines for economic reform, Vietnam worked to improve relations with the United States. In February 1994 President Bill Clinton lifted the trade embargo, and on July 11, 1995, the United States formally restored full diplomatic relations with Vietnam. |
Demand and Supply of Money | CFA Level I Economics
Welcome back! In this lesson, we’ll explore the demand and supply of money, discussing sources of money demand, the relationship between money demand and supply with short-term interest rates, and the connection between money supply and the price level in an economy. Let’s dive in!
Sources of Demand for Money
The demand for money refers to the amount of wealth that households and firms in an economy choose to hold in the form of money, which includes notes and coins in circulation, as well as very liquid bank deposits. There are three main reasons for holding money:
- Transaction demand: Money needed for undertaking transactions like paying employee salaries and purchasing goods and services. As real GDP increases, demand for money to carry out transactions also increases.
- Precautionary demand: Money held for unforeseen future needs. The total amount of precautionary demand for money increases with the size of the economy.
- Speculative demand: Money set aside to take advantage of future investment opportunities.
Factors Affecting Speculative Demand
There are several key factors that influence the level of speculative demand for money:
- Expected returns: Speculative demand is inversely related to returns available in the market. Higher returns from bonds and other financial instruments reduce speculative money balances.
- Perceived risk: Speculative demand is positively related to perceived risk in the market. When risk is perceived to be higher, people choose to reduce exposure to risky assets and hold money instead.
- Short-term interest rates: Demand for money for speculative reasons is inversely related to short-term interest rates. Lower interest rates encourage higher money demand, while higher interest rates increase the opportunity cost of holding money and lower speculative demand.
Money Demand, Money Supply, and Short-Term Interest Rates
When plotted against nominal interest rates, the money demand curve is downward sloping. The money supply, determined by the central bank, is independent of the interest rate and has a perfectly inelastic supply curve. Short-term interest rates are determined by the equilibrium between money supply and money demand.
A central bank can affect short-term interest rates by increasing or decreasing the money supply. For example, increasing the money supply shifts the money supply curve to the right, causing the equilibrium rate to fall and pressuring market interest rates to follow. Conversely, decreasing the money supply results in higher interest rates.
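To make the mechanics concrete, here is a minimal sketch in Python (not part of the CFA readings). It assumes a hypothetical linear money-demand curve set against a vertical, interest-inelastic supply curve; the parameter values and function name are invented purely for illustration.

```python
# Hedged illustration: a hypothetical linear money-demand curve Md(i) = a - b*i
# set against a vertical (interest-inelastic) money supply Ms.
# Solving Md(i) = Ms for i gives the equilibrium short-term nominal rate.
# All parameter values are invented for illustration only.

def equilibrium_rate(money_supply: float, a: float = 1200.0, b: float = 100.0) -> float:
    """Return the rate i (in percent) at which money demand equals money supply."""
    return (a - money_supply) / b

ms_before = 700.0                     # initial money supply
ms_after = 800.0                      # central bank expands the money supply

print(equilibrium_rate(ms_before))    # 5.0 -> equilibrium rate before the expansion
print(equilibrium_rate(ms_after))     # 4.0 -> a larger money supply pushes the rate down
```

Shifting the vertical supply curve to the right in this toy setup lowers the rate at which the two curves cross, which is the mechanism described above.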
Money Neutrality and the Quantity Theory of Money
Some economists believe that in the long run, an increase in the money supply results in an increase in the aggregate price level, while real output and money velocity remain unchanged. This belief is referred to as money neutrality.
The quantity theory of money states that the quantity of money is proportional to the total spending in an economy. The theory is explained with the quantity equation of exchange:
MV = PY
Where M is the money supply, V is the velocity of circulation of money, P is the average price level, and Y is real output.
Monetarists assume that velocity and real output change very slowly. Thus, any increase in the money supply will lead to a proportionate increase in the price level. For example, a 10% increase in the money supply will increase the price of goods and services by 10%. This supports the money neutrality belief that when the money supply is increased, real output and money velocity remain unchanged, resulting in an increase in the aggregate price level.
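The proportionality can be checked with a short worked example. The snippet below is only a sketch of the arithmetic with hypothetical values for M, V, and Y; under money neutrality V and Y are held fixed, so the price level implied by P = MV/Y rises by exactly the percentage increase in M.

```python
# Quantity equation MV = PY with V and Y held constant (money neutrality),
# so the implied price level is P = M * V / Y and moves one-for-one with M.
# All values are hypothetical.

V = 4.0        # velocity of circulation (assumed constant)
Y = 2000.0     # real output (assumed constant)

def price_level(M: float) -> float:
    return M * V / Y

M0 = 500.0     # initial money supply
M1 = 550.0     # a 10% increase over M0

P0, P1 = price_level(M0), price_level(M1)
print(P0, P1)                          # 1.0 1.1
print(round((P1 / P0 - 1) * 100, 1))   # 10.0 -> the price level also rises by 10%
```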
Monetary Policy and Inflation Control
Monetarists argue that monetary policy, which regulates the supply of money, can be used to control inflation in an economy. By managing the money supply, central banks can influence short-term interest rates and, consequently, the overall price level.
That wraps up our lesson on the demand and supply of money. We’ve explored the reasons behind money demand, the factors affecting speculative demand, the relationship between money demand and supply with short-term interest rates, and the concept of money neutrality. In our next lesson, we’ll examine the costs of inflation on the economy. See you then! |
The permanent dentition consists of the permanent teeth in the human oral cavity, those teeth which remain throughout one’s adult life. The permanent dentition is also known as the secondary dentition. The permanent dentition numbers 32 teeth altogether. There are 16 teeth in both the maxillary and mandibular arches. Each arch can be further divided into anterior and posterior teeth. The anterior teeth are the front six teeth, including two lateral incisors, two central incisors, and two cuspids. There are 10 posterior teeth in each arch, including four premolars and six molars. There are very specific functions of both the anterior and posterior teeth. Teeth generally erupt during a time period (age range) for that particular tooth.
Maxillary Central Incisor of permanent dentition
The maxillary central incisors are the two most anterior teeth in the maxillary arch. The median line bisects them. They are also the first teeth to be found in each of the maxillary quadrants of permanent dentition. This tooth has a sharp incisal edge for cutting food. This edge has developmental mamelons upon eruption. Mamelons are prominences at the incisal edge of a newly erupted tooth. These mamelons wear away with the use of the tooth. This tooth is the widest of all the anterior teeth.
Maxillary Lateral Incisor of permanent dentition
The lateral incisors are the next teeth to be found in the maxillary arch. They are adjacent to the central incisors and have the same form and function. They have a cutting edge and are utilized primarily for ripping and shredding food items. Interestingly, they are the smallest and weakest of the teeth in the maxillary arch.
Maxillary Cuspid/Canine of permanent dentition
The cuspid (canine) is the longest and strongest rooted tooth in the maxillary arch. It is often referred to as the “eye tooth.” The purpose of this tooth is to aid in ripping and tearing food. It’s a very strong tooth and forms the cornerstone of the arch. The shape and function of the cuspid (canine) starts the transition into the posterior portion of the arch of permanent dentition. The cuspid (canine) is the last anterior tooth and is located just distal to both maxillary lateral incisors. The root is single and is the longest root in the arch.
Maxillary First Premolar of permanent dentition
The maxillary first premolar (or first bicuspid) is considered the first posterior tooth. It is double-rooted and double-cusped. It is utilized for the grinding of food on its occlusal surface. It is located distal to the cuspid on both sides of the arch. This tooth is called a premolar because it is in front of the molars.
Maxillary Second Premolar of permanent dentition
The maxillary second premolar is a single-rooted tooth. It has two cusps that are basically of the same size. It has a slightly rounded, molar-like occlusal surface. The occlusal surface is the horizontal surface of the posterior teeth. Its function is in grinding and tearing food. The second premolar continues the transition to a wider occlusal table in the molar area. It is located just distal to the first premolars on both sides of the arch.
Maxillary First Molar of permanent dentition
The maxillary first molar is the largest tooth in the oral cavity. It has three roots—two buccal and one lingual or palatal. It also has four cusps—two buccal and two lingual. It may also contain an accessory cusp that would be located lingually. This cusp is termed the Cusp of Carabelli and may vary in size and shape. The maxillary first molar of permanent dentition is sometimes called the six-year molar due to the time of its initial eruption. It is located distal to the second premolar. The function of the maxillary first molar is in fine grinding of food for deglutition (swallowing).
Maxillary Second Molar of permanent dentition
The maxillary second molar closely resembles the maxillary first molar. However, this molar is smaller in diameter and the Cusp of Carabelli is not present. The function of this tooth is the same as the maxillary first molar—namely, fine grinding of food. It is sometimes called the twelve-year molar due to its eruption time. The maxillary second molar is located just distal to the maxillary first molar. It is the seventh tooth from the midline and, therefore, the seventh tooth in each maxillary quadrant.
Maxillary Third Molar of permanent dentition
The maxillary third molar is the eighth and last tooth in the maxillary arch. It can vary in shape and size dramatically. It can have a single fused root or many roots. Its occlusal surface is generally heart-shaped, but this can greatly differ. It is also known as the wisdom tooth due to the fact that its eruption time occurs in later years, usually when a person is 16 to 22 years old.
Mandibular Central Incisor of permanent dentition
The mandibular central incisor is located on both sides of the median line. It is the smallest tooth in the oral cavity. It also has the most symmetrical design. It has no mamelons or developmental grooves. This tooth is the first tooth of both quadrants of the mandibular arch.
Mandibular Lateral Incisor of permanent dentition
The mandibular lateral incisor closely resembles the central incisor. It is shaped similarly. However, it is larger in all dimensions. It is the second tooth of each mandibular quadrant and just distal to the mandibular central incisor.
Mandibular Cuspid (Canine) of permanent dentition
The mandibular cuspid (canine) resembles the maxillary cuspid (canine). However, this tooth has a shorter root and a longer, wider crown. It functions as the lower arch stabilizer. It is the last anterior tooth, located just distal to the lateral incisor. The root is not as long as the maxillary cuspid’s (canine’s) and is flatter.
Mandibular First Premolar of permanent dentition
The mandibular first premolar of permanent dentition (also known as bicuspid) is the first tooth of the posterior dentition. Its function is grinding. It has a large functional buccal cusp and a smaller, relatively non-functional lingual cusp. It is located distal to the cuspid and is the fourth tooth from the median line.
Mandibular Second Premolar of permanent dentition
The mandibular second premolar (sometimes called bicuspid) has three cusps—one larger buccal cusp and two smaller lingual cusps. Its primary purpose is in the grinding of food. It is slightly larger than the first premolar. The second premolar is found just distal to the first premolar in both quadrants and is the fifth tooth from the median line.
Mandibular First Molar of permanent dentition
The mandibular first molar is the largest tooth in the mandibular arch and is the first permanent tooth to erupt. It has two roots, one mesial and one distal. It has five cusps on the occlusal surface. It is just distal to the second premolar.
Mandibular Second Molar of permanent dentition
The mandibular second molar is located distal to the first molar. It is similar in function to the first molar. It has four cusps—two buccal and two lingual. The second molar also has two roots, one distal and the other mesial, both smaller than those of the first molar.
Mandibular Third Molar of permanent dentition
The mandibular third molar is the last tooth in each mandibular quadrant. It is sometimes called the wisdom tooth, just like its maxillary counterpart. The third molar can have many distinctive shapes, forms, and sizes. It may have four or five cusps like the first and second molars. The roots may either be fused together or widely spread apart. The third molar usually has a mesial angle.
THE ERUPTION OF PERMANENT DENTITION
Tooth Eruption Age (in years)
Maxillary Teeth
- Central Incisor 7-8 years
- Lateral Incisor 7-9 years
- Cuspid 11-13 years
- First Premolar 9-11 years
- Second Premolar 10-12 years
- First Molar 6-7 years
- Second Molar 11-14 years
- Third Molar 16-22 years
Mandibular Teeth
- Central Incisor 6-8 years
- Lateral Incisor 7-8 years
- Cuspid 9-10 years
- First Premolar 10-12 years
- Second Premolar 11-12 years
- First Molar 6-7 years
- Second Molar 11-14 years
- Third Molar 16-22 years
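As a quick sketch of how this schedule can serve as a lookup table, the snippet below stores the age ranges in a small Python structure. The split into maxillary and mandibular groups is inferred from the eruption ages discussed in the body text, and the helper function is purely hypothetical.

```python
# Eruption ages (in years) transcribed from the table above; the maxillary/
# mandibular grouping follows the order in which the two lists appear.
ERUPTION_AGES = {
    "maxillary": {
        "central incisor": (7, 8), "lateral incisor": (7, 9), "cuspid": (11, 13),
        "first premolar": (9, 11), "second premolar": (10, 12),
        "first molar": (6, 7), "second molar": (11, 14), "third molar": (16, 22),
    },
    "mandibular": {
        "central incisor": (6, 8), "lateral incisor": (7, 8), "cuspid": (9, 10),
        "first premolar": (10, 12), "second premolar": (11, 12),
        "first molar": (6, 7), "second molar": (11, 14), "third molar": (16, 22),
    },
}

def typically_erupted_by(age: int) -> list[str]:
    """Teeth whose typical eruption window has at least begun by the given age."""
    return [
        f"{arch} {tooth}"
        for arch, teeth in ERUPTION_AGES.items()
        for tooth, (start, _end) in teeth.items()
        if age >= start
    ]

print(typically_erupted_by(8))   # incisors and first molars, but not yet cuspids
```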
Addition and subtraction of fractions by decomposition: Addition with Tenths and Hundredths Standard: Prime and composite numbers review Topic F: Fraction equivalence, ordering, and operations Topic C: Fraction equivalence, ordering, and operations Topic B: Rounding multi-digit whole numbers: Interpret a multiplication equation as a comparison.
Solve problems involving mixed units of length. Operations and Algebraic Thinking. Solve division problems with a zero in the dividend or with a zero in the quotient. Create conversion tables for length, weight, and capacity units using measurement tools, and use the tables to solve problems. Multiplication of two-digit by two-digit numbers: Decompose and compose fractions greater than 1 to express them in various forms.
Use place value understanding to fluently add multi-digit whole numbers using the standard addition algorithm and apply the algorithm to solve word problems using tape diagrams.
Decompose unit fractions using area models to show equivalence. Round multi-digit numbers to the thousands place using the vertical number line.
4th grade (Eureka Math/EngageNY)
Year in Review Days: Subtract a mixed number from a mixed number. Use place value disks to represent two-digit by one-digit multiplication. Links to Module 1 Videos.
Multiplication of up to four digits by single-digit numbers: Problem solving with the addition of angle measures: Decompose fractions using area models to show equivalence. Solve word problems involving money.
Eureka math grade 4 module 4 lesson 2 homework
Video Lesson 37. Division of thousands, hundreds, tens, and ones: Multiplication Word Problems Standard: Use addition and subtraction to solve multi-step word problems involving length, mass, and capacity. Fraction equivalence, ordering, and operations Topic C: Fraction equivalence using multiplication and division: Operations and Algebraic Thinking.
Use place value understanding to decompose to smaller units up to 3 times using the standard subtraction algorithm, and apply the algorithm to solve word problems using tape diagrams. Express metric length measurements in terms of a smaller unit; model and solve addition and subtraction word problems involving metric length. Write a number as a fraction and decimal Topic B: Exploring measurement with multiplication.
Extending fraction equivalence to fractions greater than 1: Multiply three- and four-digit numbers by one-digit numbers applying the standard algorithm.
Course: G4M3: Multi-Digit Multiplication and Division
Interpret and find whole number quotients and remainders to solve one-step division word problems with larger divisors of 6, 7, 8, and 9. Measurement Homework and Games.
Solve multi-step word problems modeled with tape diagrams and assess the reasonableness of answers using rounding. Extending fraction equivalence to fractions greater than 1.
Fraction equivalence, ordering, and operations Topic B: Angle measure and plane figures Topic C: Solve word problems involving the addition of measurements in decimal form. Represent and solve division problems requiring decomposing a remainder in the tens.
Money Amounts as Decimal Numbers Standard: Exploration of Tenths Standard: Video Lesson 10.
Points, Vectors, and Functions
Rules of Graphing We Do (or Don't) Have
The best way to graph polar functions is by using a graphing calculator or a computer program. We can wave our hands and pull a rabbit out of a hat. That's because there aren't as many rules about graphing polar functions. Those few rules that we do have can be much more complex.
With a rectangular function
y = f (x)
there are certain rules about how the function stretches or translates if we look at variations such as:
c + f (x)
f (c + x)
where c is a constant.
We have rules like this when dealing with polar functions too, but not as many.
- The graph of r = cf (θ) will be the same shape as the graph of r = f(θ), but stretched away from or squished toward the origin by a factor of c.
- The graph of r = f (θ - c) is the same as the graph of r = f(θ), but rotated by an angle of c.
As far as nice rules for graphing go, that's all we get.
- There's no nice rule that tells us how the function r = f(cθ) looks.
- There's no nice rule that tells us how the function r = c + f (θ) looks.
We can verify that the function r = f (cθ) is weird by trying different values in the graphing calculator.
The function r = c + f(θ) is also weird. Adding a constant can change whether your r values are positive or negative, which can totally change the shape of the graph. It may also change the bounds we need for θ if we want to find the whole graph. |
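To see this concretely, here is a minimal plotting sketch (assuming Python with NumPy and matplotlib, and an arbitrary base function r = cos(3θ) chosen only for illustration). The scaled and rotated versions keep the base shape, while the f(cθ) and c + f(θ) variants change it in ways no simple rule predicts.

```python
# Compare the predictable transformations r = c*f(theta) and r = f(theta - c)
# with the unpredictable ones r = f(c*theta) and r = c + f(theta).
# Base function and constants are arbitrary choices for illustration.
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0, 2 * np.pi, 1000)
f = lambda t: np.cos(3 * t)              # hypothetical base function

curves = {
    "r = f(θ)":       f(theta),
    "r = 2 f(θ)":     2 * f(theta),           # stretched away from the origin
    "r = f(θ - π/4)": f(theta - np.pi / 4),   # rotated by π/4
    "r = f(2θ)":      f(2 * theta),           # no simple rule: petal count changes
    "r = 0.5 + f(θ)": 0.5 + f(theta),         # no simple rule: sign of r can flip
}

fig, axes = plt.subplots(1, 5, subplot_kw={"projection": "polar"}, figsize=(15, 3))
for ax, (label, r) in zip(axes, curves.items()):
    ax.plot(theta, r)
    ax.set_title(label, fontsize=9)
plt.tight_layout()
plt.show()
```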
Democracy is "a system of government in which all the people of a state or polity ... are involved in making decisions about its affairs, typically by voting to elect representatives to a parliament or similar assembly." According to American political scientist Larry Diamond, it consists of four key elements: "1. A political system for choosing and replacing the government through free and fair elections. 2. The active participation of the people, as citizens, in politics and civic life. 3. Protection of the human rights of all citizens. 4. A rule of law, in which the laws and procedures apply equally to all citizens".
The term originates from the Greek δημοκρατία (dēmokratía) "rule of the people", which was coined from δῆμος (dêmos) "people" and κράτος (krátos) "power" or "rule" in the 5th century BC to denote the political systems then existing in Greek city-states, notably Athens; the term is an antonym to ἀριστοκρατία (aristokratía) "rule of an elite". While theoretically these definitions are in opposition, in practice the distinction has been blurred historically. The political system of Classical Athens, for example, granted democratic citizenship to an elite class of free men and excluded slaves and women from political participation. In virtually all democratic governments throughout ancient and modern history, democratic citizenship consisted of an elite class until full enfranchisement was won for all adult citizens in most modern democracies through the suffrage movements of the 19th and 20th centuries. The English word dates to the 16th century, from the older Middle French and Middle Latin equivalents.
Democracy contrasts with forms of government where power is either held by an individual, as in an absolute monarchy, or where power is held by a small number of individuals, as in an oligarchy. Nevertheless, these oppositions, inherited from Greek philosophy, are now ambiguous because contemporary governments have mixed democratic, oligarchic, and monarchic elements. Karl Popper defined democracy in contrast to dictatorship or tyranny, thus focusing on opportunities for the people to control their leaders and to oust them without the need for a revolution.
Several variants of democracy exist, but there are two basic forms, both of which concern how the whole body of all eligible citizens executes its will. One form of democracy is direct democracy, in which all eligible citizens have direct and active participation in the political decision making. In most modern democracies, the whole body of eligible citizens remain the sovereign power but political power is exercised indirectly through elected representatives; this is called a representative democracy.
No consensus exists on how to define democracy, but legal equality, political freedom and rule of law have been identified as important characteristics. These principles are reflected in all eligible citizens being equal before the law and having equal access to legislative processes. For example, in a representative democracy, every vote has equal weight, no unreasonable restrictions can apply to anyone seeking to become a representative, and the freedom of its eligible citizens is secured by legitimised rights and liberties which are typically protected by a constitution.
One theory holds that democracy requires three fundamental principles: 1) upward control, i.e. sovereignty residing at the lowest levels of authority, 2) political equality, and 3) social norms by which individuals and institutions only consider acceptable acts that reflect the first two principles of upward control and political equality.
The term "democracy" is sometimes used as shorthand for liberal democracy, which is a variant of representative democracy that may include elements such as political pluralism; equality before the law; the right to petition elected officials for redress of grievances; due process; civil liberties; human rights; and elements of civil society outside the government. Roger Scruton argues that democracy alone cannot provide personal and political freedom unless the institutions of civil society are also present.
In some countries, notably in the United Kingdom which originated the Westminster system, the dominant principle is that of parliamentary sovereignty, while maintaining judicial independence. In the United States, separation of powers is often cited as a central attribute. In India, the world's largest democracy, parliamentary sovereignty is subject to a constitution which includes judicial review. Other uses of "democracy" include that of direct democracy. Though the term "democracy" is typically used in the context of a political state, the principles also are applicable to private organisations.
Majority rule is often listed as a characteristic of democracy. Hence, democracy allows for political minorities to be oppressed by the "tyranny of the majority" in the absence of legal protections of individual or group rights. An essential part of an "ideal" representative democracy is competitive elections that are fair both substantively and procedurally. Furthermore, freedom of political expression, freedom of speech, and freedom of the press are considered to be essential rights that allow eligible citizens to be adequately informed and able to vote according to their own interests.
It has also been suggested that a basic feature of democracy is the capacity of all voters to participate freely and fully in the life of their society. With its emphasis on notions of social contract and the collective will of all voters, democracy can also be characterised as a form of political collectivism because it is defined as a form of government in which all eligible citizens have an equal say in lawmaking.
While representative democracy is sometimes equated with the republican form of government, the term "republic" classically has encompassed both democracies and aristocracies. Many democracies are constitutional monarchies, such as the United Kingdom.
The term "democracy" first appeared in ancient Greek political and philosophical thought in the city-state of Athens during classical antiquity. Led by Cleisthenes, Athenians established what is generally held as the first democracy in 508–507 BC. Cleisthenes is referred to as "the father of Athenian democracy."
Athenian democracy took the form of a direct democracy, and it had two distinguishing features: the random selection of ordinary citizens to fill the few existing government administrative and judicial offices, and a legislative assembly consisting of all Athenian citizens. All eligible citizens were allowed to speak and vote in the assembly, which set the laws of the city state. However, Athenian citizenship excluded women, slaves, foreigners (μέτοικοι métoikoi), non-landowners, and males under 20 years old.
Of the estimated 200,000 to 400,000 inhabitants of Athens, there were between 30,000 and 60,000 citizens. The exclusion of large parts of the population from the citizen body is closely related to the ancient understanding of citizenship. In most of antiquity, the benefit of citizenship was tied to the obligation to fight in war campaigns.
Athenian democracy was not only direct in the sense that decisions were made by the assembled people, but also the most direct in the sense that the people through the assembly, boule and courts of law controlled the entire political process and a large proportion of citizens were involved constantly in the public business. Even though the rights of the individual were not secured by the Athenian constitution in the modern sense (the ancient Greeks had no word for "rights"), the Athenians enjoyed their liberties not in opposition to the government but by living in a city that was not subject to another power and by not being subjects themselves to the rule of another person.
Range voting appeared in Sparta as early as 700 BC. The Apella was an assembly of the people, held once a month, in which every male citizen of at least 30 years of age could participate. In the Apella, Spartans elected leaders and cast votes by range voting and shouting. Aristotle called this "childish", as compared with the stone voting ballots used by the Athenians. Sparta adopted it because of its simplicity, and to prevent the biased voting, vote-buying, and cheating that were prevalent in early democratic elections.
Even though the Roman Republic contributed significantly to many aspects of democracy, only a minority of Romans were citizens with votes in elections for representatives. The votes of the powerful were given more weight through a system of gerrymandering, so most high officials, including members of the Senate, came from a few wealthy and noble families. In addition, the Roman Republic was the first government in the Western world to constitute itself as a republican nation-state, although it was not very democratic. The Romans invented the concept of classics, and many works from Ancient Greece were preserved. Additionally, the Roman model of governance inspired many political thinkers over the centuries, and today's representative democracies imitate the Roman more than the Greek model, because it was a state in which supreme power was held by the people and their elected representatives, and which had an elected or nominated leader. Other cultures, such as the Iroquois Nation in the Americas between around 1450 and 1600 AD, also developed a form of democratic society before coming into contact with Europeans. This indicates that forms of democracy may have been invented in other societies around the world.
During the Middle Ages, there were various systems involving elections or assemblies, although often only involving a small part of the population. These included:
- the South Indian Kingdom of the Chola in the Tamil Nadu region of the Indian Subcontinent had an electoral system 1,000 years ago,
- Carantania, old Slavic/Slovenian principality, the Ducal Inauguration from 7th to 15th century,
- the upper-caste election of the Gopala in the Bengal region of the Indian Subcontinent,
- the Holy Roman Empire's Hoftag and Imperial Diets (mostly Nobles and Clergy),
- the Polish–Lithuanian Commonwealth (10% of population),
- the Althing in Iceland,
- the Løgting in the Faeroe Islands,
- certain medieval Italian city-states such as Venice, Genoa, Florence, Pisa, Lucca, Amalfi, Siena and San Marino
- the tuatha system in early medieval Ireland,
- the Veche in Novgorod and Pskov Republics of medieval Russia,
- Scandinavian Things,
- The States in Tirol and Switzerland,
- the autonomous merchant city of Sakai in the 16th century in Japan,
- Volta-Nigeric societies such as Igbo.
- the Mekhk-Khel system of the Nakh peoples of the North Caucasus, by which representatives to the Council of Elders for each teip (clan) were popularly elected by that teip's members.
- the establishment by the tenth Sikh Guru, Gobind Singh (Nanak X), of what is described as the world's first Sikh democratic republic, ending aristocratic rule on the first day of Vaisakh 1699, with the Gurbani as its sole constitution.
Most regions in medieval Europe were ruled by clergy or feudal lords.
The Kouroukan Fouga divided the Mali Empire into ruling clans (lineages) that were represented at a great assembly called the Gbara. However, the charter made Mali more similar to a constitutional monarchy than a democratic republic. A little closer to modern democracy were the Cossack republics of Ukraine in the 16th and 17th centuries: Cossack Hetmanate and Zaporizhian Sich. The highest post – the Hetman – was elected by the representatives from the country's districts.
The Parliament of England had its roots in the restrictions on the power of kings written into Magna Carta (1215), which explicitly protected certain rights of the King's subjects and implicitly supported what became the English writ of habeas corpus, safeguarding individual freedom against unlawful imprisonment with right to appeal. The first elected national assembly was Simon de Montfort's Parliament in England in 1265. The emergence of petitioning is some of the earliest evidence of parliament being used as a forum to address the general grievances of ordinary people. However, the power to call parliament remained at the pleasure of the monarch.
Early modern period
During the early modern period, the power of the Parliament of England continually increased. Passage of the Petition of Right in 1628 and Habeas Corpus Act in 1679 established certain liberties and remain in effect. The idea of a political party took form with groups freely debating rights to political representation during the Putney Debates of 1647. After the English Civil Wars (1642–1651) and the Glorious Revolution of 1688, the Bill of Rights was enacted in 1689, which codified certain rights and liberties, and is still in effect. The Bill set out the requirement for regular elections, rules for freedom of speech in Parliament and limited the power of the monarch, ensuring that, unlike much of Europe at the time, royal absolutism would not prevail.
In North America, representative government began in Jamestown, Virginia, with the election of the House of Burgesses (forerunner of the Virginia General Assembly) in 1619. English Puritans who migrated from 1620 established colonies in New England whose local governance was democratic and which contributed to the democratic development of the United States; although these local assemblies had some small amounts of devolved power, the ultimate authority was held by the Crown and the English Parliament. The Puritans (Pilgrim Fathers), Baptists, and Quakers who founded these colonies applied the democratic organisation of their congregations also to the administration of their communities in worldly matters.
18th and 19th centuries
The first Parliament of Great Britain was established in 1707, after the merger of the Kingdom of England and the Kingdom of Scotland under the Acts of Union. Although the monarch increasingly became a figurehead, only a small minority actually had a voice; Parliament was elected by only a few percent of the population (less than 3% as late as 1780).
The creation of the short-lived Corsican Republic in 1755 marked the first nation in modern history to adopt a democratic constitution (all men and women above the age of 25 could vote). This Corsican Constitution was the first based on Enlightenment principles and included female suffrage, something that was not granted in most other democracies until the 20th century.
In the American colonial period before 1776, and for some time after, often only adult white male property owners could vote; enslaved Africans, most free black people and most women were not extended the franchise. On the American frontier, democracy became a way of life, with more widespread social, economic and political equality. Although not described as a democracy by the founding fathers, they shared a determination to root the American experiment in the principles of natural freedom and equality.
The American Revolution led to the adoption of the United States Constitution in 1787. The Constitution provided for an elected government and protected civil rights and liberties for some, but did not end slavery nor give voting rights to women. It is the oldest surviving codified governmental constitution still in force. The Bill of Rights in 1791 set limits on government power to protect personal freedoms.
In 1789, Revolutionary France adopted the Declaration of the Rights of Man and of the Citizen and, although short-lived, the National Convention was elected by all males in 1792. However, in the early 19th century, little of democracy, whether as theory, practice, or even as a word, remained in the North Atlantic world.
During this period, slavery remained a social and economic institution in places around the world. This was particularly the case in the eleven states of the American South. A variety of organisations were established advocating the movement of black people from the United States to locations where they would enjoy greater freedom and equality.
The United Kingdom's Slave Trade Act 1807 banned the trade across the British Empire, enforced internationally by the Royal Navy's West Africa Squadron under treaties Britain negotiated with other nations. As the voting franchise in the U.K. was increased, it also was made more uniform in a series of reforms beginning with the Reform Act of 1832. In 1833, the United Kingdom passed the Slavery Abolition Act which took effect across the British Empire.
Universal male suffrage was established in France in March 1848 in the wake of the French Revolution of 1848. In 1848, several revolutions broke out in Europe as rulers were confronted with popular demands for liberal constitutions and more democratic government.
In the 1860 United States Census, the slave population in the United States had grown to four million, and in Reconstruction after the Civil War (late 1860s), the newly freed slaves became citizens, with a nominal right to vote for the men among them. Full enfranchisement of citizens was not secured until the African-American Civil Rights Movement (1955–1968) secured passage of the Voting Rights Act of 1965 by the United States Congress.
20th and 21st centuries
20th-century transitions to liberal democracy have come in successive "waves of democracy," variously resulting from wars, revolutions, decolonisation, and religious and economic circumstances. World War I and the dissolution of the Ottoman and Austro-Hungarian empires resulted in the creation of new nation-states from Europe, most of them at least nominally democratic.
In the 1920s democracy flourished and women's suffrage advanced, but the Great Depression brought disenchantment and most of the countries of Europe, Latin America, and Asia turned to strong-man rule or dictatorships. Fascism and dictatorships flourished in Nazi Germany, Italy, Spain and Portugal, as well as nondemocratic regimes in the Baltics, the Balkans, Brazil, Cuba, China, and Japan, among others.
World War II brought a definitive reversal of this trend in western Europe. The democratisation of the American, British, and French sectors of occupied Germany (disputed), Austria, Italy, and occupied Japan served as a model for the later theory of regime change. However, most of Eastern Europe, including the Soviet sector of Germany, fell into the non-democratic Soviet bloc.
The war was followed by decolonisation, and again most of the new independent states had nominally democratic constitutions. India emerged as the world's largest democracy and continues to be so. Countries that were once part of the British Empire often adopted the British Westminster system.
By 1960, the vast majority of nation-states were nominally democracies, although most of the world's population lived in nations that experienced sham elections and other forms of subterfuge (particularly in Communist nations and the former colonies).
A subsequent wave of democratisation brought substantial gains toward true liberal democracy for many nations. Spain, Portugal (1974), and several of the military dictatorships in South America returned to civilian rule in the late 1970s and early 1980s (Argentina in 1983, Bolivia, Uruguay in 1984, Brazil in 1985, and Chile in the early 1990s). This was followed by nations in East and South Asia by the mid-to-late 1980s.
Economic malaise in the 1980s, along with resentment of Soviet oppression, contributed to the collapse of the Soviet Union, the associated end of the Cold War, and the democratisation and liberalisation of the former Eastern bloc countries. The most successful of the new democracies were those geographically and culturally closest to western Europe, and they are now members or candidate members of the European Union.
The liberal trend spread to some nations in Africa in the 1990s, most prominently in South Africa. Some recent examples of attempts of liberalisation include the Indonesian Revolution of 1998, the Bulldozer Revolution in Yugoslavia, the Rose Revolution in Georgia, the Orange Revolution in Ukraine, the Cedar Revolution in Lebanon, the Tulip Revolution in Kyrgyzstan, and the Jasmine Revolution in Tunisia.
According to Freedom House, in 2007 there were 123 electoral democracies (up from 40 in 1972). According to the World Forum on Democracy, electoral democracies now represent 120 of the 192 existing countries and constitute 58.2 percent of the world's population. At the same time, liberal democracies, i.e. countries Freedom House regards as free and respectful of basic human rights and the rule of law, number 85 and represent 38 percent of the global population.
Countries and regions
- New Zealand
- United Kingdom
- United States of America
- South Korea
- Costa Rica
The Index assigns 52 countries or regions to the lower category, Flawed democracy: Argentina, Belgium, Botswana, Brazil, Bulgaria, Cape Verde, Chile, Colombia, Croatia, Cyprus, Czech Republic, Dominican Republic, El Salvador, Estonia, Ghana, Greece, Hungary, Hong Kong, India, Indonesia, Israel, Italy, Jamaica, Latvia, Lesotho, Lithuania, Macedonia, Malaysia, Mexico, Moldova, Mongolia, Namibia, Panama, Papua New Guinea, Paraguay, Peru, Philippines, Poland, Portugal, Romania, Senegal, Serbia, Singapore, Slovakia, Slovenia, South Africa, Suriname, Taiwan, Timor-Leste, Trinidad and Tobago, Tunisia, Zambia.
Democracy has taken a number of forms, both in theory and practice. Some varieties of democracy provide better representation and more freedom for their citizens than others. However, if any democracy is not structured so as to prohibit the government from excluding the people from the legislative process, or any branch of government from altering the separation of powers in its own favour, then a branch of the system can accumulate too much power and destroy the democracy.
The following kinds of democracy are not exclusive of one another: many specify details of aspects that are independent of one another and can co-exist in a single system.
Representative democracy is a form of democracy in which people vote for representatives who then vote on policy initiatives as opposed to a direct democracy, a form of democracy in which people vote on policy initiatives directly.
Direct democracy is a political system in which the citizens participate in decision-making personally, rather than relying on intermediaries or representatives. Its supporters argue that democracy is more than merely a procedural issue. A direct democracy gives the voting population the power to:
- Change constitutional laws,
- Put forth initiatives, referendums and suggestions for laws,
- Give binding orders to elected officials, such as recalling them before the end of their term, or initiating a lawsuit for breaking a campaign promise.
Representative democracy involves the election of government officials by the people being represented. If the head of state is also democratically elected then it is called a democratic republic. The most common mechanisms involve election of the candidate with a majority or a plurality of the votes. Most western countries have representative systems.
Representatives may be elected by a particular district (or constituency), or may represent the entire electorate through proportional systems, with some jurisdictions using a combination of the two. Some representative democracies also incorporate elements of direct democracy, such as referendums. A characteristic of representative democracy is that while the representatives are elected by the people to act in the people's interest, they retain the freedom to exercise their own judgement as to how best to do so. Such considerations have driven criticism of representative democracy, pointing out contradictions between the mechanisms of representation and democracy.
Parliamentary democracy is a representative democracy where government is appointed by, or can be dismissed by, representatives as opposed to a "presidential rule" wherein the president is both head of state and the head of government and is elected by the voters. Under a parliamentary democracy, government is exercised by delegation to an executive ministry and subject to ongoing review, checks and balances by the legislative parliament elected by the people.
In a parliamentary system, the legislature can dismiss a Prime Minister at any point when it feels he or she is not doing the job to its expectations. This is done through a vote of no confidence, in which the legislature decides by majority vote whether to remove the Prime Minister from office. In some countries, the Prime Minister can also call an election whenever he or she so chooses, and typically will do so when in good favour with the public, so as to be re-elected. In other parliamentary democracies extra elections are virtually never held, a minority government being preferred until the next ordinary elections. An important feature of parliamentary democracy is the concept of the "loyal opposition". The essence of the concept is that the second largest political party (or coalition) opposes the governing party (or coalition), while still remaining loyal to the state and its democratic principles.
Presidential democracy is a system in which the public elects the president through free and fair elections. The president serves as both the head of state and head of government, controlling most of the executive powers. The president serves for a specific term and cannot serve beyond it. Elections typically have a fixed date and are not easily changed. The president has direct control over the cabinet, specifically appointing the cabinet members.
The president cannot be easily removed from office by the legislature, but he or she cannot remove members of the legislative branch any more easily. This provides some measure of separation of powers. In consequence however, the president and the legislature may end up in the control of separate parties, allowing one to block the other and thereby interfere with the orderly operation of the state. This may be the reason why presidential democracy is not very common outside the Americas, Africa, and Central and Southeast Asia.
A semi-presidential system is a system of democracy in which the government includes both a prime minister and a president. The particular powers held by the prime minister and president vary by country.
Hybrid or semi-direct
Some modern democracies that are predominately representative in nature also heavily rely upon forms of political action that are directly democratic. These democracies, which combine elements of representative democracy and direct democracy, are termed hybrid democracies, semi-direct democracies or participatory democracies. Examples include Switzerland and some U.S. states, where frequent use is made of referendums and initiatives.
The Swiss Confederation is a semi-direct democracy. At the federal level, citizens can propose changes to the constitution (federal popular initiative) or ask for a referendum to be held on any law voted by the parliament. Between January 1995 and June 2005, Swiss citizens voted 31 times to answer 103 questions (during the same period, French citizens participated in only two referendums), although in the past 120 years fewer than 250 initiatives have been put to referendum. The populace has been conservative, approving only about 10% of the initiatives put before them; in addition, they have often opted for a version of the initiative rewritten by government.
In the United States, no mechanism of direct democracy exists at the federal level, but over half of the states and many localities provide for citizen-sponsored ballot initiatives (also called "ballot measures", "ballot questions" or "propositions"), and the vast majority of states allow for referendums. Examples include the extensive use of referendums in the US state of California, a state with more than 20 million voters.
In New England, town meetings are often used, especially in rural areas, to manage local government. This creates a hybrid form of government, with a local direct democracy and a representative state government. For example, most Vermont towns hold annual town meetings in March in which town officers are elected, budgets for the town and schools are voted on, and citizens have the opportunity to speak and be heard on political matters.
Many countries such as the United Kingdom, the Netherlands, Belgium, Scandinavian countries, Thailand, Japan and Bhutan turned powerful monarchs into constitutional monarchs with limited or, often gradually, merely symbolic roles. For example, in England, constitutional monarchy began to emerge with the Glorious Revolution of 1688 and passage of the Bill of Rights 1689.
In other countries, the monarchy was abolished along with the aristocratic system (as in France, China, Russia, Germany, Austria, Hungary, Italy, Greece and Egypt). An elected president, with or without significant powers, became the head of state in these countries.
Élite upper houses of legislatures, which often had lifetime or hereditary tenure, were common in many nations. Over time, these either had their powers limited (as with the British House of Lords) or else became elective and remained powerful (as with the Australian Senate).
The term republic has many different meanings, but today often refers to a representative democracy with an elected head of state, such as a president, serving for a limited term, in contrast to states with a hereditary monarch as a head of state, even if these states also are representative democracies with an elected or appointed head of government such as a prime minister.
The Founding Fathers of the United States rarely praised and often criticised democracy, which in their time tended to mean direct democracy, often without the protection of a constitution enshrining basic rights; James Madison argued, especially in The Federalist No. 10, that what distinguished a democracy from a republic was that the former became weaker as it got larger and suffered more violently from the effects of faction, whereas a republic could get stronger as it got larger and combat faction by its very structure.
What was critical to American values, John Adams insisted, was that the government be "bound by fixed laws, which the people have a voice in making, and a right to defend." As Benjamin Franklin was leaving the convention that had drafted the U.S. Constitution, a woman asked him, "Well, Doctor, what have we got—a republic or a monarchy?" He replied, "A republic—if you can keep it."
A liberal democracy is a representative democracy in which the ability of the elected representatives to exercise decision-making power is subject to the rule of law, and moderated by a constitution or laws that emphasise the protection of the rights and freedoms of individuals, and which places constraints on the leaders and on the extent to which the will of the majority can be exercised against the rights of minorities (see civil liberties).
In a liberal democracy, it is possible for some large-scale decisions to emerge from the many individual decisions that citizens are free to make. In other words, citizens can "vote with their feet" or "vote with their dollars", resulting in significant informal government-by-the-masses that exercises many "powers" associated with formal government elsewhere.
Socialist thought has several different views on democracy. Social democracy, democratic socialism, and the dictatorship of the proletariat (usually exercised through Soviet democracy) are some examples. Many democratic socialists and social democrats believe in a form of participatory democracy and/or workplace democracy combined with a representative democracy.
Within Marxist orthodoxy there is a hostility to what is commonly called "liberal democracy", which they simply refer to as parliamentary democracy because of its often centralised nature. Because of their desire to eliminate the political elitism they see in capitalism, Marxists, Leninists and Trotskyists believe in direct democracy implemented through a system of communes (which are sometimes called soviets). This system ultimately manifests itself as council democracy and begins with workplace democracy. (See Democracy in Marxism.)
Democracy cannot consist solely of elections that are nearly always fictitious and managed by rich landowners and professional politicians.
Anarchists are split in this domain, depending on whether they believe that majority rule is tyrannical or not. The only form of democracy considered acceptable to many anarchists is direct democracy. Pierre-Joseph Proudhon argued that the only acceptable form of direct democracy is one in which it is recognised that majority decisions are not binding on the minority, even when unanimous. However, the anarcho-communist Murray Bookchin criticised individualist anarchists for opposing democracy, and said that "majority rule" is consistent with anarchism.
Some anarcho-communists oppose the majoritarian nature of direct democracy, feeling that it can impede individual liberty and opt in favour of a non-majoritarian form of consensus democracy, similar to Proudhon's position on direct democracy. Henry David Thoreau, who did not self-identify as an anarchist but argued for "a better government" and is cited as an inspiration by some anarchists, argued that people should not be in the position of ruling others or being ruled when there is no consent.
Anarcho-capitalists, voluntaryists and other right-anarchists oppose institutional democracy, as they consider it in conflict with widely held moral values, ethical principles and their conception of individual rights. The a priori Rothbardian argument is that the state is a coercive institution which necessarily violates the non-aggression principle (NAP). Some right-anarchists also criticise democracy on a posteriori consequentialist grounds, in terms of its inefficiency or inability to bring about the maximisation of individual liberty. They maintain that the people who participate in democratic institutions are foremost driven by economic self-interest.
Sometimes called "democracy without elections", sortition chooses decision makers via a random process. The intention is that those chosen will be representative of the opinions and interests of the people at large, and be more fair and impartial than an elected official. The technique was in widespread use in Athenian Democracy and is still used in modern jury selection.
A consociational democracy allows for simultaneous majority votes in two or more ethno-religious constituencies, and policies are enacted only if they gain majority support from both or all of them.
A consensus democracy, in contrast, would not be dichotomous. Instead, decisions would be based on a multi-option approach, and policies would be enacted if they gained sufficient support, either in a purely verbal agreement or via a consensus vote, that is, a multi-option preference vote. If the threshold of support were set at a sufficiently high level, minorities would, as it were, be protected automatically. Furthermore, any voting would be ethno-colour blind.
Qualified majority voting is designed by the Treaty of Rome to be the principal method of reaching decisions in the European Council of Ministers. This system allocates votes to member states in part according to their population, but heavily weighted in favour of the smaller states. This might be seen as a form of representative democracy, but representatives to the Council might be appointed rather than directly elected.
Inclusive democracy is a political theory and political project that aims for direct democracy in all fields of social life: political democracy in the form of face-to-face assemblies which are confederated, economic democracy in a stateless, moneyless and marketless economy, democracy in the social realm, i.e. self-management in places of work and education, and ecological democracy which aims to reintegrate society and nature. The theoretical project of inclusive democracy emerged from the work of political philosopher Takis Fotopoulos in "Towards An Inclusive Democracy" and was further developed in the journal Democracy & Nature and its successor The International Journal of Inclusive Democracy.
The basic unit of decision making in an inclusive democracy is the demotic assembly, i.e. the assembly of demos, the citizen body in a given geographical area which may encompass a town and the surrounding villages, or even neighbourhoods of large cities. An inclusive democracy today can only take the form of a confederal democracy that is based on a network of administrative councils whose members or delegates are elected from popular face-to-face democratic assemblies in the various demoi. Thus, their role is purely administrative and practical, not one of policy-making like that of representatives in representative democracy.
The citizen body is advised by experts, but it is the citizen body which functions as the ultimate decision-taker. Authority can be delegated to a segment of the citizen body to carry out specific duties, for example to serve as members of popular courts, or of regional and confederal councils. Such delegation is made, in principle, by lot, on a rotation basis, and is always recallable by the citizen body. Delegates to regional and confederal bodies should have specific mandates.
A Parpolity or Participatory Polity is a theoretical form of democracy that is ruled by a Nested Council structure. The guiding philosophy is that people should have decision making power in proportion to how much they are affected by the decision. Local councils of 25–50 people are completely autonomous on issues that affect only them, and these councils send delegates to higher level councils who are again autonomous regarding issues that affect only the population affected by that council.
A council court of randomly chosen citizens serves as a check on the tyranny of the majority, and rules on which body gets to vote on which issue. Delegates may vote differently from how their sending council might wish, but are mandated to communicate the wishes of their sending council. Delegates are recallable at any time. Referendums are possible at any time via votes of most lower-level councils; however, not every issue is put to a referendum, as this would most likely be a waste of time. A parpolity is meant to work in tandem with a participatory economy.
Cosmopolitan democracy, also known as Global democracy or World Federalism, is a political system in which democracy is implemented on a global scale, either directly or through representatives. An important justification for this kind of system is that the decisions made in national or regional democracies often affect people outside the constituency who, by definition, cannot vote. By contrast, in a cosmopolitan democracy, the people who are affected by decisions also have a say in them.
According to its supporters, any attempt to solve global problems is undemocratic without some form of cosmopolitan democracy. The general principle of cosmopolitan democracy is to expand some or all of the values and norms of democracy, including the rule of law; the non-violent resolution of conflicts; and equality among citizens, beyond the limits of the state. To be fully implemented, this would require reforming existing international organisations, e.g. the United Nations, as well as the creation of new institutions such as a World Parliament, which ideally would enhance public control over, and accountability in, international politics.
Cosmopolitan Democracy has been promoted, among others, by physicist Albert Einstein, writer Kurt Vonnegut, columnist George Monbiot, and professors David Held and Daniele Archibugi. The creation of the International Criminal Court in 2003 was seen as a major step forward by many supporters of this type of cosmopolitan democracy.
Creative Democracy is advocated by the American philosopher John Dewey. The main idea of Creative Democracy is that democracy encourages individual capacity building and interaction within society. In his work "Creative Democracy: The Task Before Us", Dewey argues that democracy is a way of life and an experience built on faith in human nature, faith in human beings, and faith in working with others. Democracy, in Dewey's view, is a moral ideal requiring actual effort and work by people; it is not an institutional concept that exists outside of ourselves. "The task of democracy", Dewey concludes, "is forever that of creation of a freer and more humane experience in which all share and to which all contribute".
Aside from the public sphere, similar democratic principles and mechanisms of voting and representation have been used to govern other kinds of groups. Many non-governmental organisations decide policy and leadership by voting. Most trade unions and cooperatives are governed by democratic elections. Corporations are controlled by shareholders on the principle of one share, one vote.
Aristotle contrasted rule by the many (democracy/polity), with rule by the few (oligarchy/aristocracy), and with rule by a single person (tyranny or today autocracy/absolute monarchy). He also thought that there was a good and a bad variant of each system (he considered democracy to be the degenerate counterpart to polity).
For Aristotle, the underlying principle of democracy is freedom, since only in a democracy can the citizens have a share in freedom. In essence, he argues that this is what every democracy should make its aim. There are two main aspects of freedom: being ruled and ruling in turn, since everyone is equal according to number, not merit; and being able to live as one pleases.
But one factor of liberty is to govern and be governed in turn; for the popular principle of justice is to have equality according to number, not worth, ... And one is for a man to live as he likes; for they say that this is the function of liberty, inasmuch as to live not as one likes is the life of a man that is a slave.
The theory of aggregative democracy claims that the aim of the democratic processes is to solicit citizens' preferences and aggregate them together to determine what social policies society should adopt. Therefore, proponents of this view hold that democratic participation should primarily focus on voting, where the policy with the most votes gets implemented.
Different variants of aggregative democracy exist. Under minimalism, democracy is a system of government in which citizens have given teams of political leaders the right to rule in periodic elections. According to this minimalist conception, citizens cannot and should not "rule" because, for example, on most issues, most of the time, they have no clear views or their views are not well-founded. Joseph Schumpeter articulated this view most famously in his book Capitalism, Socialism, and Democracy. Contemporary proponents of minimalism include William H. Riker, Adam Przeworski, Richard Posner.
According to the theory of direct democracy, on the other hand, citizens should vote directly, not through their representatives, on legislative proposals. Proponents of direct democracy offer varied reasons to support this view. Political activity can be valuable in itself, it socialises and educates citizens, and popular participation can check powerful elites. Most importantly, citizens do not really rule themselves unless they directly decide laws and policies.
Governments will tend to produce laws and policies that are close to the views of the median voter – with half to their left and the other half to their right. This is not actually a desirable outcome as it represents the action of self-interested and somewhat unaccountable political elites competing for votes. Anthony Downs suggests that ideological political parties are necessary to act as a mediating broker between individual and governments. Downs laid out this view in his 1957 book An Economic Theory of Democracy.
Robert A. Dahl argues that the fundamental democratic principle is that, when it comes to binding collective decisions, each person in a political community is entitled to have his/her interests be given equal consideration (not necessarily that all people are equally satisfied by the collective decision). He uses the term polyarchy to refer to societies in which there exists a certain set of institutions and procedures which are perceived as leading to such democracy. First and foremost among these institutions is the regular occurrence of free and open elections which are used to select representatives who then manage all or most of the public policy of the society. However, these polyarchic procedures may not create a full democracy if, for example, poverty prevents political participation.
Deliberative democracy is based on the notion that democracy is government by deliberation. Unlike aggregative democracy, deliberative democracy holds that, for a democratic decision to be legitimate, it must be preceded by authentic deliberation, not merely the aggregation of preferences that occurs in voting. Authentic deliberation is deliberation among decision-makers that is free from distortions of unequal political power, such as power a decision-maker obtained through economic wealth or the support of interest groups. If the decision-makers cannot reach consensus after authentically deliberating on a proposal, then they vote on the proposal using a form of majority rule.
Radical democracy is based on the idea that there are hierarchical and oppressive power relations that exist in society. Democracy's role is to make visible and challenge those relations by allowing for difference, dissent and antagonisms in decision making processes.
Economists like Milton Friedman have strongly criticised the efficiency of democracy. They base this on their premise of the irrational voter. Their argument is that voters are highly uninformed about many political issues, especially relating to economics, and have a strong bias about the few issues on which they are fairly knowledgeable.
Popular rule as a façade
The 20th-century Italian thinkers Vilfredo Pareto and Gaetano Mosca (independently) argued that democracy was illusory, and served only to mask the reality of elite rule. Indeed, they argued that elite oligarchy is the unbendable law of human nature, due largely to the apathy and division of the masses (as opposed to the drive, initiative and unity of the elites), and that democratic institutions would do no more than shift the exercise of power from oppression to manipulation. As Louis Brandeis once professed, "We may have democracy, or we may have wealth concentrated in the hands of a few, but we can't have both."
All political parties in Canada are now cautious about criticism of the high level of immigration, because, as noted by The Globe and Mail, "in the early 1990s, the old Reform Party was branded 'racist' for suggesting that immigration levels be lowered from 250,000 to 150,000." As Professor of Economics Don J. DeVoretz pointed out, "In a liberal democracy such as Canada, the following paradox persists. Even though the majority of respondents answer yes to the question: 'Are there too many immigrant arrivals each year?' immigrant numbers continue to rise until a critical set of economic costs appear."
Plato's The Republic presents a critical view of democracy through the narration of Socrates: "Democracy, which is a charming form of government, full of variety and disorder, and dispensing a sort of equality to equals and unequals alike." In his work, Plato lists five forms of government from best to worst. Assuming that the Republic was intended to be a serious critique of the political thought in Athens, Plato argues that only Kallipolis, an aristocracy led by the unwilling philosopher-kings (the wisest men), is a just form of government.
James Madison critiqued direct democracy (which he referred to simply as "democracy") in Federalist No. 10, arguing that representative democracy—which he described using the term "republic"—is a preferable form of government, saying: "... democracies have ever been spectacles of turbulence and contention; have ever been found incompatible with personal security or the rights of property; and have in general been as short in their lives as they have been violent in their deaths." Madison offered that republics were superior to democracies because republics safeguarded against tyranny of the majority, stating in Federalist No. 10: "the same advantage which a republic has over a democracy, in controlling the effects of faction, is enjoyed by a large over a small republic".
More recently, democracy has been criticised for not offering enough political stability. As governments are frequently elected on and off, there tend to be frequent changes in the policies of democratic countries both domestically and internationally. Even if a political party maintains power, vociferous, headline-grabbing protests and harsh criticism from the mass media are often enough to force sudden, unexpected political change. Frequent policy changes with regard to business and immigration are likely to deter investment and so hinder economic growth. For this reason, many people have put forward the idea that democracy is undesirable for a developing country in which economic growth and the reduction of poverty are top priorities.
This opportunist alliance not only has the handicap of having to cater to too many ideologically opposing factions, but it is usually short lived since any perceived or actual imbalance in the treatment of coalition partners, or changes to leadership in the coalition partners themselves, can very easily result in the coalition partner withdrawing its support from the government.
In representative democracies, it may not benefit incumbents to conduct fair elections. A study showed that incumbents who rig elections stay in office 2.5 times as long as those who permit fair elections. Above a certain level of per capita income, democracies have been found to be less prone to violence, but below that threshold, more prone to violence. Election misconduct is more likely in countries with low per capita incomes, small populations, rich natural resources, and a lack of institutional checks and balances. Sub-Saharan countries, as well as Afghanistan, all tend to fall into that category.
Governments that hold frequent elections tend to have significantly more stable economic policies than those that hold infrequent elections. However, this trend does not apply to governments that hold fraudulent elections.
Democracy in modern times has almost always faced opposition from the previously existing government, and many times it has faced opposition from social elites. The implementation of a democratic government within a non-democratic state is typically brought about by democratic revolution.
Post-Enlightenment ideologies such as fascism, Nazism and neo-fundamentalism oppose democracy on different grounds, generally citing that the concept of democracy as a constant process is flawed and detrimental to a preferable course of development.
Several philosophers and researchers have outlined historical and social factors supporting the evolution of democracy. Cultural factors such as Protestantism influenced the development of democracy, the rule of law, human rights and political liberty (the faithful elected their priests, and religious freedom and tolerance were practised).
Others mentioned the influence of wealth (e.g. S. M. Lipset, 1959). In a related theory, Ronald Inglehart suggests that the increase in living standards has convinced people that they can take their basic survival for granted, and led to increased emphasis on self-expression values, which is highly correlated to democracy.
Carroll Quigley concludes that the characteristics of weapons are the main predictor of democracy: democracy tends to emerge only when the best weapons available are easy for individuals to buy and use. By the 1800s, guns were the best weapons available, and in America almost everyone could afford to buy a gun and could learn how to use it fairly easily. Governments couldn't do any better: it became the age of mass armies of citizen soldiers with guns. Similarly, Periclean Greece was an age of the citizen soldier and democracy.
Recently established theories stress the relevance of education and human capital and within them of cognitive ability to increasing tolerance, rationality, political literacy and participation. Two effects of education and cognitive ability are distinguished: a cognitive effect (competence to make rational choices, better information processing) and an ethical effect (support of democratic values, freedom, human rights etc.), which itself depends on intelligence.
Evidence that is consistent with conventional theories of why democracy emerges and is sustained has been hard to come by. Recent statistical analyses have challenged modernisation theory by demonstrating that there is no reliable evidence for the claim that democracy is more likely to emerge when countries become wealthier, more educated, or less unequal. Neither is there convincing evidence that increased reliance on oil revenues prevents democratisation, despite a vast theoretical literature on "the resource curse" that asserts that oil revenues sever the link between citizen taxation and government accountability, the key to representative democracy. The lack of evidence for these conventional theories of democratisation has led researchers to search for the "deep" determinants of contemporary political institutions, be they geographical or demographic.
In the 21st century, democracy has become such a popular method of reaching decisions that its application beyond politics to other areas such as entertainment, food and fashion, consumerism, urban planning, education, art, literature, science and theology has been criticised as "the reigning dogma of our time". The argument is that applying a populist or market-driven approach to art and literature for example, means that innovative creative work goes unpublished or unproduced. In education, the argument is that essential but more difficult studies are not undertaken. Science, which is a truth-based discipline, is particularly corrupted by the idea that the correct conclusion can be arrived at by popular vote.
Robert Michels asserts that although democracy can never be fully realised, democracy may be developed automatically in the act of striving for democracy: "The peasant in the fable, when on his death-bed, tells his sons that a treasure is buried in the field. After the old man's death the sons dig everywhere in order to discover the treasure. They do not find it. But their indefatigable labor improves the soil and secures for them a comparative well-being. The treasure in the fable may well symbolise democracy."
Dr. Harald Wydra, in his book Communism and The Emergence of Democracy, maintains that the development of democracy should not be viewed as a purely procedural or static concept, but rather as an ongoing "process of meaning formation". Drawing on Claude Lefort's idea of the empty place of power, that "power emanates from the people [...] but is the power of nobody", he remarks that democracy is reverence to a symbolic, mythical authority, as in reality there is no such thing as the people or demos. Democratic political figures are not supreme rulers but rather temporary guardians of an empty place. Any claim to substance, such as the collective good, the public interest or the will of the nation, is subject to the competitive struggle for gaining the authority of office and government. The essence of the democratic system is an empty place, void of real people, which can only be temporarily filled and never be appropriated. The seat of power is there, but remains open to constant change. As such, what "democracy" is or what is "democratic" progresses throughout history as a continual and potentially never-ending process of social construction.
In 2010, a study by a German military think tank analysed how peak oil might change the global economy. The study raises fears for the survival of democracy itself. It suggests that parts of the population could perceive the upheaval triggered by peak oil as a general systemic crisis. This would create "room for ideological and extremist alternatives to existing forms of government".
- Oxford English Dictionary: Democracy.
- Diamond, L., Lecture at Hilla University for Humanistic Studies January 21, 2004: "What is Democracy"
- δημοκρατία in Henry George Liddell, Robert Scott, "A Greek-English Lexicon", at Perseus
- Wilson, N. G. (2006). Encyclopedia of ancient Greece. New York: Routledge. p. 511. ISBN 0-415-97334-1.
- Barker, Ernest (1906). The Political Thought of Plato and Aristotle. Chapter VII, Section 2: G. P. Putnam's Sons.
- Jarvie, 2006, pp. 218–9
- Liberty and justice for some at Economist.com
- O'Donnell, G., In Diamond, L.; Morlino, L., Assessing the Quality of Democracy, JHU Press, 2005, p. 3.
- R. Alan Dahl, I. Shapiro, J. A. Cheibub, The Democracy Sourcebook, MIT Press 2003, ISBN 0-262-54147-5, Google Books link
- M. Hénaff, T. B. Strong, Public Space and Democracy, University of Minnesota Press, ISBN 0-8166-3387-8
- Kimber, Richard (1989). "On Democracy". Scandinavian Political Studies 12 (3): 201, 199–219. ISSN 0080-6757.
- Roger Scruton (2013-08-09). "A Point of View: Is democracy overrated?". BBC News.
- "Parliamentary sovereignty". UK Parliament. Retrieved 18 August 2014.
- "Independence". Courts and Tribunals Judiciary. Retrieved 9 November 2014.
- "All-party meet vows to uphold Parliament supremacy". The New Indian Express. 2 August 2013. Retrieved 18 August 2013.
- A. Barak,The Judge in a Democracy, Princeton University Press, 2006, p. 27, ISBN 0-691-12017-X, Google Books link
- H. Kelsen, Ethics, Vol. 66, No. 1, Part 2: Foundations of Democracy (October 1955), pp. 1–101
- Martha Nussbaum, Women and human development: the capabilities approach (Cambridge University Press, 2000).
- Larry Jay Diamond, Marc F. Plattner (2006). Electoral systems and democracy p.168. Johns Hopkins University Press, 2006.
- Montesquieu, Spirit of the Laws, Bk. II, ch. 2–3.
- William R. Everdell. The End of Kings: A History of Republics and Republicans. University of Chicago Press, 2000.
- Barbara Kipfer (2002). Flip Dictionary. Writer's Digest Books. p. 45. ISBN 1582971404.
- John Dunn, Democracy: the unfinished journey 508 BC – 1993 AD, Oxford University Press, 1994, ISBN 0-19-827934-5
- Raaflaub, Ober & Wallace 2007.
- R. Po-chia Hsia, Lynn Hunt, Thomas R. Martin, Barbara H. Rosenwein, and Bonnie G. Smith, The Making of the West, Peoples and Cultures, A Concise History, Volume I: To 1740 (Boston and New York: Bedford/St. Martin's, 2007), 44.
- Aristotle Book 6
- Leonid E. Grinin, The Early State, Its Alternatives and Analogues 'Uchitel' Publishing House, 2004
- Susan Lape, Reproducing Athens: Menander's Comedy, Democratic Culture, and the Hellenistic City, Princeton University Press, 2009, p. 4, ISBN 1400825911
- Raaflaub, Ober & Wallace 2007, p. 5.
- Ober & Hedrick 1996, p. 107.
- Clarke, 2001, pp. 194–201
- "Full historical description of the Spartan government". Rangevoting.org. Retrieved 2013-09-28.
- Terrence A. Boring, Literacy in Ancient Sparta, Leiden Netherlands (1979). ISBN 90-04-05971-7
- "Ancient Rome from the earliest times down to 476 A.D". Annourbis.com. Retrieved 2010-08-22.
- Watson 2005, p. 285
- Livy 2002, p. 34
- Watson 2005, p. 271
- "Constitution 1,000 years ago". The Hindu (Chennai, India). 2008-07-11.
- "Magna Carta: an introduction". The British Library. Retrieved 28 January 2015.
Magna Carta is sometimes regarded as the foundation of democracy in England. ...Revised versions of Magna Carta were issued by King Henry III (in 1216, 1217 and 1225), and the text of the 1225 version was entered onto the statute roll in 1297. ...The 1225 version of Magna Carta had been granted explicitly in return for a payment of tax by the whole kingdom, and this paved the way for the first summons of Parliament in 1265, to approve the granting of taxation.
- "Citizen or Subject?". The National Archives. Retrieved 2013-11-17.
- "The January Parliament and how it defined Britain". The Telegraph. 20 January 2015. Retrieved 28 January 2015.
- "Origins and growth of Parliament". The National Archives. Retrieved 2013-11-17.
- "Rise of Parliament". The National Archives. Retrieved 2010-08-22.
- "Constitutionalism: America & Beyond". Bureau of International Information Programs (IIP), U.S. Department of State. Retrieved 30 October 2014.
The earliest, and perhaps greatest, victory for liberalism was achieved in England. The rising commercial class that had supported the Tudor monarchy in the 16th century led the revolutionary battle in the 17th, and succeeded in establishing the supremacy of Parliament and, eventually, of the House of Commons. What emerged as the distinctive feature of modern constitutionalism was not the insistence on the idea that the king is subject to law (although this concept is an essential attribute of all constitutionalism). This notion was already well established in the Middle Ages. What was distinctive was the establishment of effective means of political control whereby the rule of law might be enforced. Modern constitutionalism was born with the political requirement that representative government depended upon the consent of citizen subjects.... However, as can be seen through provisions in the 1689 Bill of Rights, the English Revolution was fought not just to protect the rights of property (in the narrow sense) but to establish those liberties which liberals believed essential to human dignity and moral worth. The "rights of man" enumerated in the English Bill of Rights gradually were proclaimed beyond the boundaries of England, notably in the American Declaration of Independence of 1776 and in the French Declaration of the Rights of Man in 1789.
Ideal Gas Law Worksheet
Posted in Worksheet, by Kimberly R. Foreman
Ideal gas law worksheet: use the ideal gas law, PV = nRT, and the universal gas constant R to solve the following problems. If pressure is given in kPa, use R = 8.31 L·kPa/(mol·K); if it is given in atm, use R = 0.0821 L·atm/(mol·K). Typical problems ask, for example, what pressure is required to contain a given amount of nitrogen gas in a container of known volume and temperature, or what volume a sample of oxygen gas occupies when collected at a given pressure.
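As a rough illustration of how such problems are worked, here is a minimal Python sketch. The function and the numbers below are placeholders chosen for the example, not values from the worksheet.

R_ATM = 0.0821      # universal gas constant in L*atm/(mol*K)

def pressure_atm(n_mol, volume_l, temp_c):
    # Solve PV = nRT for pressure, returning atm
    temp_k = temp_c + 273.15          # temperature must be in kelvins
    return n_mol * R_ATM * temp_k / volume_l

# Placeholder example: 2.0 mol of nitrogen in a 30.0 L container at 25 degrees C
print(round(pressure_atm(2.0, 30.0, 25.0), 2))   # about 1.63 atm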
List of Ideal Gas Law Worksheets
Further problems follow the same pattern: given the number of moles of a gas, its pressure, and its volume, find the temperature; given an amount of nitrogen gas, a container volume, and a temperature, find the required pressure; or find the volume of oxygen gas collected at a known pressure. The accompanying model summarizes the gas laws: temperature must be expressed on the Kelvin (absolute) scale, T(K) = T(°C) + 273, and Boyle's law states that at constant temperature the volume of a gas varies inversely with its pressure (PV = k, with k constant).
1. Ideal Gas Law Calculator Images Ideal Gas Law
L Charles law worksheet answers. give the law both in words and in the form of an equation. how is the volume of a gas affected by a decrease in temperature. Charles law problems worksheet answers. a gas with a volume of at a pressure of is allowed to expand to a volume of.
2. Ideal Gas Law Teaching Chemistry Chemistry
There is also a gas constant, r. the gas constant depends on the unit for pressure. r. r. l example a deep underground cavern contains. x l of ch gas at a pressure of. x and a. Mixed gas laws worksheet. then write the name of the gas law used to solve each question in the left margin next to each question.
a gas occupies. l at. mm pressure. what is the volume at mm at the same temperature. a constant volume of oxygen is heated from to. the initial pressure is. Gas laws and scuba diving worksheet answers. convert a buoyancy vest used in scuba diving is filled to a volume of.
3. Ideal Gas Law Explained Chemistry Worksheets Teaching
Name. date. use the ideal gas law and the universal gas constant. r. to solve the following problems. if pressure is needed in then convert by multiplying by. kpa to get. r. tools chemistry gases and ideal gas law worksheet answer key. a sample of pure gas at c and mm occupied a volume of ml.
what is the number of moles of gas in this combined gas law and ideal gas law name. a container of gas is exerting a pressure of while at a temperature of. calculate the pressure of this same amount of gas in a container at a temperature of. k pct k.
4. Ideal Gas Law Worksheet Answer Key Work
At, a sample of gas occupies. ml. In this ideal gas law worksheet, high solve ideal gas problems. they find the temperature, the volume and the number of moles of gases using the ideal gas equation. they also create their own ideal gas law problem and include.
At low pressure less than atmosphere and high temperature greater than c, most gases obey the ideal gas equation. each quantity in the equation is usually expressed in the following units p pressure, measured in atmospheres. v volume, measured in liters.
5. Ideal Gas Law Worksheet Community College
Student exploration law and law vocabulary absolute zero, law, law, kelvin scale, pressure prior knowledge question do this before using the gizmo. a small helium tank measures about two feet cm high. yet it can fill over balloons law is one of the oldest companies to receive a patent on a worksheet.
question. the volume of oxygen gas at c is. law and law worksheet. more law and law worksheet. chemistry a study of matter segments. Experiment,charleslaw thispatternofbehavior. supposethatasampleofgaswereto. the temperature at which. Mar, prior to dealing with law worksheet answer key, please understand that knowledge is actually each of our crucial for a better down the road, and learning does not only stop after a school bell rings.
6. Ideal Gas Law Worksheet Ideal Gas Law Worksheets
,An activity worksheet that reviews the ideal gas law equation incorporating the four individual gas laws law, gay law, law, law. included are four calculations using the ideal gas equation, a summary of the graphs for each gas law and the equations that.
Ideal gas law packet name ideal gas law given the units of. p n v t then r what pressure is required to contain. moles of nitrogen gas in a. l container at a. temperature of. c answer. oxygen gas is collected at a pressure of in a container which has a volume of.
7. Gas Worksheet Answers Work
Ideal gas law: R is the universal gas constant. Using the gas constant and the ideal gas law, it is possible to determine the value of any of the four variables (P, V, n, or T) knowing the other three; mass can even be used as one of the knowns in place of moles. In reality, an ideal gas does not exist. In this unit, however, we assume that gases behave ideally; this makes the math easier and is a close approximation. Real gases behave like an ideal gas at high temperature and low pressure.
8. Ideal Gas Law Worksheets Ideal Gas Law Gas Laws
We use the formula solve the following problems assuming constant temperature. assume all number are significant figures. Figure shows how atmospheric pressure changes with altitude. figure shows how the molar mass of air changes with altitude. use the graphs and your knowledge of the ideal gas law to calculate the density of air at altitudes of km and km.
back to top gases worksheet gases law of combining volumes laws worksheet answer key. problems worksheet. super teacher worksheets answers. structure worksheet. ratio and proportion worksheets with answers. free worksheet. math aids com fractions worksheets answers.
9. Gas Variables Worksheet Answers Unique Gas
What is the density of carbon dioxide gas if. g occupies a volume of. a block of wood. cm on each side has a mass of g. Density worksheet chemistry in context perhaps someone has tried to trick you with this question which is heavier, a pound of lead or a pound of feathers many people would instinctively answer lead.
when they give this incorrect answer, these people are really thinking of density. Find the density of a block with a length of. cm, a width of. cm, a height. cm, and a mass of g. would this block float or sink in water v. cm. cm. cm d m v g. g c since the density of the block is less than the density of water, the block would float in water.
10. Gas Variables Worksheet Answers Gas Law Worksheet
Name this variable, and explain why a container is necessary. in your answer, consider the external and internal pressure data given in model. May, gas variables worksheet answers gas law worksheet by teachers pay teachers tough gas laws worksheet doc the gas laws word search gas worksheet answers and work work with quiz worksheet combined gas law study com gases s law law gay s law combined gas.
Gas variables worksheet answers. title gas variables packet answers created date pm. Dec, gas variables worksheet answers. download gas variables worksheet answers document. on this page you can read or download gas variables worksheet answers in format.
if you see any interesting for you, use our search form on bottom. variables and patterns school district overview. Countless book gas variables worksheet answers and collections to check out. we additionally find the money for variant types and after that type page.
11. Law Worksheet Answer Key Law
Determine what conversion facts are needed. write the starting fact with units included as a fraction. write the ending fact with units included as a fraction. between the starting and ending, build a product of fractions using the necessary conversion facts.
12. Gas Laws Poster Chemistry Lessons Chemistry
Mm. if. l of nitrogen at mm are compressed to mm at constant temperature. what is the new volume. a gas with a volume of. l at a pressure of is allowed to expand to a volume. The ideal gas law is used like any other gas law, with attention paid to the units and making sure that temperature is expressed in kelvins.
13. Gas Laws 6 Laws Presentation Differentiated Fold
A gas occupies a volume of cc at find its volume at degrees and mm of of a gas at c is cooled to c at constant pressure. calculate volume of a gas at, gas laws made simple. corny but kind of clever. tags aerospace medicine blog posts medical science.
14. Official Gas Laws Worksheet 1 Answer Key Gas Laws Chemistry
Law practice problems Aug, ideal gas law worksheet and answer key chemistry by keystone science solved what is the ideal gas equation what is the val gas laws worksheet with answer teaching resources calculations using the ideal gas equation practice khan academy gas laws and nature of gases presentation free ideal gas law lesson plan for.
Ideal gas law. how many moles of gas are contained in. at. and. mm pressure. g of is contained in a. l container at. what is the pressure in this container in mm. Chemistry worksheet on the topic of the ideal gas law one of the fundamental gas laws. this worksheet contains an explanation of the relationship between the volume of a gas, pressure of a gas, temperature and the number of moles of the gas.
15. Gas Law Maze Ideal Gas Law Teacher Life Student
These worksheets are related to R, the ideal gas law constant, and its measurement, and cover Boyle's law, Charles's law, Gay-Lussac's law, the combined gas law, the universal gas constant, standard conditions, the number density of a gas, molar volume, and the mass density of a gas.
The ideal gas law combines the four variables that describe a gas (absolute pressure P, volume V, temperature T, and number of moles n) into one equation, PV = nRT. The universal gas constant, R, is the same for all ideal gases.
16. Ideal Gas Law Worksheet Answer Key Included Distance
The value and units of. student inquiry worksheet analysis laws packet. ideal gas law worksheet use the ideal gas law, perv, and the universal gas constant r. to solve the following problems if pressure is needed in pa then convert b b. kpa to get r. .
Sep, the ideal gas law worksheet has been around for a while and it has become a quiz in schools and colleges. this particular worksheet is only about a third as long as its actual title, which means that it can be finished in just minutes. students can start on this particular worksheet and just keep answering the questions until they.
17. Law Worksheet Answer Key Mixed
Fantastic as a summary or st, prior to this activity, we learned about what matter is made up of, the four states of matter, how the particles behave in each state of. Jun, law and law gizmo worksheet answers with chapter thermodynamics worksheet. worksheet,.
18. Law Law Gizmo Worksheet Answers
In other words, as volume decreases, the pressure increases, and vice versa: P1 x V1 = P2 x V2. A worked example might start with a balloon filled with a known volume of air at a known pressure. Typical gas law problems: the gas left in a used aerosol can is at a given pressure and temperature; if the can is thrown into a fire, what is the internal pressure of the gas when its temperature rises to a higher value? A sample of oxygen gas occupies a known volume at a known pressure; what volume will it occupy at a different pressure if the temperature is held constant?
19. Law Worksheet Answers Honors Chemistry
Mar, worksheets for class chemistry one of the best teaching strategies employed in most classrooms today is worksheets. class chemistry worksheet for students has been used by teachers students to develop logical, lingual, analytical, and capabilities.
20. Images Ideal Gas Law Worksheet Ideal Gas Law
A common example: how many moles of gas (air) are in the lungs of an adult with a given lung capacity, assuming the lungs are at a given pressure and at body temperature? Hint: V, P, and T are given, so use the equation PV = nRT. Another exercise: using the information from standard temperature and pressure (STP) conditions, determine the value of the ideal gas constant.
21. Sli Gases Iales
Typical problems: a sample containing a known number of moles of oxygen at a given temperature and pressure occupies what volume? A sample of hydrogen at a given temperature occupies a known volume; under what pressure is this sample? The ideal gas law is an equation that relates the volume, temperature, pressure, and amount of gas particles through a constant. The ideal gas constant is abbreviated with the variable R and has the value 0.0821 L·atm/(mol·K). The ideal gas law can be used when three of the four gas variables are known.
22. Ideal Gas Law Worksheet Law Worksheet
Liters at a pressure of atmosphere at the surface where the temperature is a balmy, what will the volume of air be in the vest when the diver dives to a. Worksheets gas laws. a sample of. moles of gas is placed in a container of volume of. l. what is the pressure of the gas in if the gas is at o c what would be the volume of this gas if placed at.
a. l flask contains. g of gas at and o c, what is the density and molar mass of the gas. The gas laws name period date law law states that the volume of a gas varies inversely with its pressure if temperature is held constant. if one goes up, the other down.
23. Solving Combined Gas Law Problems Law
Typical combined gas law problems: a container of gas initially at a known pressure and temperature is heated; what will the pressure be at the higher temperature? A gas occupies a known volume at one pressure; what will its volume be at another pressure? What change in volume results if a sample of gas is cooled? Combined gas law practice: this type of problem, in which a sample measured under one set of conditions is converted to another (often STP), is very common and is solved with P1V1/T1 = P2V2/T2.
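A minimal sketch of that relationship in Python, solved for the new volume (the values are placeholders, since the worksheet's own numbers are not preserved here):

def combined_gas_law_v2(p1, v1, t1_k, p2, t2_k):
    # Return V2 from P1*V1/T1 = P2*V2/T2 (temperatures in kelvins)
    return p1 * v1 * t2_k / (t1_k * p2)

# Placeholder example: 5.0 L of gas at 1.2 atm and 300 K brought to STP (1 atm, 273 K)
print(round(combined_gas_law_v2(1.2, 5.0, 300.0, 1.0, 273.0), 2))   # about 5.46 L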
24. Worksheet Answer Key
Structure replication review key from structure and replication worksheet, sourcepinterest. com. work power and energy worksheets answers. processes replication protein synthesis transcription. back to replication. Mar, structure replication review key .
circulatory system worksheet from circulatory system worksheet, image source wave review worksheet answer key. since answering the issues in the worksheet is the same as studying a subject over and once more, needless to say pupils may realize deeply.
25. Worksheet Fun
Used with multiple classes. any issues please let me know. answers on the second slide. Ahead of talking about solving systems of linear inequalities worksheet answers, please know that education and learning can be all of our answer to a much better down the road, along with learning wont only avoid as soon as the school bell rings.
which being explained, we provide you with a a number of easy however helpful articles or blog posts along with themes produced suited to just. Print linear inequalities worksheets click the buttons to print each worksheet and answer key. solving inequalities lesson and practice.
26. Chemistry Answer Key Gas Worksheet
Aug, gas laws worksheet and answer key by s science shop ideal gas law resources quiz worksheet ideal gas law and the gas constant study com the ideal gas law chemistry gas law s worksheet free download combined gas law worksheet chemistry solved chem lab revised ideal gas law.
27. Ideal Gas Law Sample Problem Ideal Gas Law School Notes
K to. k and the volume is kept constant what final pressure would result if the original pressure was. mm ideal gas law problems. r. l p is in t is in kelvin v is in liters. gas law worksheet set problem determine the volume of occupied by. grams of carbon dioxide gas at.
28. Gas Laws Doodle Notes Science Doodle Notes
Of which currently being claimed, all of us provide a various very simple but informative content plus web templates created made for just about any informative purpose. Title word a,b law and law wkstkey. doc author white created date , and law worksheet free worksheets library from law worksheet answers, sourcecomprareninternet.
net. law worksheet answers from law worksheet answers, source. org. and law worksheet with answers by from law worksheet answers, sourcetes. comJan, just before discussing law chem worksheet answer key, be sure to understand that knowledge is all of our step to a much better the next day, along with learning wont just end the moment the education bell rings.
29. Combined Gas Law Practical Application Ideal Gas Law
Answer the following. Combined gas law. a law combines, , and law, indirect. combined gas formula. law. the direct relationship between the of moles and volume. formula. ideal gas law. the ideal law Using both the ideal gas law and law of partial pressures, calculate the total pressure of a.
30. Correlation Causation Worksheet Images
Determining a half, worksheet half life worksheet key chemistry workbook answers the from half life worksheet, sourcecathhsli. org nuclear chemistry half lives and radioactive dating dummies from half life worksheet, sourcedummies. comHow long does it take an initial concentration of.
31. Gas Law Problems Worksheet Answers
X problems. a gas occupies. liters at a pressure of. mm. what is the volume when the pressure is increased to. mm. This law worksheet worksheet is suitable for grade. although it was published in the year, this chemistry assignment is ideal for practicing the application of law.
32. Gas Law Quiz Gay Laws Gay
V ct. write the equation for and law in words. for a given mass at constant pressure, volume is directly proportional to temperature. in the animated gas lab, the unit of temperature is kelvin. The results for law worksheet answers. problems worksheet.
law worksheet answers. function worksheet. ideal gas law worksheet answers. problems worksheet. gas law review worksheet answers. free worksheet. combined gas law worksheet answers. practice worksheet. gas laws worksheet answers. Worksheets consisting of questions and answers covering the three gas laws pressure law, law and law.
33. Gas Law Quiz Law Grahams Law Ideal Gas Law
A related stoichiometry question asks what volume of oxygen is needed to burn a given number of moles of gasoline under stated conditions. Solutions to the ideal gas law practice worksheet: the ideal gas law states that PV = nRT, where P is the pressure of a gas, V is the volume of the gas, n is the number of moles of gas present, R is the ideal gas constant, and T is the temperature of the gas in kelvins.
34. Gas Laws Chemistry Homework Page Unit Bundle Gas Laws
Reviews. months ago. report. thanks a lot a. Chemistry worksheet curves and ice mice at ice name heating curve for water time, hours figure i figure i shows the temperature of. kilograms of ice starting at that is heated at a constant rate of joules per second.
35. Gas Laws Scuba Diving Worksheet Answer Key
However, the ideal gas law does not require a change in the conditions of a gas sample. the ideal gas law implies that if you know any three of the physical properties of a gas, you can calculate the fourth property. Mr. chemistry pages. this site contains information for chemistry, regents chemistry and applied chemistry at high school.
36. Ideal Gas Law Notes Worksheets
This worksheet has problems to solve. get free access see worksheet molar mass gram weights we know that grams are actually a measure of the mass of matter and not the weight. mass is the quantity of matter present weight is a measure of the pull of gravity on matter and is measured in pounds or newtons.
37. Gas Laws Worksheets Bundle Answer Key Work Shown
Share this post. tweet. author rocky. rocky is a emergency medicine doctor, senior flight surgeon in the, senior aviation medical examiner and of go flight medicine. Gas laws and scuba diving worksheet answer key, the reason for writing an article on gas laws is simply because i, like so many other divers, constantly forget the scuba gas laws taught to us in our.
38. Gas Variables Worksheet Answers Quiz Worksheet Ideal
Fourtwenty. us. you may also like. the living constitution worksheet answers. A sample of gas is compressed from. l to. l at constant temperature. if the pressure of this gas in the. l volume is. , what will the pressure be at. l list all known and unknown variables.
show all your work. Mixed gas laws worksheet solutions how many moles of gas occupy l at a pressure of. atmospheres and a temperature of k n. l moles of gas rt. l. atmmol. k k if. moles of o and. moles of n are placed in a. l tank at a temperature of. Worksheet gases.
what do we assume about ideal gases what is the ideal gas law give the units for each variable. if you know the number of moles of an ideal gas, what is the minimum number of variables that you need to know in order to fully determine the system.
39. Gas Variables Worksheet Answers Worksheet
Atomic structure key displaying top worksheets found for this concept. some of the worksheets for this concept are atomic structure, basic atomic structure work answer key chart, atomic structure review work answers, basic atomic structure work answer key, atomic structure and chemical bonds, honors unit atomic structure, skill and practice work.
Oct, ahead of talking about atomic structure worksheet answers chemistry, you should recognize that schooling is actually our own answer to a greater tomorrow, along with discovering just stop when the institution bell rings. which currently being reported, we all supply you with a assortment of uncomplicated nonetheless useful content articles as well as layouts made ideal for any Atomic notation worksheet key.
40. Gas Worksheet Answers Worksheet Ideal Gas
Molar mass get the gizmo ready get, create, make and sign. The molar mass of a compound is. analysis of a sample of the compound indicates that it contains. g n and. g o. find its molecular formula. determine the molecular formula of a compound with an empirical formula of and a formula mass of.
The volume of a gas varies linearly with temperature (Charles's law), and the ideal gas law can be rearranged to calculate the molar mass of unknown gases. Since n = mass (g) / molar mass, substituting into PV = nRT gives molar mass = (mass × R × T) / (P × V). Knowing that the units for density are mass per volume, this equation can be rewritten so that it equates density with molar mass: molar mass = (density × R × T) / P.
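That rearrangement can be sketched in Python as follows; the density, pressure, and temperature below are illustrative values, not figures from the worksheet.

R_ATM = 0.0821   # L*atm/(mol*K)

def molar_mass_from_density(density_g_per_l, pressure_atm, temp_k):
    # Molar mass M = d*R*T/P, with density in g/L, pressure in atm, temperature in K
    return density_g_per_l * R_ATM * temp_k / pressure_atm

# Illustrative example: a gas with density 1.96 g/L at 1.00 atm and 273 K
print(round(molar_mass_from_density(1.96, 1.00, 273.0), 1))   # about 43.9 g/mol, close to CO2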
Concentricity is a measure of how closely the center or axis of one feature matches that of a reference feature, in other words how true a geometric shape is to its ideal, perfectly centered form. The concentricity value can be calculated by using two diameters: one for the hole and one for the shaft that goes through it.
Let’s look at what concentricity means in CNC machining as well as how it’s measured in both imperial units (inches) and metric units (mm).
Table Of Contents
The Need to Measure Concentricity
Concentricity provides assurance that no part of a manufacturing cycle will land outside manufacturing tolerances while also guaranteeing perfect spacing inside those limits.
As such, concentricity ensures quality and precision during machining and after production when finished parts fit into their intended slots without issue. It’s a vital part of many manufacturing processes, including CNC machining.
Concentricity measurement is the amount of variation from perfect symmetry that may occur when pushing a workpiece through a machine or vice versa.
This deviation can result in waste material, increased costs, and variations in quality for parts coming out on the other end. Concentricity will be measured either axially or radially to determine how much this error occurs along these specific dimensions. However, the measurement process is complicated and, therefore, only gets used in certain circumstances.
When Is Concentricity Used?
In general, concentricity is only used for parts that demand a significant degree of precision to ensure proper functioning. When determining if concentricity is required, the primary question lies in the end-use of the product.
For example, if a tube is required to fit into an opening and a second tube is required to fit within the first, the concentricity is critical to ensure a proper fit and working end product.
On the other hand, if a liquid or a gas is the only thing that will fill the tube, then concentricity is not required as the liquid or gas can conform perfectly to the inside of the tube regardless of any slight deviations.
However, this is not to say that concentricity isn’t crucial for parts that don’t require extremely tight tolerances. It can be beneficial to know how far out of tolerance a part may be, as parts that aren’t machined to tolerance could lead to waste material, increased costs, and variations in part quality.
Other parts will still require a minimum wall thickness to ensure the safe and proper flow of liquids and gasses. If a wall is too thin, high-pressure flow can cause a break or crack in a thin spot, leading to significant issues including improper flow, lost material, dangerous working conditions, and even loss of life.
Concentricity Symbol And Its Interpretation
As shown in Figure 1, this is how concentricity is indicated on a drawing. As you can see, the left-side shaft diameter has a concentricity tolerance of .030 with respect to datum A. What that means is that the axis of the toleranced shaft must lie within a cylindrical tolerance zone of .030 centered on datum axis A.
The figure below shows how to interpret the concentricity tolerance shown in Figure 1.
Position and runout tolerances can be used instead of concentricity tolerance, because measuring concentricity accurately is very difficult and may cost more. That is why concentricity tolerance is generally reserved for very critical parts. If a concentricity tolerance is not met, you will notice wobbling.
How To Measure Concentricity Tolerance?
The concentricity value for a hole or shaft diameter is calculated using two diameters: one for the hole (outer boundary) and one for the shaft (inner line). The larger diameter represents the outer boundary, while the smaller diameter represents an inner line, giving an accurate reading of surface deviation. Measuring in imperial units gives results in inches, while metric measuring displays its results in millimeters.
There are several ways to calculate concentricity, but only three methods matter when considering CNC machined parts:
- Radial error
- Axial error
- Overall accuracy (AX+RA)
These values can either be found pre-calculated or must be measured empirically. All measurements should always come from machine centerlines.
Radial error is the difference in measurement between the center of the feature on one side and that same point on the other.
Axial error is measured by subtracting distance from machine zero to a datum line, then measuring deviation from this line at two points along its length.
Overall accuracy can be calculated with radial and axial errors added together, or it can come pre-calculated as some machines are equipped for complete concentricity verification.
After the concentricity is calculated, it should be checked for quality by comparing it with what was specified in your manufacturing design and inspection procedures.
For example, if you have a specification of ±0.00025 inches (about 0.006 mm), then at least two measurements are required on each side to check that the diameter difference does not exceed this limit. Alternatively, one measurement can be used, provided there has been no significant wear from usage since manufacture.
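A small sketch of that acceptance check in Python, assuming hypothetical paired diameter measurements from the two sides (the names, pairing, and values are illustrative, not taken from any standard or from the article):

LIMIT_IN = 0.00025   # specified limit in inches (about 0.006 mm)

def within_spec(diameters_side_a, diameters_side_b, limit=LIMIT_IN):
    # Check that no pair of opposing measurements differs by more than the limit
    return all(abs(a - b) <= limit
               for a, b in zip(diameters_side_a, diameters_side_b))

# Hypothetical measurements in inches, two per side
print(within_spec([0.50010, 0.50012], [0.50008, 0.50040]))   # False: second pair differs by 0.00028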
If the concentricity exceeds the manufacturing limits then further corrective measures would need to be taken as needed, such as:
- Adjusting keyways so they align better for accurate machining
- Replacement of bearings which may require re-machining with updated tolerances
Here is a general overview of how to measure the concentricity of a typical shaft. Industries now use a CMM (coordinate measuring machine) to scan data points, instead of the mechanical methods used before, to get more accurate results.
We can follow the below steps in order to measure the concentricity of the shaft.
- Lock all degrees of freedom of the part using fixtures, except rotation.
- Plot the controlled surface's outer profile using a CMM (preferred).
- Determine the center point of the plotted surface at various cross-sections.
- Verify whether all of those center points fall within the specified tolerance zone.
All center points should fall under the tolerance zone for qualifying the part as specified on the drawing.
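A minimal sketch of that final verification step, assuming the CMM has already produced a center point (x, y) for each cross-section and that the tolerance zone is a cylinder of the stated diameter around the datum axis at the origin (the function name, data, and zone size are illustrative assumptions):

import math

def centers_within_zone(center_points, zone_diameter):
    # True if every cross-section center lies inside the cylindrical tolerance zone
    radius = zone_diameter / 2.0
    return all(math.hypot(x, y) <= radius for x, y in center_points)

# Hypothetical centers (in inches) measured at several cross-sections along the shaft
centers = [(0.004, 0.002), (0.006, -0.003), (0.001, 0.000)]
print(centers_within_zone(centers, zone_diameter=0.030))   # True: all centers fall inside the .030 zone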
Challenges Of Measuring Concentricity
One way to measure concentricity is using a dial indicator. The measurement process must be done in both directions, with an offset on one side and perpendicular to the bore axis of 0.00025 inches.
The challenge with this method, however, lies in having enough room. You need at least 18 inches (460 mm) from the spindle tip for measuring concentricity. There’s also the concern of not accidentally deflecting or breaking any rotating components such as gears when pushing against them during the measuring process.
Another challenging aspect of this procedure is that it can be successfully carried out only if there has been no significant wear from usage since manufacture; otherwise the part may require re-machining with updated tolerances.
Tubing With Accurate Tolerances
Manufacturing tubing with accurate tolerances is critical for the sustainability of your machining operations and the reliability of the end product you produce. Though slightly complicated, measuring concentricity is one method for ensuring an accurate end product for your customers on products with especially tight tolerances.
Conclusion: Concentricity Tolerance
It’s a fact that the metrology team always tries to avoid measuring concentricity as it is challenging to measure and get an accurate result. Better to switch to position or run-out tolerance. If a part is perfectly round, the runout will be equal to the concentricity.
I hope this article helped you to learn the basics of concentricity tolerance. If you still have any questions, please write in the comment section, and I will try to help you out.
This is a guest post By Christine Evans From the Fictiv team
Christine Evans is the Director of Product Marketing & Content Strategy at Fictiv, an on-demand manufacturing company. Over the past six years, Christine has grown Fictiv’s popular Hardware Guide and Digital Manufacturing Resource Center, with over 2,000 teardowns, DFM guides, and mechanical design articles to help democratize access to manufacturing and hardware design knowledge.
Similar to Python lists, tuples are another standard data type that allows you to store values in a sequence. They can be useful in situations where you want to share data with someone but not allow them to manipulate it: they can still use the data values, but no change they make is reflected in the original data you shared.
In this tutorial, you will see Python tuples in detail:
- You will learn how you can initialize tuples. You will also see the immutable nature of tuples through examples;
- You'll also discover how a tuple differs from a Python list;
- Then, you will see the various tuple operations, such as slicing, multiplying, concatenating, etc.;
- It is helpful to know some built-in functions with tuples and you will see some important ones in this section.
- Finally, you'll see that it is also possible to assign multiple values at once to tuples.
Be sure to check out DataCamp's Data Types for Data Science course, where you can consolidate and practice your knowledge of data structures such as lists, dictionaries, sets, and many more! Alternatively, also check out this Python Data Structures Tutorial, where you can learn more about the different data structures that Python uses.
As you already read above, you can use this Python data structure to store a sequence of items that is immutable (or unchangeable) and ordered.
Tuples are initialized with () parentheses rather than the [] square brackets used for lists. That means that, to create one, you simply have to do the following:
cake = ('c','a','k','e')
print(type(cake))
Remember that type() is a built-in function that allows you to check the data type of the parameter passed to it.
Tuples can hold both homogeneous as well as heterogeneous values. However, remember that once you declared those values, you cannot change them:
mixed_type = ('C',0,0,'K','I','E')
for i in mixed_type:
    print(i,":",type(i))
C : <class 'str'> 0 : <class 'int'> 0 : <class 'int'> K : <class 'str'> I : <class 'str'> E : <class 'str'>
# Try to change the first 0 (at index 1) to 'O'
mixed_type[1] = 'O'

--------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-16-dec29c289a95> in <module>() ----> 1 mixed_type[1] = 'O' # Trying to change 0 to 'O' TypeError: 'tuple' object does not support item assignment
You get this last error message because you can not change the values inside a tuple.
Here is another way of creating a tuple:
numbers_tuple = 1,2,3,4,5
print(type(numbers_tuple))
Tuples versus Lists
As you might have noticed, tuples are very similar to lists. In fact, you could say that they are immutable lists which means that once a tuple is created you cannot delete or change the values of the items stored in it. You cannot add new values either. Check this out:
numbers_tuple = (1,2,3,4,5)
numbers_list = [1,2,3,4,5]

# Append a number to the tuple
numbers_tuple.append(6)
--------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-26-e47776d745ce> in <module>() 3 4 # Append a number to the tuple ----> 5 numbers_tuple.append(6) AttributeError: 'tuple' object has no attribute 'append'
This throws an error because you cannot delete from or append to a tuple but you can with a list.
# Append numbers to the list
numbers_list.append(6)
numbers_list.append(7)
numbers_list.append(8)

# Remove a number from the list
numbers_list.remove(7)

print(numbers_list)
[1, 2, 3, 4, 5, 6, 8]
But why would you use tuples if they are immutable?
Well, not only do they provide "read-only" access to the data values but they are also faster than lists. Consider the following pieces of code:
import timeit
timeit.timeit('x=(1,2,3,4,5,6,7,8,9)', number=100000)
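For comparison, you can time the equivalent list literal the same way (the list timing from the original run is not reproduced here; on most machines the tuple comes out faster):

timeit.timeit('x=[1,2,3,4,5,6,7,8,9]', number=100000)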
What does immutable really mean with regards to tuples?
According to the official Python documentation, immutable is 'an object with a fixed value', but 'value' is a rather vague term, the correct term for tuples would be 'id'. 'id' is the identity of the location of an object in memory.
Let's look a little more in-depth:
# Tuple 'n_tuple' with a list as one of its items
n_tuple = (1, 1, [3,4])

# Items with the same value have the same id
id(n_tuple[0]) == id(n_tuple[1])
# Items with different values have different ids
id(n_tuple[0]) == id(n_tuple[2])
# Trying to append to the tuple itself fails
n_tuple.append(5)

--------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-40-3cd258e024ff> in <module>() ----> 1 n_tuple.append(5) AttributeError: 'tuple' object has no attribute 'append'
We cannot append an item to a tuple; that is why you get the error above. This is why a tuple is termed immutable. But you can always do this:

n_tuple[2].append(5)
n_tuple

(1, 1, [3, 4, 5])
Thus, allowing you to actually mutate the original tuple. How is the tuple still called immutable then?
This is because the id of the list within the tuple remains the same even though you appended 5 to it.
To sum up what you have learnt so far:
Tuples that contain only immutable objects (strings, numbers, and so on) are fully immutable, while tuples that contain one or more mutable objects (lists, for example) can have the contents of those objects changed. However, this is often a debated topic among Pythonistas, and you will need more background knowledge to understand it completely; there are more in-depth articles on the subject if you want to dig further. For now, let's just say tuples are immutable in general.
- You can't add elements to a tuple because of their immutable property. There's no append() or extend() method for tuples.
- You can't remove elements from a tuple, also because of their immutability. Tuples have no remove() or pop() method.
- You can find elements in a tuple, since this doesn't change the tuple.
- You can also use the in operator to check if an element exists in the tuple.
So, if you're defining a constant set of values and all you're going to do with it is iterate through it, use a tuple instead of a list. It will be faster than working with lists and also safer, as tuples contain "write-protected" data.
If you want to know more about Python lists, make sure to check out this tutorial!
Common Tuple Operations
Python provides you with a number of ways to manipulate tuples. Let's check out some of the important ones with examples.
The first value in a tuple is indexed 0. Just like with Python lists, you can use the index values in combination with square brackets [] to access items in a tuple:
numbers = (0,1,2,3,4,5)
numbers
You can also use negative indexing with tuples:
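For instance (a small illustration added here, reusing the numbers tuple from above; negative indices count from the end):

numbers[-1]   # last item: 5
numbers[-2]   # second-to-last item: 4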
While indexing is used to obtain individual items, slicing allows you to obtain a subset of items. When you enter a range that you want to extract, it is called range slicing. The general format of range slicing is:
[Start index (included):Stop index (excluded):Increment]
Increment is an optional parameter, and by default the increment is 1.
# Item at index 4 is excluded
numbers[1:4]
(1, 2, 3)
# This provides all the items in the tuple
numbers[:]
(0, 1, 2, 3, 4, 5)
# Increment = 2
numbers[::2]
(0, 2, 4)
Tip: you can also use a negative increment value to reverse the tuple.

numbers[::-1]

(5, 4, 3, 2, 1, 0)
You can combine tuples to form a new tuple. The addition operation simply performs a concatenation with tuples.
x = (1, 2, 3, 4)
y = (5, 6, 7, 8)

# Combining two tuples to form a new tuple
z = x + y
print(z)
(1, 2, 3, 4, 5, 6, 7, 8)
y = [5, 6, 7, 8]
z = x + y
print(z)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-55-d352c6414a4c> in <module>()
      1 y = [5,6,7,8]
----> 2 z = x + y
      3 print(z)

TypeError: can only concatenate tuple (not "list") to tuple
You can only concatenate values of the same data type; combining a tuple and a list therefore gives you an error.
The multiplication operation simply leads to repetition of the tuple.
x = (1, 2, 3, 4)
z = x * 2
print(z)
(1, 2, 3, 4, 1, 2, 3, 4)
Unlike Python lists, tuples do not have methods such as pop(), due to their immutable nature. However, there are many other built-in functions and methods that work with tuples:
count() returns the number of occurrences of an item in a tuple.

a = (1, 2, 3, 4, 5, 5)
a.count(5)

2
With the len() function, you can return the length of the tuple:

a = (1, 2, 3, 4, 5)
print(len(a))

5
You can use any() to discover whether any element of a tuple evaluates to True. You'll get back True if this is the case; else it will return False.

a = (1,)
print(any(a))

True
Note the comma , in the declaration of the tuple a above. If you do not specify a comma when initializing a single item in a tuple, Python assumes that you merely added an extra pair of brackets (which is harmless), but then the data type is not a tuple. So remember to add a comma when declaring a single item in a tuple.
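To see the difference the comma makes (a quick illustrative check, not part of the original):

a = (1,)
b = (1)
print(type(a))   # <class 'tuple'>
print(type(b))   # <class 'int'> -- just a parenthesized integer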
Now, back to the any() function: in a boolean context, the value of an item is irrelevant. An empty tuple is false; any tuple with at least one item is true.
b = ()
print(any(b))

False
This function might be helpful when you are calling a tuple somewhere in your program and you want to make sure that the tuple is populated.
You can use tuple() to convert another data type to a tuple. For example, in the code chunk below, you convert a Python list to a tuple.

a_list = [1, 2, 3, 4, 5]
b_tuple = tuple(a_list)
print(type(b_tuple))

<class 'tuple'>
While max() returns the largest element in the tuple, you can use min() to return the smallest element of the tuple. Consider the following example:
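A minimal example (illustrative values, not from the original excerpt):

a = (1, 2, 3, 4, 5)
print(max(a))   # 5
print(min(a))   # 1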
You can also use these functions with values of the string data type.

# The string 'Apple' is automatically treated as a sequence of characters.
a = ('Apple')
print(max(a))

p
With the sum() function, you return the total sum of the items in a tuple. This can only be used with numerical values.
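For example (an illustrative sketch):

a = (1, 2, 3, 4, 5)
print(sum(a))   # 15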
To get the elements in sorted order, use sorted(), just like in the following example:
a = (6, 7, 4, 2, 1, 5, 3)
sorted(a)
[1, 2, 3, 4, 5, 6, 7]
It is worth noting that the return type is a list, not a tuple. The sequence in the original tuple a is not changed, and the data type of a remains tuple.
Bonus: Assigning Multiple Values
Something cool that you can do with tuples is to use them to assign multiple values at once. Check this out:
a = (1, 2, 3)
(one, two, three) = a
print(one)

1
a is a tuple of three elements, and (one, two, three) is a tuple of three variables. Assigning a to this tuple of variables assigns each of the values of a to each of the variables: one, two, and three, in order. This is handy when you have to assign a sequence of values stored in a tuple to several variables at once.
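One classic use of this feature (an illustrative aside, not from the original excerpt) is swapping two variables without a temporary:

x, y = 10, 20
# The right-hand side builds the tuple (20, 10), which is then unpacked
x, y = y, x
print(x, y)   # 20 10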
You have made it to the end of this tuple tutorial! Along the way, you learned what Python tuples are, how to initialize them, the most common operations for manipulating them, and the most common functions and methods for getting insight out of these Python data structures. As a bonus, you learned that you can also assign multiple values from a tuple at once.
How is this useful to us? Well, we can actually use this equation to approximate values of the function near point a. Take a look at this graph.
Notice that for x values near point a, the function and the tangent line are relatively close to each other. Because of this, we are able to write that the function is approximately equal to the tangent line near point a. In other words,
f(x) ≈ f(a) + f′(a)(x − a),

where ≈ is the "approximately equal to" symbol. This equation is known as the linear approximation formula. It is linear in the sense that the tangent is a straight line and we are using it to approximate the function. Using this approximation, we are able to approximate values that cannot easily be computed by hand. For example, the square root of 2 or the natural log of 5 can be approximated! One important thing to note is that this approximation only works for x values near point a. If you take an x value far from point a, then the approximation becomes really inaccurate.
Now why don't we take a look at a few examples of finding the linearization of a function, and then look at how to use linear approximation!
Find the linearization L(x) of the function at a
Question 1: Consider the function f(x) = √x.

Let's say that we want to find the linearization of the function at the point a = 4.
To find the linearization L(x), recall that L(x) = f(a) + f′(a)(x − a).
Notice that in order to calculate L(x), we need a, f(a), and f′(a). Afterwards, we're going to plug everything into the formula to find L(x). Hence, I created these steps:
Step 1: Find a
Step 2: Find f(a)
Step 3: Find f’(a).
Step 4: Plug all three into the formula to find L(x)
Let’s follow these steps!
Step 1: Luckily, a = 4 is given to us in the question, so we don't have to look for it.

Step 2: f(a) = f(4) = √4 = 2.

Step 3: Know that the derivative of the square root function is f′(x) = 1/(2√x).

And so plugging in x = a = 4 gives us f′(4) = 1/(2√4) = 1/4.
Since we know a, f(a), and f′(a), we can now plug them into L(x) to find the linearization of f(x): L(x) = 2 + (1/4)(x − 4) = (1/4)x + 1.
So L(x) = (1/4)x + 1 is the linearization of this function at the point a = 4. In addition, it is also the tangent line of the function at the point x = 4.
How to do linear approximation
Remember earlier we said that we could use the equation of the tangent line to approximate values of the function near a? Let's try this with the linearization we found earlier. Recall that L(x) = (1/4)x + 1.
Now, let's say I want to approximate f(4.04). If you were to plug this into the original function, then you would get √4.04. This would be really hard to compute without a calculator. However, using linear approximation, we can say that f(4.04) ≈ L(4.04) = (1/4)(4.04) + 1 = 2.01.
We just approximated f(4.04) without a calculator! Now let's see how close we were to the exact value. Notice that f(4.04) = √4.04 = 2.00997512422... So we are really close! We were only off at the second decimal place!
Now so far, these questions gave us a function and a point to work with. What if none of these were given at all? What if the question only tells us to estimate a number?
Use Linear approximation to estimate a number
Suppose we want to estimate √10. How would we do it? We would need to use the linear approximation

f(x) ≈ f(a) + f′(a)(x − a),

but we don't even have a function and a point to work with. This means we have to make them ourselves. This leads us to do the following steps (the general recipe; here f(x) = √x and a = 9 would be the natural choices, since 9 is the perfect square closest to 10):

Step 1: Choose a function f(x) and a point a so that the number we want is f of some x value near a.

Step 2: Find the linearization L(x) of f at a.

Step 3: Evaluate L(x) at the x value that produces the number.
We know that linear approximation is just an estimation of the function's value at a specified point. However, how do we know whether our estimate is an overestimate or an underestimate? We calculate the second derivative and look at the concavity.
Concave up vs Concave down
If the second derivative of the function is greater than 0 for values near a, then the function is concave up. This means that our approximation will be an underestimate. In other words, L(x) ≤ f(x) for x near a.
Why? Let’s take a look at this graph.
Notice that f(x) is concave upward and the tangent line lies right under f(x). Let's say we were to use the tangent line to approximate f(x). Then the y values of the tangent line are always going to be less than the actual values of f(x). Hence, we have an underestimate.
Now if the second derivative of the function is less than 0 for values near a, then the function is concave down. This means that our approximation will be an overestimate. In other words, L(x) ≥ f(x) for x near a.
Again, why? Let’s take a look at another graph.
Notice that f(x) is concave downward and the tangent line is right above f(x). Again, let’s say that we are going to use the tangent line to approximate f(x). Then the y values of the tangent line are always going to be greater than the actual value of f(x). Hence, we have an overestimate.
So if you ever need to see if your value is an underestimation or an overestimation, make sure you follow these steps:
Step 1: Find the second derivative
Step 2: look at the concavity of the function near point a
Step 3: Confirm that it is an underestimate/overestimate
Let’s take a look at an example:
Question 3: Let f(x) = √x and a = 4. If we use linear approximation to estimate f(4.04), would it be an overestimate or an underestimate?
Step 1: See that f′(x) = 1/(2√x) = (1/2)x^(−1/2), so the second derivative is f″(x) = −(1/4)x^(−3/2).

Step 2: Notice that a = 4, so we want to look at positive values of x near 4. Now look at the second derivative. When x is positive, we see that f″(x) < 0.

Hence, the function is concave down near a.
Step 3: We know that if the function is concave down, then the tangent line will be above the function. Hence, using the tangent line as an approximation will give an overestimated value.
Not only can we approximate values with linear approximation, but we can also approximate with differentials. To approximate, we use the following formula:

dy = f′(x) dx,

where dy and dx are differentials, and f′(x) is the derivative of f in terms of x. Since we are dealing with very small changes in x and y, we are going to use the fact that Δy ≈ dy.

However, most of the questions we do involve setting dx = Δx.

So using these facts will lead us to have:

Δy ≈ dy = f′(x)Δx.
This approximation is very useful when approximating the change in y. Keep in mind that before calculators existed, this was the best approximation available for functions involving square roots or natural logs.
Most of the time you will have to look for f’(x) and Δx yourself. In other words, follow these steps to approximate Δy!
Step 1: Find Δx
Step 2: Find f’(x)
Step 3: Plug everything into the formula to find dy. dy will be the approximation for Δy.
Let’s look at an example of using this approximation:
Question 4: Consider the function y = ln(x + 1). Suppose x changes from 0 to 0.01. Approximate Δy.
Step 1: Notice that x changes from 0 to 0.01, so the change in x would be Δx = 0.01 − 0 = 0.01.

Step 2: The derivative would be f′(x) = 1/(x + 1).

Step 3: Plugging everything in, we have dy = f′(0)Δx = [1/(0 + 1)](0.01) = 0.01.

Hence, Δy ≈ 0.01.
However, most of the time we want to estimate a value of the function, and not the change in its value. Hence we add y to both sides of the equation, which gives us:

Δy + y ≈ dy + y,

which is the same as:

Δy + y ≈ f′(x)Δx + y.

This equation is a bit hard to read, so we are going to rearrange it even more. Let's try to get rid of y and Δy. Notice that Δy + y is basically the same as the value of the function at Δx + x. In other words, Δy + y = f(Δx + x).

Hence, substituting this into our approximation above gives us:

f(Δx + x) ≈ f(x) + f′(x)Δx,

where f(Δx + x) is the value we are trying to estimate. How do we use this formula? I recommend following these steps:
Step 1: Set the number equal to f(Δx+x). Find Δx, x, and f(x).
Step 2: Calculate f’(x)
Step 3: Use the formula to approximate the number
Let’s use these steps for the following question.
Question 5: Use differentials to approximate √10.
Step 1: Compare f(Δx + x) with √10. Since √10 has a square root and 9 is the perfect square closest to 10, let f(x) = √x and x = 9.

See that there is no choice but to let Δx = 1.

Step 2: The derivative gives f′(x) = 1/(2√x), so this implies f′(9) = 1/(2√9) = 1/6.

Step 3: Plugging everything into the formula gives us:

√10 = f(1 + 9) ≈ f(9) + f′(9)Δx = 3 + (1/6)(1) ≈ 3.17.
Hence, we just approximated the number.
One interesting thing to note is that linear approximation and differentials both give the same result for √10.
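As a quick check (reconstructed from the values above, not shown in the original excerpt), the linear approximation route produces the identical expression:

√10 ≈ L(10) = f(9) + f′(9)(10 − 9) = 3 + 1/6 ≈ 3.17,

while the true value is √10 = 3.16227..., so both methods share the same small error.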
Realize that the approximation becomes more and more accurate as we pick x values that are closer to a. In other words, if we take the limit as x → a, the approximation becomes an equality. So, for x near a, we may write

f(x) ≈ f(a) + f′(a)(x − a).
Now we are going to put this aside and use it later, and actually look at l'hopital's rule. We are going to assume a couple of things here. Suppose that f(x) and g(x) are continuously differentiable at a real number a, f(a) = g(a) = 0, and g′(a) ≠ 0. Then,

lim(x→a) f(x)/g(x) = f′(a)/g′(a).
Now notice that we can apply the approximation that we derived earlier right here. So now

lim(x→a) f(x)/g(x) = lim(x→a) [f(a) + f′(a)(x − a)] / [g(a) + g′(a)(x − a)] = lim(x→a) [f′(a)(x − a)] / [g′(a)(x − a)] = f′(a)/g′(a),

since f(a) = g(a) = 0.
Now, instead of writing f′(a) and g′(a), we can apply limits as x → a (because we know f and g are differentiable). So

lim(x→a) f(x)/g(x) = lim(x→a) f′(x)/g′(x).
We always want to apply l'hopital's rule when we encounter indeterminate limits. There are two types of indeterminate forms it handles. These indeterminate forms are:

0/0 and ±∞/∞
A lot of people make the mistake of using l’hopital’s rule without even checking if it is an indeterminate limit. So make sure you check it first! Otherwise, it will not work and you will get the wrong answer. Here is a guide to using l’hopital’s rule:
Step 1: Evaluate the limit directly.
Step 2: Check if it is one of the indeterminate forms. If it is, go to step 3.
Step 3: Use l’hopital’s rule.
Step 4: Check if you get another indeterminate form. Repeat Step 3 if you do.
Let’s take a look at a few examples using these steps.
Question 6: Evaluate the limit
Step 1: Evaluating the limit directly gives us
Yes, it is one of the indeterminate forms.
Applying l’hopital’s rule we have:
One is not an indeterminate form, so we are done and the answer is 1.
Now that question was a little bit easy, so why don’t we take a look at something that is a bit harder.
Question 7: Evaluate the limit
Step 1: Evaluating the limit directly we see that:
This is an indeterminate form, so go to step 3.
Applying l’hopital’s rule we have
This is another indeterminate form, so we have to go back to step 3 and apply l'hopital's rule again.
Applying l’hopital’s rule again we have:
Infinity is not an indeterminate form, so we are done and the answer is ∞.
In this section, we will learn how to approximate unknown values of a function, given known values, using linear approximation. Linear approximation is also called tangent line approximation, because what we are really working with is the idea of local linearity: if we zoom in closely on a point along a curve, we see a tiny line segment whose slope is the same as the slope of the tangent line at that point.
Intro: Linearization of f at a: L(x) = f(a) + f′(a)(x − a)
Consider the function f(x)=√x.
Definition . The term feudalism refers to an economic, political, and social system that prevailed in Europe from about the ninth century to the fifteenth century. With the chronic absence of effective centralized government during the Middle Ages, kings and local rulers granted land and provided protection to lesser nobles known as vassals. In return, these vassals swore oaths of loyalty and military service to their lords. Peasants known as serfs were bound to the land and were subject to the will of their lords.
European Medieval Feudalism . European medieval feudalism has become the foremost example of an interrelationship between a social class system and an economy. Having been influenced, however, by previous cultures and their economies, especially those that combined agricultural and exchange bases, the medieval economic environment cannot be understood through exclusive examination of the feudal system. The backdrop of Greek and Roman civilization and the fundamental need for survival formed the foundation for a far more heterogeneous medieval economic culture, useful for sustenance and for social organization. To these two ends, traders, artisans, peasants, churchmen, and the nobility created an economy that enveloped Europe’s contemporary medieval population. It was comprised of many different elements: trade alliances; exchange methods, both interest bearing and interest free; a manorial system combined with a monetarized vassalage (nobles avoiding military service by paying their overlords); professional guilds; agriculturally self-sufficient monasteries; urban communes; and tax-based kingdoms, some of which were transformed into representational fiscal monarchies. The composite European medieval economy, derived from these many diverse elements, departed radically from economies of earlier Western cultures.
General Characteristics . No one social class system or economic form was realized for Europe over the course of the whole Middle Ages. A postmedieval new economy, often identified as capitalism, was merely in formation and would not be considered all-enveloping for centuries to come. Undeniably, one element of the medieval world was the traditional economy of land and military service, leading to a feudal-based social-class system; the other was an urban society where merchants and artisans undertook trade and commerce in an economy based on money, or capital. For the urban environment, merchants, artisans, and customers formed the core of the society because towns served as centers for the individuals who lived and worked there. They saw manufacture as the most important endeavor, to provide goods for sale and purchase in the local mercantile economy. Furthermore, local manufacture was to have an impact in other areas, such as regional fairs, port cities, and eventually long-distance trade destinations.
Urban Economy . During the Middle Ages, the economy did not become fully urban. As medieval towns grew into cities and frequently dominated the abutting countryside, the agricultural economy kept itself at an independent distance, was rarely stimulated by market supply and demand, and remained relatively ignorant of means of economic progress. The late medieval nobility complained that changes in the workforce had violated its source of livelihood, virtual free labor assumed since the beginnings of the feudal economy, and set forth in many feudal codes of law which had fixed the purpose of the peasantry. The rural economy continued nonetheless to be the safer source of sustenance for many people, who saw in its connection to the soil the chance for the family to survive in good and bad years. The fact that the vast majority of the medieval population was rural overpowered some towns’ premature bid for communal independence, and the urban environment was vulnerable to the vagaries of agricultural provisioning. In the later fourteenth century the peasantry was recast as a political force but remained fundamentally an economic tool, as it had been for the whole of the Middle Ages.
Christian Church . During the same period as feudalism and urban growth, the Christian Church was expanding and exploring new forms of social and economic expression. Established in Rome in the first century of the Christian era, the Christian faith had arrived in Europe during the Roman Empire and was spread throughout Western Europe during the first millennium, as missionaries traveled to and beyond the present-day British Isles, Germany, France, and Spain. Medieval clergymen wrote many works, among them some in which they discussed two sets of economic and social ideals, occasionally offering guidance as to how to achieve them. The ascetic approach was for men and women planning to be monks and nuns, but it was also for young women, widows, and the devoted. The more worldly approach was for men and women leading integrated, secular lives.
Modern Study . In 1776 Adam Smith took the idea for the viability of a nation and wrote “an elementary treatise on that very extensive and difficult science,” political economy, presenting his ideas in the Inquiry into the Nature and Causes of the Wealth of Nations to explain how “those [obstacles to the progress of national prosperity] which arose from the disorders of the feudal ages, tended directly to disturb the internal arrangements of society.” Part of his pioneering work, such as that devoted to “what the circumstances are, which, in modern Europe, have contributed ... to encourage the industry of towns, at the expence of that of the country,” has received less attention in more-recent times. Nevertheless, his work is still considered so significant as to have defined the beginnings of the science of economics. To this perspective have since been added, however, studies focused specifically on the economy of the Middle Ages: trade, commercial production and services, economic structure, and social organizations. Though older and less well documented than the eighteenth century of Smith, the Middle Ages offers equal opportunity for comprehensive, innovative, and perhaps unanticipated analyses of its economy and social class system.
"Feudal Society." World Eras. . Encyclopedia.com. (May 10, 2019). https://www.encyclopedia.com/history/news-wires-white-papers-and-books/feudal-society
"Feudal Society." World Eras. . Retrieved May 10, 2019 from Encyclopedia.com: https://www.encyclopedia.com/history/news-wires-white-papers-and-books/feudal-society
Encyclopedia.com gives you the ability to cite reference entries and articles according to common styles from the Modern Language Association (MLA), The Chicago Manual of Style, and the American Psychological Association (APA).
Within the “Cite this article” tool, pick a style to see how all available information looks when formatted according to that style. Then, copy and paste the text into your bibliography or works cited list.
Because each style has its own formatting nuances that evolve over time and not all information is available for every reference entry or article, Encyclopedia.com cannot guarantee each citation it generates. Therefore, it’s best to use Encyclopedia.com citations as a starting point before checking the style against your school or publication’s requirements and the most-recent information available at these sites:
Modern Language Association
The Chicago Manual of Style
American Psychological Association
- Most online reference entries and articles do not have page numbers. Therefore, that information is unavailable for most Encyclopedia.com content. However, the date of retrieval is often important. Refer to each style’s convention regarding the best way to format page numbers and retrieval dates.
- In addition to the MLA, Chicago, and APA styles, your school, university, publication, or institution may have its own requirements for citations. Therefore, be sure to refer to those guidelines when editing your bibliography or works cited list. |
Mortality statistics are by-products of the legal process of death registration [see Vital Statistics]. These data serve various purposes, such as estimating a component of population growth and preparing population projections; delineating health problems, planning public health programs, and assessing health progress; and studying the natural history of disease.
The absolute numbers of deaths are useful as a direct measure of the attrition of the population due to deaths. However, for analytical purposes, death data are generally used in the form of ratios. Properly computed, a death rate expresses the force of mortality on the population at risk.
The crudest form of death rate is the total or general death rate. This is the number of deaths occurring in a particular period of time, usually a year, for each 1,000 persons in the area or population. Because the general death rate (often called the crude death rate) is the mean of the death rates by age, sex, color, and other demographic variables weighted by the demographic composition of the population, an area with a young population, for example, would have a low general death rate, and an area with an old population a high general death rate, even if the set of age-specific death rates for the two areas were the same.
In order to take into account the differential mortality by age, sex, or other demographic variable, death rates are usually computed for a specific population class or group. The age-specific death rate is an example of this type of rate. In some cases, comparisons are based on death rates adjusted for differences in population composition. If the rate is standardized for differences in the age composition of two populations, it is called an age-adjusted death rate.
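As a concrete sketch of these two computations (all numbers hypothetical, not from the article; direct standardization is one common method of age adjustment):

# A minimal sketch of crude vs. age-adjusted death rates.
# All numbers below are hypothetical, chosen only to illustrate the arithmetic.

deaths_by_age = {"0-14": 50, "15-64": 300, "65+": 1650}
pop_by_age = {"0-14": 30_000, "15-64": 60_000, "65+": 10_000}
standard_shares = {"0-14": 0.25, "15-64": 0.60, "65+": 0.15}  # standard population's age mix

# Crude (general) death rate: all deaths over the whole population, per 1,000
crude = sum(deaths_by_age.values()) / sum(pop_by_age.values()) * 1000

# Age-specific rates, then weight them by the standard age distribution
specific = {age: deaths_by_age[age] / pop_by_age[age] * 1000 for age in pop_by_age}
adjusted = sum(standard_shares[age] * specific[age] for age in specific)

print(f"crude: {crude:.1f} per 1,000; age-adjusted: {adjusted:.1f} per 1,000")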
A special kind of death rate is the life table death rate. This is a hypothetical set of derived death rates based on certain assumptions of mortality in a stationary living population unaffected by migration or births. One function of the life table which is of interest is the expectation of life. This is the average number of years that will be subsequently lived by a group of persons who have attained a certain age. The expectation of life at birth is the average age at death of all the 100,000 who start life together in the life table cohort. Another important function is the survival rate, which is the probability that persons of a particular age will survive for a particular period of time, usually a calendar year [see Life Tables].
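A toy life-table calculation (hypothetical death probabilities, cohort truncated at age 80 for brevity) shows how the expectation of life at birth falls out of the survival bookkeeping described above:

# Follow a cohort of 100,000 births through hypothetical one-year death
# probabilities qx and estimate expectation of life at birth as total
# person-years lived divided by the starting cohort size.

qx = [0.02] + [0.001] * 9 + [0.002] * 40 + [0.02] * 30  # ages 0-79, illustrative
alive = 100_000.0
person_years = 0.0
for q in qx:
    deaths = alive * q
    person_years += alive - deaths / 2  # assume deaths fall mid-year on average
    alive -= deaths

# Truncating at age 80 understates e0; survivors past 80 would add more years.
e0 = person_years / 100_000
print(f"expectation of life at birth (toy cohort): {e0:.1f} years")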
An important aspect of mortality statistics relates to data derived from the medical information reported on death certificates. Despite their limitations, statistics on causes of death have contributed a great deal in the past to the field of public health [see Public Health].
The present statistics on causes of death relate to the “underlying cause of death,” which is the term used to denote the disease or injury that initiated the train of events leading directly to death; in the case of accident or violence, it may also include the circumstances which produced the fatal injury. These statistics have done good service for public health in the past; but, with the lessening importance, at least in the United States, of the acute infectious diseases as compared with the chronic noninfectious diseases, the data have become less and less adequate. The selection of a single disease entity as the “underlying cause” poses a real problem in deaths involving chronic diseases, since in such cases it is frequently difficult, if not impossible, to identify a single underlying cause.
International comparison of cause-of-death statistics also presents a problem. In addition to differences arising from incompleteness of death registration in various countries, there are variations in proportion of deaths attended by a physician, in diagnostic acumen of the clinician in attendance, and in the recording of diagnostic information. International comparisons are further complicated by differences in medical concepts of diseases and in the methods of classifying causes of death. In fact, strict international comparability of cause-of-death statistics is at present a virtual impossibility, and too much significance should not be attached to small differences in rates between countries.
The estimated annual death rate for the world population is 17 per 1,000 population for the period 1958–1962. As might be expected, the death rate varies over a wide range in different parts of the world (see Table 1).
If differences in the age composition of the population in various parts of the world were taken into account, the mortality differential would undoubtedly be much greater than that indicated by the crude death rates shown here. Unfortunately, the data needed to compute age-adjusted death rates are not available for the various regions of the world. In fact, one of the serious problems in international mortality studies is the lack of adequate mortality statistics for a large part of the world. By and large, reliable data are available only for the countries of northern and western Europe, North America, and Oceania. With a few notable exceptions, data for countries in other regions are either very incomplete or nonexistent.

|Table 1 — Population estimates (1962, in millions), birth rates, and death rates (annual averages, 1958–1962, per 1,000 population) for major regions of the world|
|(Table data not preserved.)|
|Source: Computed from data in Demographic Yearbook 1963, p. 142. Copyright © United Nations 1964. Reproduced by permission.|
The estimated birth rate for the world population is a little more than twice the estimated death rate. The natural rate of population increase (the difference between the birth and death rates) is highest in the Latin American countries, followed by the countries on the African continent and in Asia. Traditionally, a major part of annual population growth comes from the contribution made by births, but one of the significant demographic developments in the recent postwar period is the sharp acceleration in population growth due to the rapid decline in mortality. Virtually all countries, and more particularly the developing countries, experienced unprecedented declines in mortality while their birth rates remained at a high level.
The rate of decline in world mortality following World War II was dramatic, but the death rate began to level off in the 1950s in a number of countries, such as the United States, England and Wales, Sweden, Norway, Finland, the Netherlands, Japan, and Chile. Intensive studies of the mortality trend for the United States (U.S. Dept. of Health, Education, and Welfare ... 1964a), Chile (U.S. Dept. of Health, Education, and Welfare ... 1964b), and England and Wales (U.S. Dept. of Health, Education, and Welfare ... 1965) indicate that a large part of the acceleration in the decline of general mortality was due to the large reduction in the death rate for infective and parasitic diseases as a result of antimicrobial therapy. In the United States, for example, the death rate for infective and parasitic diseases reached a low level, and by the mid-1950s it was no longer significantly influencing the general mortality trend. At the same time, the mortality trend for chronic diseases and for violence was either rising, remaining unchanged, or declining very slowly. This combination of circumstances caused a marked deceleration in the downward trend of the general death rate.
Whether this change in the mortality pattern is transient or permanent is difficult to say. It is obviously not possible for the death rate to decline indefinitely. Further reductions in mortality appear possible in the United States, but it does not seem likely that large declines will occur until a major breakthrough is made in the prevention of deaths from chronic diseases. On the other hand, if the age-specific death rates in the United States were to decline to levels already achieved by several other countries of low mortality, the crude death rate for the United States in 1960 would have been 7.3 per 1,000 population, as compared with the recorded death rate of 9.5 per 1,000 population. For males the expected death rate would have been 7.8, as compared with the recorded rate of 11.0 per 1,000 population. For females the corresponding rates would have been 6.9, as compared with 8.1 per 1,000 population.
The leveling off of the death rate as it reaches its irreducible minimum is readily understandable. However, there seems to be no ready explanation for the change in mortality trends at different levels. For example, the death rate for nonwhites in the United States is still considerably higher than that for whites. Yet the rate of decline of the mortality trend for nonwhites has slowed down in the same manner as that for the whites.
National death rates are also becoming stabilized at different levels. For example, the Scandinavian countries and the Netherlands have achieved much lower age-specific death rates than the United States, whereas the age-specific death rates for Japan and Chile are higher. Yet the death rates appear to be leveling off in all of these countries.
The experience of Chile appears to have important implications for the developing countries. It seems clear that the knowledge and technical means are available for securing significant reductions in the death rate even in developing countries. The institution of mosquito and fly control and/or the widespread introduction of antibiotics for therapeutic purposes will have an immediate impact upon the death rate. However, it would appear that a point of diminishing returns will soon be reached and the decline in mortality come to a halt. Accordingly, the study of mortality trends in Chile points to the importance of planning health activities as a part of the social and economic development of the country (U.S. Dept. of Health, Education, and Welfare ... 1964b).
Reference was made earlier to the unsatisfactory nature of the crude death rate, which is significantly affected by the age composition of the population to which it refers. Death rates computed for various age groups, as in Table 2, are, of course, free of this problem.
|Table 2 — Death rates by age group: United States, 1962|
|Under 1 year||2,530.1|
|85 and over||20,510.0|
|(Intermediate age groups not preserved.)|
|* Per 100,000 population.|
|Source: U.S. Dept. of Health, Education, and Welfare, Public Health Service, National Vital Statistics Division 1964, pp. 1–5.|

As indicated by these age-specific death rates, infancy is the most critical period of life, even for a developed country like the United States. Although data are not available to demonstrate this point, it would not be surprising if one-quarter or more of all live births in many of the developing countries fail to survive the first year of life.
For the developed countries it is possible to assess the progress made in the reduction of the infant mortality rate. A significant decline in infant mortality has occurred, and remarkably low rates have been achieved by the Netherlands (15.3 per 1,000 live births in 1962), Sweden (15.8 per 1,000 live births in 1961), and Norway (17.9 per 1,000 live births in 1961). A recent study (Shapiro & Moriyama 1963) of the international infant mortality trends indicates that the rate of decline is slowing up in many countries of low mortality.
From a relatively high death rate at infancy, the risk of death drops to a minimum at age ten or so. From then on, there is an increase in mortality with increasing age. This is the typical cross-sectional pattern of mortality in countries of low mortality. However, there are a number of countries where the infant mortality rates are lower than those for the United States. Except in extreme old age, lower death rates are also found at other ages in other countries of low mortality.
In countries of low mortality, most of the deaths occur in the older age groups. In the developing countries, by contrast, it would not be unusual for more than half of all deaths to occur among children under five years of age. Under these conditions, it is obvious that the expectation of life at birth could not be very great.
The Biblical life span of “three-score years and ten” has become the norm for a number of countries. In Sweden, Norway, Denmark, the Netherlands, and Israel the life expectancy at birth is 70 years or more for both males and females. In other countries, such as the United States, Canada, Czechoslovakia, France, England and Wales, Australia, and New Zealand, the average length of life of 70 years or more for the total population has been attained only because of the favorable mortality experience of females. For example, the average expectation of life at birth in the United States for 1962 is 73.4 years for females and 66.8 years for males. If up-to-date life tables were available for all countries, it is probable that a few other countries could be added to the list above.
The world situation with regard to longevity cannot be described with any precision. However, it seems clear that longevity is at present greatest in the northern and western European countries, Canada and the United States on the North American continent, and Oceania. The average life expectancy is less favorable in the central, eastern, and southern European countries. Still lower on the scale are the Latin American countries. The average expectation of life for a large part of the Asian population is low, although an average length of life of 60 years or more may be found in such Asian countries as Japan, Nationalist China (Taiwan), and Ceylon. Life table values for many of the countries on the African continent are not available. The question in a good part of Africa, especially in the southern and tropical countries, is not longevity but survival through childhood.
The increase in longevity of the population in the developed countries has been considerable. For example, in the period 1900–1902 the average expectation of life at birth in the United States was 48 years for males and 51 years for females. In a period of some sixty years, the male population gained about 19 years in life expectancy at birth, while the gain for females was about 22 years.
The postwar increase in life expectancy has been spectacular for some countries. For example, the expectancy of life at birth in Ceylon increased from 46.8 years in 1945–1947 for males to 60.3 years in 1954. For females, the corresponding figures were 44.7 years and 59.4 years, respectively. The average annual gain in longevity in Ceylon, as compared with the experience in the United States, is therefore roughly five times greater.
Almost without exception, the mortality among the married 20 years and over is lower, age for age, than the corresponding death rates for the single, widowed, or divorced. This is true for both males and females. Beyond this, the pattern of mortality differentials by marital status varies somewhat by country.

|Table 3 — Ratio of death rates of unmarried persons to death rates of married: Sweden, 1959|
|(Table data not preserved.)|
|* Too few cases for significant comparison with married.|
|Source: Computed from data in Demographic Yearbook 1961, pp. 592–593. Copyright © United Nations 1962. Reproduced by permission.|
In countries like Sweden, the mortality among divorced males is higher by far than the corresponding rates for bachelors or widowers (see Table 3). For females, the differences in death rates between the single, widowed, and divorced are not so great as those observed for males. The higher mortality among the single has been explained on the basis of selection; that is, those who never marry because of some serious physical impairment or chronic disease have a higher risk of mortality than the married. The single may therefore include among their number a higher proportion of the poorer mortality risks than those who marry. The higher mortality among the widowed has been attributed to the high association of diseases from which both marital partners die or to a less favorable economic situation that they both share.
One of the problems in the interpretation of death rates by marital status is the fact that the informant may not always know the civil status of those living alone. Also, there is the problem of the lack of correspondence between the marital status reported on death certificates and on the census enumeration schedules. Because the married population constitutes a large part of the total population, errors in reporting of marital status affect the data for the married much less than the data for the single, widowed, and divorced.
One of the significant constants of mortality statistics in countries of low mortality is the favorable experience among females as compared with that of males. Examination of death rates by sex for a recent year indicates large sex differentials in mortality for the United States and Canada (36 and 38 per cent, respectively) and for New Zealand and Australia (23 and 26 per cent, respectively). In the countries of western Europe the male mortality exceeded the death rate for females by 10 to 20 per cent.
The death rate for females is lower than that for males in each age group from birth to the end of the life span in virtually every country of low mortality. Even in the developing countries the mortality experience among females is generally favorable as compared with males, except in the child-bearing ages. Maternal mortality is a significant public health problem in these countries, as it was in the developed countries some forty or fifty years ago.
It is not clear why female mortality is consistently lower than that among males. One obvious explanation is the biological difference between the sexes; however, biological differences do not appear to account for much of the sex differential in mortality. A good part of the difference in the death rate appears to be due to the increasing mortality among males or to the fact that the death rate among females is declining faster than that among males. Whatever the explanation for this phenomenon, the continued occurrence of the large sex difference in mortality as recorded in a number of countries will have important consequences in terms of the sex composition of the population of the future, especially in the older ages.
At the turn of the century, infective and parasitic diseases constituted the major public health problems in the world population. Pneumonia and influenza, tuberculosis, diarrhea and enteritis, and the childhood diseases were the principal causes of death in 1900, even in economically developed countries.
The large reduction in mortality since 1900 has been achieved primarily through control of the infective diseases. Although influenza and pneumonia still remain significant public health problems, mortality from the chronic diseases has come to the forefront. The results of the review of causes of death in selected countries of North America, Europe, and Oceania in 1961 are summarized in Table 4.

|Table 4 — Death rate and proportionate mortality for the five leading causes of death: selected countries* of North America, Europe, and Oceania, 1961|
|Leading causes of death||Average death rate per 100,000 population||Percent of total deaths|
|Vascular lesion of central nervous system||132||13|
|Influenza and pneumonia||37||4|
|(Remaining rows not preserved.)|
|* Australia, Austria, Belgium, Canada, Denmark, Finland, France, German Federal Republic (including West Berlin), Hungary, Italy, Netherlands, New Zealand, Norway, Portugal, Republic of Ireland, Sweden, United Kingdom, and United States.|
|Source: Compiled from “The Ten Leading Causes ...” 1964a.|
From Table 4 it may be seen that more than 60 per cent of all deaths in the developed countries are attributable to the cardiovascular diseases and to malignant neoplasms. Although accidents rank fourth, they constitute the leading cause of death in the age groups 1 to 44 years; malignant neoplasms are the most frequent cause of death in the age group 45 to 64 years; and heart disease the principal cause of death in the population 65 years and over. Similar data for selected countries of Africa, South and Central America, and Asia for 1960 are shown in Table 5.
|Table 5 — Death rate and proportionate mortality for the five leading causes of death: selected countries* of Africa, South and Central America, and Asia, 1960|
|Leading causes of death||Average death rate per 100,000 population||Percent of total deaths|
|Gastritis, duodenitis, enteritis, and colitis||95||9|
|Influenza and pneumonia||67||7|
|(Remaining rows not preserved.)|
|* Mauritius, United Arab Republic, Chile, Colombia, Costa Rica, Guatemala, Mexico, Panama, Trinidad and Tobago, Ceylon, Israel (Jewish population), and Japan.|
|Source: Compiled from “The Ten Leading Causes ...” 1964b.|

The number of countries in Africa, Asia, and South and Central America that met the criteria for inclusion in the World Health Organization compilations is limited, and the 12 countries that were selected do not, by any means, represent the mortality problems in the vast population of these continents. Although gastritis, duodenitis, enteritis, and colitis were the leading causes of death for half of the selected countries, their average death rate and the proportionate mortality are relatively low. A principal cause of death that accounts for only about 9 per cent of all deaths and five leading causes that constitute no more than one-third of all deaths do not suggest any major health problems. Actually, the averages conceal some of the problems indicated by the data for individual countries. For example, the death rate for gastritis, duodenitis, enteritis, and colitis was 700 per 100,000 population in the United Arab Republic, and 36 per cent of all deaths were charged to these intestinal infections.
Adequate mortality statistics for these regions would delineate existing public health problems more clearly. If such statistics were available, it is likely that other infective diseases, such as tuberculosis, dysentery, typhoid, and measles; parasitic diseases, such as schistosomiasis and malaria; and possibly malnutrition and other dietary deficiency diseases would figure prominently as causes of death.
With the availability of knowledge and means for controlling most of the important infective and parasitic diseases, prospects are good for rapid reduction in mortality from these diseases. The resultant increase in survival of the population will bring new problems to the regions affected. These are the problems of the chronic noninfectious diseases with which the developed countries are now struggling.
Iwao M. Moriyama
Campbell, Hubert 1965 Changes in Mortality Trends: England and Wales, 1931-1961. U.S. National Center for Health Statistics, Vital and Health Statistics, Series 3, No. 3. Washington: Government Printing Office.
Demographic Yearbook 1961. 13th ed. 1961 New York: United Nations. → Special Topic: Mortality Statistics. Prepared by the Statistical Office of the United Nations in collaboration with the Department of Social Affairs.
Demographic Yearbook 1963. 15th ed. 1963 New York: United Nations. → Special Topic: Population Census Statistics II. Prepared by the Statistical Office of the United Nations in collaboration with the Department of Social Affairs.
Shapiro, S.; and Moriyama, I. M. 1963 International Trends in Infant Mortality and Their Implications for the United States. American Journal of Public Health and the Nation’s Health 53, no. 5:747-760.
The Ten Leading Causes of Death for Selected Countries in North America, Europe and Oceania, 1954–1956. 1964a World Health Organization, Rapport epidemiologique et demographique 17:54–112.
The Ten Leading Causes of Death for Selected Countries in Africa, South and Central America and Asia, 1954–1956, 1960, 1961. 1964b World Health Organization, Rapport epidemiologique et demographique 17:118–152.
U.S. Dept. of Health, Education, and Welfare, Public Health Service, National Center for Health Statistics 1964a The Change in Mortality Trend in the United States. Prepared by Iwao M. Moriyama. National Center for Health Statistics, Series 3, No. 1. Washington: Government Printing Office.
U.S. Dept. of Health, Education, and Welfare, Public Health Service, National Center for Health Statistics 1964b Recent Mortality Trends in Chile. National Center for Health Statistics, Series 3, No. 2. Washington: Government Printing Office.
U.S. Dept. of Health, Education, and Welfare, Public Health Service, National Center for Health Statistics 1965 Changes in Mortality Trends in England and Wales, 1931–1961. Prepared by H. Campbell. National Center for Health Statistics, Series 3, No. 3. Washington: Government Printing Office.
U.S. Dept. of Health, Education, and Welfare, Public Health Service, National Vital Statistics Division 1964 Vital Statistics of the United States 1962. Volume 2: Mortality. Part A. Washington: Government Printing Office.
"Mortality." International Encyclopedia of the Social Sciences. . Encyclopedia.com. (February 27, 2017). http://www.encyclopedia.com/social-sciences/applied-and-social-sciences-magazines/mortality
"Mortality." International Encyclopedia of the Social Sciences. . Retrieved February 27, 2017 from Encyclopedia.com: http://www.encyclopedia.com/social-sciences/applied-and-social-sciences-magazines/mortality
Genes are the ultimate time travelers. They transcend the bounds of time by hitching a ride in sexually reproducing species such as humans, but then discard the human body later in life as if it was a used car that had passed its warranty period. Once immortality became a fundamental property of deoxyribonucleic acid (DNA), at some time in the distant history of life on earth, the carriers of these genetic codebooks for constructing living organisms, including humans and other sexually reproducing species, became disposable. The timing with which death occurs—both for individuals, as measured by their lifespan, and collectively for populations, as measured by life expectancy—defines the concept of mortality.
Although it is not possible to know with certainty when any single individual will die, it is known with surprising accuracy when death occurs for members of a population when viewed as a group. In humans and a large number of other species, scientists have demonstrated that the risk of death is highest just after birth, declines to its lowest point near the time of sexual maturation (puberty), and then increases exponentially until extreme old age.
Why is this age pattern of death so common among sexually reproducing species? Early in life, death rates are high because newborns are subject to mortality risks from infectious and parasitic diseases, predation, and congenital malformations. Puberty is the time of lowest mortality because, from an evolutionary perspective, this is the moment at which the investment in the next generation has reached its maximum. This implies that the bodies of humans and other living things are constructed with the ultimate goal of reproduction in mind (i.e., the passage of genes from one generation to the next), so this time of life is the most highly protected of all times in the life span. Following puberty, the risk of death from intrinsic (aging-related) causes increases exponentially because of a combination of wear and tear to the physical components of the body; accumulated damage to DNA, cells, tissues, and organs; highly efficient but nevertheless imperfect maintenance and repair mechanisms; and the presence of lethal inherited genes that "leak" into the gene pool of every generation.
Scientists have demonstrated that the rate of increase in the death rate following puberty is often calibrated to the length of each species' reproductive window, which is the average duration of time that elapses between puberty and menopause. In other words, animals like mice that experience puberty within weeks after birth tend to age much more rapidly and live considerably shorter lives than sea turtles, which do not experience puberty until about fifty years after birth. As a result of these differences in the rate of aging across species, one day in the life of a human is, in terms of percentage of life span, equivalent to about one week in the life of a dog and one month in the life of a mouse.
Death is an event that can and does happen at every conceivable age in a genetically diverse population. The death rate (also referred to as the mortality rate) for a population may be calculated in its simplest form as the number of deaths that occur in a given year divided by the population at risk of death, the quotient of which is then multiplied by a standard number (such as one thousand) to give the statistic more intuitive meaning. For example, in the United States in 1995 there were 2.3 million deaths and 262.8 million people alive in the middle of that year. This means that the crude death rate for the United States in 1995 was 8.8 deaths per thousand people [(2.3 / 262.8) × 1,000 = 8.8].
The various ages at which death occurs provide useful information about the longevity attributes of a population. For example, if one were to imagine a hypothetical group or cohort of one hundred thousand babies born in any given calendar year, and one applied to those babies throughout their lives the death rates that prevailed at every age in that year, it would be possible to plot on a graph the hypothetical ages at which all of the babies would have died. This is known as the distribution of death for a population. Although the distribution of death in 1900 was characterized by high mortality early in life, for those who lived beyond the perilous early years, the modal age at death for females was about 73 years of age (see Figure 1). A comparable distribution of death was observed for males in that year.
The opposite of the distribution of death is a plot of the number of people that are expected to survive from one year to the next. This is known as a survival curve. The survival curve is another useful tool for examining age patterns of death and survival in a population because it provides summary statistics that are easy to interpret and understand. For example, from the survival curve for U.S. females born in 1900 it may be determined that, based on the death rates that prevailed in that year, 58 percent would have been expected to survive to the age of fifty (see Figure 2). By contrast, an estimated 95 percent of the female babies born in the United States in the year 2000 are expected to survive to at least their fiftieth birthday. This demonstrates the dramatic improvements in survival that occurred at younger ages during the twentieth century. In 1900 in the United States the survival curve for females illustrates that the median age at death (the age at which 50 percent of the babies born in that year will still be alive) was fifty-eight years of age. By the year 2000 the median age at death for females in the United States was eighty-three years. As shown in Figure 2, based on death rates observed in the U.S. in 2000, an estimated 86 percent of all the female babies born will survive at least to their sixty-fifth birthday—a dramatic improvement that occurred during the twentieth century. Both curves provide actuaries, demographers, and other scientists with valuable information that can be used to compare the same population across time, or different populations during the same time period.
The Gompertz Equation and its relationship to mortality
In 1825 an English actuary by the name of Benjamin Gompertz made an important discovery. Gompertz's job as an actuary for an insurance company was to calculate the risk of death for people of different ages in order to determine how much to charge for life insurance. (The exact same kinds of calculations are made by actuaries today.) Using data from various parts of England, where he lived, Gompertz discovered that the risk of death increased in a predictable fashion with age. His calculations led him to conclude that the death rate doubled about every ten years between the ages of twenty and sixty, which was the primary age range for people purchasing insurance annuities at that time. The mathematical formula Gompertz used to predict this exponential rise in mortality after age twenty has become known colloquially as the Gompertz equation, and it has remained an integral part of mortality computations conducted by actuaries and demographers ever since the early nineteenth century.
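In its standard modern form (the symbols here are the conventional ones; the article itself gives only the verbal description), the Gompertz equation says the death rate rises exponentially with age:

μ(x) = A·e^(Bx),

where μ(x) is the death rate at age x, A sets its starting level, and B sets how fast it grows. The rate doubles every ln 2 / B years, so Gompertz's observed doubling time of about ten years corresponds to B ≈ ln 2 / 10 ≈ 0.069 per year of age.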
What made Gompertz's discovery so interesting was not just the fact that he devised a formula that accurately portrayed the dying-out process of humans, but that he and others believed that the same formula could be used to characterize death rates for other species. In fact, for more than a hundred years following Gompertz's discovery, numerous investigators from a wide range of scientific disciplines speculated that the Gompertz formula described a fundamental principle of death for all living things, a principle that became known as the Universal Law of Mortality. Recently, scientists have used death statistics for such species as humans, mice, and dogs to demonstrate that there is evidence to support the idea that age patterns of death occur in a consistent way across species, despite the fact that there is a wide variation in the observed lifespans of different species. In other words, there is scientific evidence to suggest that Gompertz was right—there appears to be a nearly universal age pattern to the dying out of living things.
The biology of life span
Why do people and other living things endure as long as they do? Why aren't we immortal? The answer to the most basic question of why we age is still an unsolved problem in biology, as the late famous biologist Sir Peter Medawar said in 1951. However, scientists are quickly closing in on at least some of the possible reasons why aging occurs. One of the most prominent theories of aging today is known as the free-radical hypothesis. During the process of metabolizing food and water and operating the machinery of life in a toxic world, damaging substances known as free radicals are generated. Although the human body has a highly efficient mechanism to protect itself from these damaging substances, it is not perfect. It is this lack of perfection that leads to accumulated damage to the DNA contained within the nucleus and the mitochondria (energy factories) of most cells. The level of damage moves up the scale of biological organization from DNA to cells, tissues, organs, organ systems, and ultimately to the whole organism—contributing to a degradation in the functioning of biological systems and an increased susceptibility to the diseases now associated with aging. Even though the damage that occurs to DNA is itself repaired with near perfection, it is the lack of perfection that is the basis for the free-radical hypothesis of aging.
There are a number of other prominent theories about the mechanisms of aging. Among them are the wear-and-tear theories and the discovery of an attribute of nuclear DNA known as the telomere. If the human body is viewed as a living machine with pulleys, pumps, levers, and hinges, much like that of a man-made mechanical device, it is evident that such machines cannot be operated indefinitely because of wear and tear. There are changes that occur in most human biological systems with the passage of time, including the loss of bone and muscle mass, increased brittleness of the circulatory system, and a degradation of the immune and reproductive systems.
Telomeres are the end caps of nuclear DNA, and they are known to shorten in length with each cell division. When they become short enough, the cell experiences a phenomenon known as programmed cell death, or apoptosis. An enzyme referred to as telomerase is known to be present in larger quantities in cells that are protected from aging, such as eggs, sperm, and stem cells, but there is no evidence so far to suggest that adding telomerase to other cells in the body would extend the length of life. Although some scientists believe that telomere shortening is one of the major biological mechanisms that contributes to aging, most people die well before it poses a serious problem for the whole organism.
Mortality in the twentieth and twenty-first centuries
During the twentieth century, humanity witnessed more dramatic declines in death rates and increases in life expectancy at birth than at any other time in history. Based on prevailing death rates in 1900, male and female babies born at that time were expected to live to 46.4 and 49.0 years, respectively. Now that the twentieth century has passed, it is known that babies born in the United States in 1900 fared a little better than predicted at the time because of unanticipated declines in death rates that occurred at every age throughout the century. There were four main forces that led to these declines in mortality. The first, which occurred early in the century, was a rapid decline in the risk of death among infants and children. The combination of improved sanitation, refrigeration, the more widespread use and distribution of clean drinking water, and the development of controlled indoor living and working environments led to rapid declines in the risk of waterborne and airborne infectious and parasitic diseases (IPDs). Infants and children benefitted the most from these developments because their immature immune systems placed them at a higher risk of death from IPDs. Examples of some of the IPDs that waned early in the twentieth century include diphtheria, tuberculosis, smallpox, and cholera. The second force that led to declining death rates was the more widespread use of hospitals for childbirth, which contributed to declines in both maternal and infant mortality. The third factor was the introduction of antibiotics in the middle of the century, which saved people of all ages from a wide range of bacterial infectious diseases. The fourth factor, which led to declining death rates at middle and older ages in the latter third of the twentieth century, arose from a combination of improved lifestyles, advances in surgical procedures, the development of pharmaceuticals, and a host of other advances in the biomedical sciences. In this case, death rates from such chronic degenerative diseases as heart disease and some cancers were observed to decline during this period. As evidence for the magnitude of the changes in mortality that occurred throughout the twentieth century, consider the fact that life expectancy at birth rose by thirty years during this time, an increase whose magnitude and speed exceeded anything observed during the previous 100,000 years.
There is considerable speculation among scientists about the future of human longevity. Some believe that medical progress will continue into the future at a pace that is even faster than the remarkable gains made in recent decades. Such advances will certainly include new surgical procedures and pharmaceuticals to combat the consequences of aging-related diseases, but advances are also expected in genetic engineering and in research involving embryonic stem cells. Even more speculative, but certainly within the realm of possibility, are longevity gains that could arise from efforts to combat the aging process itself. Although there is reason to be optimistic that death rates will continue to decline in the future, some scientists have demonstrated that the rise in life expectancy will probably be much slower in the twenty-first century than it was during the twentieth century. This is because it is far more difficult to add decades to the lives of people who have already lived seventy years or more than it was, early in the twentieth century, to add decades to the lives of children saved from dying of infectious diseases. However, under any condition, humanity is embarking on a fascinating new journey into the science of aging that will undoubtedly change modern notions about aging and death.
S. Jay Olshansky
See also Life Expectancy; Life Span Extension; Longevity: Reproduction; Longevity: Selection; Longevity: Social Aspects; Theories of Biological Aging.
Carnes, B. A.; Olshansky, S. J.; and Grahn, D. "Continuing the Search for a Law of Mortality." Population and Development Review 22 (1996): 231–264.
Gompertz, B. "On the Nature of the Function Expressive of the Law of Human Mortality and on a New Mode of Determining Life Contingencies." Philosophical Transactions of the Royal Society of London 115 (1825): 513–585.
Kirkwood, T. B. L. "Comparative Life Spans of Species: Why Do Species Have the Life Spans They Do?" American Journal of Clinical Nutrition 55 (1992): 1191S–1195S.
Makeham, W. "On the Law of Mortality and the Construction of Annuity Tables." Journal of the Institute of Actuaries 13 (1860): 325–358.
Medawar, P. B. An Unsolved Problem in Biology. London: Lewis, 1952.
Olshansky, S. J., and Carnes, B. A. The Quest for Immortality: Science at the Frontiers of Aging. New York: Norton Press, 2001.
Olshansky, S. J.; Carnes, B. A.; and Cassel, C. "In Search of Methuselah: Estimating the Upper Limits to Human Longevity." Science 250 (1990): 634–640.
Olshansky, S. J.; Carnes, B. A.; and Desesquelles, A. "Prospects for Human Longevity." Science (2001): 1491–1492.
Shryock, H., and Siegel, J. "Mortality." Chapter X in The Methods and Materials of Demography. Academic Press, 1976.
U.S. Bureau of the Census. Statistical Abstract of the United States. Washington, D.C.: U.S. Government Printing Office, 2000.
"Mortality." Encyclopedia of Aging. . Encyclopedia.com. (February 27, 2017). http://www.encyclopedia.com/education/encyclopedias-almanacs-transcripts-and-maps/mortality
"Mortality." Encyclopedia of Aging. . Retrieved February 27, 2017 from Encyclopedia.com: http://www.encyclopedia.com/education/encyclopedias-almanacs-transcripts-and-maps/mortality
The crude death-rate is the number of deaths in a year per 1000 population in a defined geographical area. In effect a refined version of the absolute number of deaths, this is not very informative, as so much depends on the sex-ratio and age-structure of a population. Crude death-rates can be multiplied by Area Comparability Factors to produce corrected rates which are comparable one with another and enable direct comparisons between areas. More commonly, age-standardized death-rates are calculated separately for men and women, to produce overall Standard Mortality Ratios (SMR) for each sex, or for both sexes combined, for a given area or social group. The SMR compares age-specific death-rates for a given area or social group with national average age-specific death-rates. It is computed as the actual or observed number of deaths in the group of interest, divided by the expected number of deaths, multiplied by 100. (The expected number of deaths is the number that would have occurred if age-specific death-rates in the group of interest were equal to the national averages for the year.) Age-specific crude death-rates and SMRs can also be calculated to identify the age-groups accounting for mortality rates above or below the national average. Five-year and ten-year age-bands are normally used, but broader bands are sometimes used for age-standardization calculations. Mortality rates are also calculated for specific causes of death, such as cholera, cancer, or suicide; and to monitor the control of infectious diseases, improvements in health care, or the social consequences of high unemployment.
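The SMR arithmetic described above can be written out directly: expected deaths are obtained by applying national age-specific rates to the group's age structure, and the observed count is divided by that figure. The population counts and rates in this sketch are hypothetical, purely to illustrate the calculation.

```python
# Sketch of a Standardized Mortality Ratio (SMR) calculation.
# All counts and rates below are hypothetical.

group_population = {"0-14": 20000, "15-64": 60000, "65+": 10000}   # people in the group of interest
national_rates   = {"0-14": 0.0005, "15-64": 0.003, "65+": 0.05}   # national deaths per person per year
observed_deaths  = 720                                             # deaths actually recorded in the group

# Expected deaths: the deaths that would occur if the group experienced national rates.
expected_deaths = sum(group_population[a] * national_rates[a] for a in group_population)
smr = observed_deaths / expected_deaths * 100

print(f"expected deaths: {expected_deaths:.0f}")
print(f"SMR: {smr:.0f}")   # above 100 means mortality higher than the national average for this age structure
```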
Some mortality rates apply to narrowly defined groups or events and so do not require age-standardization. The infant mortality rate is the number of deaths within the first year of life divided by the number of live births in the same year, multiplied by 1000. The neonatal mortality rate is the number of deaths within the first four weeks of life divided by the number of live births in the same year, multiplied by 1000. The perinatal mortality rate is the number of still-births plus the number of deaths within the first week of life, divided by total births (still-births plus live births) in the same year, again multiplied by 1000. The maternal mortality rate is the number of maternal deaths divided by total births, multiplied by 1000. See also LIFE-TABLE; MORBIDITY STATISTICS.
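The four definitions above translate directly into arithmetic. The counts in this sketch are hypothetical figures for a single year, chosen only to show the denominators each rate uses.

```python
# Sketch of the rates defined above, using hypothetical counts for one year.

live_births          = 12000
still_births         = 80
deaths_under_1_year  = 66     # deaths within the first year of life
deaths_under_4_weeks = 40     # deaths within the first four weeks of life
deaths_first_week    = 30     # deaths within the first week of life
maternal_deaths      = 2

total_births = still_births + live_births

infant_mortality    = deaths_under_1_year / live_births * 1000
neonatal_mortality  = deaths_under_4_weeks / live_births * 1000
perinatal_mortality = (still_births + deaths_first_week) / total_births * 1000
maternal_mortality  = maternal_deaths / total_births * 1000   # per 1000 total births, as defined above

print(f"infant:    {infant_mortality:.1f} per 1000 live births")
print(f"neonatal:  {neonatal_mortality:.1f} per 1000 live births")
print(f"perinatal: {perinatal_mortality:.1f} per 1000 total births")
print(f"maternal:  {maternal_mortality:.2f} per 1000 total births")
```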
"mortality." A Dictionary of Sociology. . Encyclopedia.com. (February 27, 2017). http://www.encyclopedia.com/social-sciences/dictionaries-thesauruses-pictures-and-press-releases/mortality
"mortality." A Dictionary of Sociology. . Retrieved February 27, 2017 from Encyclopedia.com: http://www.encyclopedia.com/social-sciences/dictionaries-thesauruses-pictures-and-press-releases/mortality
mor·tal·i·ty / môrˈtalətē/ • n. (pl. -ties) 1. the state of being subject to death: the work is increasingly haunted by thoughts of mortality. 2. death, esp. on a large scale: the causes of mortality among infants and young children. ∎ (also mortality rate) the number of deaths in a given area or period, or from a particular cause: postoperative mortality was 90 percent for some operations.
"mortality." The Oxford Pocket Dictionary of Current English. . Encyclopedia.com. (February 27, 2017). http://www.encyclopedia.com/humanities/dictionaries-thesauruses-pictures-and-press-releases/mortality
"mortality." The Oxford Pocket Dictionary of Current English. . Retrieved February 27, 2017 from Encyclopedia.com: http://www.encyclopedia.com/humanities/dictionaries-thesauruses-pictures-and-press-releases/mortality
mortality: see vital statistics.
"mortality." The Columbia Encyclopedia, 6th ed.. . Encyclopedia.com. (February 27, 2017). http://www.encyclopedia.com/reference/encyclopedias-almanacs-transcripts-and-maps/mortality
"mortality." The Columbia Encyclopedia, 6th ed.. . Retrieved February 27, 2017 from Encyclopedia.com: http://www.encyclopedia.com/reference/encyclopedias-almanacs-transcripts-and-maps/mortality
"mortality." A Dictionary of Biology. . Encyclopedia.com. (February 27, 2017). http://www.encyclopedia.com/science/dictionaries-thesauruses-pictures-and-press-releases/mortality
"mortality." A Dictionary of Biology. . Retrieved February 27, 2017 from Encyclopedia.com: http://www.encyclopedia.com/science/dictionaries-thesauruses-pictures-and-press-releases/mortality |
From Wikipedia, the free encyclopedia
In biology, a gene (from the Greek genos, meaning generation, birth, or gender) is a basic unit of heredity and a sequence of nucleotides in DNA or RNA that encodes the synthesis of a gene product, either RNA or protein.
During gene expression, the DNA is first copied into RNA. The RNA can be directly functional or be the intermediate template for a protein that performs a function. The transmission of genes to an organism's offspring is the basis of the inheritance of phenotypic traits. Together, an organism's genes make up its genotype. The genotype, along with environmental and developmental factors, determines what the phenotype will be. Most biological traits are under the influence of polygenes (many different genes) as well as gene–environment interactions. Some genetic traits are instantly visible, such as eye color or the number of limbs, and some are not, such as blood type, the risk for specific diseases, or the thousands of basic biochemical processes that constitute life.
Genes can acquire mutations in their sequence, leading to different variants, known as alleles, in the population. These alleles encode slightly different versions of a protein, which cause different phenotypical traits. Usage of the term "having a gene" (e.g., "good genes," "hair color gene") typically refers to containing a different allele of the same, shared gene. Genes evolve due to natural selection / survival of the fittest and genetic drift of the alleles.
The concept of the gene continues to be refined as new phenomena are discovered. For example, regulatory regions of a gene can be far removed from its coding regions, and coding regions can be split into several exons. Some viruses store their genome in RNA instead of DNA, and some gene products are functional non-coding RNAs. Therefore, a broad, modern working definition of a gene is any discrete locus of heritable, genomic sequence which affects an organism's traits by being expressed as a functional product or by regulation of gene expression.
The term gene was introduced by Danish botanist, plant physiologist and geneticist Wilhelm Johannsen in 1909. It was inspired by the ancient Greek γόνος, gonos, which means offspring and procreation.
Discovery of discrete inherited units
The existence of discrete inheritable units was first suggested by Gregor Mendel (1822–1884). From 1857 to 1864, in Brno, Austrian Empire (today's Czech Republic), he studied inheritance patterns in 8000 common edible pea plants, tracking distinct traits from parent to offspring. He described these mathematically as 2^n combinations, where n is the number of differing characteristics in the original peas. Although he did not use the term gene, he explained his results in terms of discrete inherited units that give rise to observable physical characteristics. This description prefigured Wilhelm Johannsen's distinction between genotype (the genetic material of an organism) and phenotype (the observable traits of that organism). Mendel was also the first to demonstrate independent assortment, the distinction between dominant and recessive traits, the distinction between a heterozygote and homozygote, and the phenomenon of discontinuous inheritance.
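The 2^n count can be checked by enumerating every combination of n two-state traits. The sketch below uses three pea traits for illustration; any set of two-state characters gives the same count.

```python
from itertools import product

# The 2^n count of trait combinations, enumerated for n = 3 two-state pea traits.
traits = {
    "seed shape":    ["round", "wrinkled"],
    "seed colour":   ["yellow", "green"],
    "flower colour": ["purple", "white"],
}
combinations = list(product(*traits.values()))
print(len(combinations))    # 8 == 2**3
print(combinations[0])      # e.g. ('round', 'yellow', 'purple')
```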
Prior to Mendel's work, the dominant theory of heredity was one of blending inheritance, which suggested that each parent contributed fluids to the fertilisation process and that the traits of the parents blended and mixed to produce the offspring. Charles Darwin developed a theory of inheritance he termed pangenesis, from Greek pan ("all, whole") and genesis ("birth") / genos ("origin"). Darwin used the term gemmule to describe hypothetical particles that would mix during reproduction.
Mendel's work went largely unnoticed after its first publication in 1866, but was rediscovered in the late 19th century by Hugo de Vries, Carl Correns, and Erich von Tschermak, who (claimed to have) reached similar conclusions in their own research. Specifically, in 1889, Hugo de Vries published his book Intracellular Pangenesis, in which he postulated that different characters have individual hereditary carriers and that inheritance of specific traits in organisms comes in particles. De Vries called these units "pangenes" (Pangens in German), after Darwin's 1868 pangenesis theory.
Twenty years later, in 1909, Wilhelm Johannsen introduced the term 'gene' and in 1906, William Bateson, that of 'genetics' while Eduard Strasburger, amongst others, still used the term 'pangene' for the fundamental physical and functional unit of heredity.: Translator's preface, viii
Discovery of DNA
Advances in understanding genes and inheritance continued throughout the 20th century. Deoxyribonucleic acid (DNA) was shown to be the molecular repository of genetic information by experiments in the 1940s to 1950s. The structure of DNA was studied by Rosalind Franklin and Maurice Wilkins using X-ray crystallography, which led James D. Watson and Francis Crick to publish a model of the double-stranded DNA molecule whose paired nucleotide bases indicated a compelling hypothesis for the mechanism of genetic replication.
In the early 1950s the prevailing view was that the genes in a chromosome acted like discrete entities, indivisible by recombination and arranged like beads on a string. The experiments of Benzer using mutants defective in the rII region of bacteriophage T4 (1955–1959) showed that individual genes have a simple linear structure and are likely to be equivalent to a linear section of DNA.
Collectively, this body of research established the central dogma of molecular biology, which states that proteins are translated from RNA, which is transcribed from DNA. This dogma has since been shown to have exceptions, such as reverse transcription in retroviruses. The modern study of genetics at the level of DNA is known as molecular genetics.
In 1972, Walter Fiers and his team were the first to determine the sequence of a gene: that of Bacteriophage MS2 coat protein. The subsequent development of chain-termination DNA sequencing in 1977 by Frederick Sanger improved the efficiency of sequencing and turned it into a routine laboratory tool. An automated version of the Sanger method was used in early phases of the Human Genome Project.
Modern synthesis and its successors
Evolutionary biologists have subsequently refined the gene concept of the modern synthesis; an example is George C. Williams' gene-centric view of evolution. He proposed an evolutionary concept of the gene as a unit of natural selection with the definition: "that which segregates and recombines with appreciable frequency.": 24 In this view, the molecular gene transcribes as a unit, and the evolutionary gene inherits as a unit. Related ideas emphasizing the centrality of genes in evolution were popularized by Richard Dawkins.
The vast majority of organisms encode their genes in long strands of DNA (deoxyribonucleic acid). DNA consists of a chain made from four types of nucleotide subunits, each composed of: a five-carbon sugar (2-deoxyribose), a phosphate group, and one of the four bases adenine, cytosine, guanine, and thymine.: 2.1
Two chains of DNA twist around each other to form a DNA double helix with the phosphate-sugar backbone spiraling around the outside, and the bases pointing inwards with adenine base pairing to thymine and guanine to cytosine. The specificity of base pairing occurs because adenine and thymine align to form two hydrogen bonds, whereas cytosine and guanine form three hydrogen bonds. The two strands in a double helix must, therefore, be complementary, with their sequence of bases matching such that the adenines of one strand are paired with the thymines of the other strand, and so on.: 4.1
Due to the chemical composition of the pentose residues of the nucleotides, DNA strands have directionality. One end of a DNA polymer contains an exposed hydroxyl group on the deoxyribose; this is known as the 3' end of the molecule. The other end contains an exposed phosphate group; this is the 5' end. The two strands of a double helix run in opposite directions. Nucleic acid synthesis, including DNA replication and transcription, occurs in the 5'→3' direction, because new nucleotides are added via a dehydration reaction that uses the exposed 3' hydroxyl as a nucleophile.: 27.2
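Complementarity and antiparallel orientation together mean that, given one strand written 5'→3', the other strand is its reverse complement. A minimal sketch of that rule, using a made-up sequence:

```python
# Minimal sketch: the complementary strand of a DNA sequence, read 5'->3'.
# A pairs with T and G pairs with C; because the two strands run antiparallel,
# the complement must also be reversed to be written 5'->3'.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand):
    return "".join(COMPLEMENT[base] for base in reversed(strand))

print(reverse_complement("ATGCGTAC"))   # GTACGCAT
```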
The expression of genes encoded in DNA begins by transcribing the gene into RNA, a second type of nucleic acid that is very similar to DNA, but whose monomers contain the sugar ribose rather than deoxyribose. RNA also contains the base uracil in place of thymine. RNA molecules are less stable than DNA and are typically single-stranded. Genes that encode proteins are composed of a series of three-nucleotide sequences called codons, which serve as the "words" in the genetic "language". The genetic code specifies the correspondence during protein translation between codons and amino acids. The genetic code is nearly the same for all known organisms.: 4.1
The total complement of genes in an organism or cell is known as its genome, which may be stored on one or more chromosomes. A chromosome consists of a single, very long DNA helix on which thousands of genes are encoded.: 4.2 The region of the chromosome at which a particular gene is located is called its locus. Each locus contains one allele of a gene; however, members of a population may have different alleles at the locus, each with a slightly different gene sequence.
The majority of eukaryotic genes are stored on a set of large, linear chromosomes. The chromosomes are packed within the nucleus in complex with storage proteins called histones to form a unit called a nucleosome. DNA packaged and condensed in this way is called chromatin.: 4.2 The manner in which DNA is stored on the histones, as well as chemical modifications of the histone itself, regulate whether a particular region of DNA is accessible for gene expression. In addition to genes, eukaryotic chromosomes contain sequences involved in ensuring that the DNA is copied without degradation of end regions and sorted into daughter cells during cell division: replication origins, telomeres and the centromere.: 4.2 Replication origins are the sequence regions where DNA replication is initiated to make two copies of the chromosome. Telomeres are long stretches of repetitive sequences that cap the ends of the linear chromosomes and prevent degradation of coding and regulatory regions during DNA replication. The length of the telomeres decreases each time the genome is replicated and has been implicated in the aging process. The centromere is required for binding spindle fibres to separate sister chromatids into daughter cells during cell division.: 18.2
Prokaryotes (bacteria and archaea) typically store their genomes on a single large, circular chromosome. Similarly, some eukaryotic organelles contain a remnant circular chromosome with a small number of genes.: 14.4 Prokaryotes sometimes supplement their chromosome with additional small circles of DNA called plasmids, which usually encode only a few genes and are transferable between individuals. For example, the genes for antibiotic resistance are usually encoded on bacterial plasmids and can be passed between individual cells, even those of different species, via horizontal gene transfer.
Whereas the chromosomes of prokaryotes are relatively gene-dense, those of eukaryotes often contain regions of DNA that serve no obvious function. Simple single-celled eukaryotes have relatively small amounts of such DNA, whereas the genomes of complex multicellular organisms, including humans, contain an absolute majority of DNA without an identified function. This DNA has often been referred to as "junk DNA". However, more recent analyses suggest that, although protein-coding DNA makes up barely 2% of the human genome, about 80% of the bases in the genome may be expressed, so the term "junk DNA" may be a misnomer.
Structure and function
The structure of a gene consists of many elements of which the actual protein coding sequence is often only a small part. These include DNA regions that are not transcribed as well as untranslated regions of the RNA.
Flanking the open reading frame, genes contain a regulatory sequence that is required for their expression. First, genes require a promoter sequence. The promoter is recognized and bound by transcription factors that recruit and help RNA polymerase bind to the region to initiate transcription.: 7.1 The recognition typically occurs as a consensus sequence like the TATA box. A gene can have more than one promoter, resulting in messenger RNAs (mRNA) that differ in how far they extend in the 5' end. Highly transcribed genes have "strong" promoter sequences that form strong associations with transcription factors, thereby initiating transcription at a high rate. Other genes have "weak" promoters that form weak associations with transcription factors and initiate transcription less frequently.: 7.2 Eukaryotic promoter regions are much more complex and difficult to identify than prokaryotic promoters.: 7.3
Additionally, genes can have regulatory regions many kilobases upstream or downstream of the open reading frame that alter expression. These act by binding to transcription factors which then cause the DNA to loop so that the regulatory sequence (and bound transcription factor) become close to the RNA polymerase binding site. For example, enhancers increase transcription by binding an activator protein which then helps to recruit the RNA polymerase to the promoter; conversely silencers bind repressor proteins and make the DNA less available for RNA polymerase.
The transcribed pre-mRNA contains untranslated regions at both ends which contain binding sites for ribosomes, RNA-binding proteins, and miRNA, as well as terminator, start, and stop codons. In addition, most eukaryotic open reading frames contain untranslated introns, which are removed, and exons, which are connected together in a process known as RNA splicing. Finally, the ends of gene transcripts are defined by cleavage and polyadenylation (CPA) sites, where newly produced pre-mRNA is cleaved and a string of ~200 adenosine monophosphates is added at the 3′ end. The poly(A) tail protects mature mRNA from degradation and has other functions, affecting translation, localization, and transport of the transcript from the nucleus. Splicing, followed by CPA, generates the final mature mRNA, which encodes the protein or RNA product. Although the general mechanisms defining locations of human genes are known, identification of the exact factors regulating these cellular processes is an area of active research. For example, known sequence features in the 3′-UTR can only explain half of all human gene ends.
Many prokaryotic genes are organized into operons, with multiple protein-coding sequences that are transcribed as a unit. The genes in an operon are transcribed as a continuous messenger RNA, referred to as a polycistronic mRNA. The term cistron in this context is equivalent to gene. The transcription of an operon's mRNA is often controlled by a repressor that can occur in an active or inactive state depending on the presence of specific metabolites. When active, the repressor binds to a DNA sequence at the beginning of the operon, called the operator region, and represses transcription of the operon; when the repressor is inactive transcription of the operon can occur (see e.g. Lac operon). The products of operon genes typically have related functions and are involved in the same regulatory network.: 7.3
Defining exactly what section of a DNA sequence comprises a gene is difficult. Regulatory regions of a gene such as enhancers do not necessarily have to be close to the coding sequence on the linear molecule because the intervening DNA can be looped out to bring the gene and its regulatory region into proximity. Similarly, a gene's introns can be much larger than its exons. Regulatory regions can even be on entirely different chromosomes and operate in trans to allow regulatory regions on one chromosome to come in contact with target genes on another chromosome.
Early work in molecular genetics suggested the concept that one gene makes one protein. This concept (originally called the one gene-one enzyme hypothesis) emerged from an influential 1941 paper by George Beadle and Edward Tatum on experiments with mutants of the fungus Neurospora crassa. Norman Horowitz, an early colleague on the Neurospora research, reminisced in 2004 that “these experiments founded the science of what Beadle and Tatum called biochemical genetics. In actuality they proved to be the opening gun in what became molecular genetics and all the developments that have followed from that.” The one gene-one protein concept has been refined since the discovery of genes that can encode multiple proteins by alternative splicing, and of coding sequences that are split into short sections across the genome whose mRNAs are concatenated by trans-splicing.
A broad operational definition is sometimes used to encompass the complexity of these diverse phenomena, where a gene is defined as a union of genomic sequences encoding a coherent set of potentially overlapping functional products. This definition categorizes genes by their functional products (proteins or RNA) rather than their specific DNA loci, with regulatory elements classified as gene-associated regions.
In all organisms, two steps are required to read the information encoded in a gene's DNA and produce the protein it specifies. First, the gene's DNA is transcribed to messenger RNA (mRNA).: 6.1 Second, that mRNA is translated to protein.: 6.2 RNA-coding genes must still go through the first step, but are not translated into protein. The process of producing a biologically functional molecule of either RNA or protein is called gene expression, and the resulting molecule is called a gene product.
The nucleotide sequence of a gene's DNA specifies the amino acid sequence of a protein through the genetic code. Sets of three nucleotides, known as codons, each correspond to a specific amino acid.: 6 The principle that three sequential bases of DNA code for each amino acid was demonstrated in 1961 using frameshift mutations in the rIIB gene of bacteriophage T4 (see Crick, Brenner et al. experiment).
Additionally, a "start codon", and three "stop codons" indicate the beginning and end of the protein coding region. There are 64 possible codons (four possible nucleotides at each of three positions, hence 43 possible codons) and only 20 standard amino acids; hence the code is redundant and multiple codons can specify the same amino acid. The correspondence between codons and amino acids is nearly universal among all known living organisms.
Transcription produces a single-stranded RNA molecule known as messenger RNA, whose nucleotide sequence is complementary to the DNA from which it was transcribed.: 6.1 The mRNA acts as an intermediate between the DNA gene and its final protein product. The gene's DNA is used as a template to generate a complementary mRNA. The mRNA matches the sequence of the gene's DNA coding strand because it is synthesised as the complement of the template strand. Transcription is performed by an enzyme called an RNA polymerase, which reads the template strand in the 3' to 5' direction and synthesizes the RNA from 5' to 3'. To initiate transcription, the polymerase first recognizes and binds a promoter region of the gene. Thus, a major mechanism of gene regulation is the blocking or sequestering of the promoter region, either by tight binding by repressor molecules that physically block the polymerase or by organizing the DNA so that the promoter region is not accessible.: 7
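The relationship between template strand, coding strand, and mRNA can be sketched in a few lines. The sequence below is made up for illustration; the point is that the mRNA built from the template ends up matching the coding strand, with U in place of T.

```python
# Sketch: transcription of a template strand into mRNA.
# The template is read 3'->5' and the mRNA is built 5'->3', so the mRNA
# matches the coding strand, with uracil (U) in place of thymine (T).

RNA_COMPLEMENT = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_3_to_5):
    return "".join(RNA_COMPLEMENT[base] for base in template_3_to_5)

template = "TACGGCATT"            # template strand written 3'->5' (made-up sequence)
print(transcribe(template))       # AUGCCGUAA: an mRNA beginning with the AUG start codon
```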
In prokaryotes, transcription occurs in the cytoplasm; for very long transcripts, translation may begin at the 5' end of the RNA while the 3' end is still being transcribed. In eukaryotes, transcription occurs in the nucleus, where the cell's DNA is stored. The RNA molecule produced by the polymerase is known as the primary transcript and undergoes post-transcriptional modifications before being exported to the cytoplasm for translation. One of the modifications performed is the splicing of introns which are sequences in the transcribed region that do not encode a protein. Alternative splicing mechanisms can result in mature transcripts from the same gene having different sequences and thus coding for different proteins. This is a major form of regulation in eukaryotic cells and also occurs in some prokaryotes.: 7.5
Translation is the process by which a mature mRNA molecule is used as a template for synthesizing a new protein.: 6.2 Translation is carried out by ribosomes, large complexes of RNA and protein responsible for carrying out the chemical reactions to add new amino acids to a growing polypeptide chain by the formation of peptide bonds. The genetic code is read three nucleotides at a time, in units called codons, via interactions with specialized RNA molecules called transfer RNA (tRNA). Each tRNA has three unpaired bases known as the anticodon that are complementary to the codon it reads on the mRNA. The tRNA is also covalently attached to the amino acid specified by the complementary codon. When the tRNA binds to its complementary codon in an mRNA strand, the ribosome attaches its amino acid cargo to the new polypeptide chain, which is synthesized from amino terminus to carboxyl terminus. During and after synthesis, most new proteins must fold to their active three-dimensional structure before they can carry out their cellular functions.: 3
Genes are regulated so that they are expressed only when the product is needed, since expression draws on limited resources.: 7 A cell regulates its gene expression depending on its external environment (e.g. available nutrients, temperature and other stresses), its internal environment (e.g. cell division cycle, metabolism, infection status), and its specific role if in a multicellular organism. Gene expression can be regulated at any step: from transcriptional initiation, to RNA processing, to post-translational modification of the protein. The regulation of lactose metabolism genes in E. coli (lac operon) was the first such mechanism to be described in 1961.
A typical protein-coding gene is first copied into RNA as an intermediate in the manufacture of the final protein product.: 6.1 In other cases, the RNA molecules are the actual functional products, as in the synthesis of ribosomal RNA and transfer RNA. Some RNAs known as ribozymes are capable of enzymatic function, and microRNA has a regulatory role. The DNA sequences from which such RNAs are transcribed are known as non-coding RNA genes.
Some viruses store their entire genomes in the form of RNA, and contain no DNA at all. Because they use RNA to store genes, their cellular hosts may synthesize their proteins as soon as they are infected and without the delay in waiting for transcription. On the other hand, RNA retroviruses, such as HIV, require the reverse transcription of their genome from RNA into DNA before their proteins can be synthesized. RNA-mediated epigenetic inheritance has also been observed in plants and very rarely in animals.
Organisms inherit their genes from their parents. Asexual organisms simply inherit a complete copy of their parent's genome. Sexual organisms have two copies of each chromosome because they inherit one complete set from each parent.: 1
According to Mendelian inheritance, variations in an organism's phenotype (observable physical and behavioral characteristics) are due in part to variations in its genotype (particular set of genes). Each gene specifies a particular trait with a different sequence of a gene (alleles) giving rise to different phenotypes. Most eukaryotic organisms (such as the pea plants Mendel worked on) have two alleles for each trait, one inherited from each parent.: 20
Alleles at a locus may be dominant or recessive; dominant alleles give rise to their corresponding phenotypes when paired with any other allele for the same trait, whereas recessive alleles give rise to their corresponding phenotype only when paired with another copy of the same allele. Knowing the genotypes of the organisms makes it possible to determine which alleles are dominant and which are recessive. For example, if the allele specifying tall stems in pea plants is dominant over the allele specifying short stems, then pea plants that inherit one tall allele from one parent and one short allele from the other parent will also have tall stems. Mendel's work demonstrated that alleles assort independently in the production of gametes, or germ cells, ensuring variation in the next generation. Although Mendelian inheritance remains a good model for many traits determined by single genes (including a number of well-known genetic disorders), it does not include the physical processes of DNA replication and cell division.
DNA replication and cell division
The growth, development, and reproduction of organisms relies on cell division; the process by which a single cell divides into two usually identical daughter cells. This requires first making a duplicate copy of every gene in the genome in a process called DNA replication.: 5.2 The copies are made by specialized enzymes known as DNA polymerases, which "read" one strand of the double-helical DNA, known as the template strand, and synthesize a new complementary strand. Because the DNA double helix is held together by base pairing, the sequence of one strand completely specifies the sequence of its complement; hence only one strand needs to be read by the enzyme to produce a faithful copy. The process of DNA replication is semiconservative; that is, the copy of the genome inherited by each daughter cell contains one original and one newly synthesized strand of DNA.: 5.2
The rate of DNA replication in living cells was first measured as the rate of phage T4 DNA elongation in phage-infected E. coli and found to be impressively rapid. During the period of exponential DNA increase at 37 °C, the rate of elongation was 749 nucleotides per second.
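At the measured elongation rate, the time a single replication fork would need to copy a genome of a given length can be estimated. The genome length below is an assumed round figure for phage T4 (roughly 1.7 × 10^5 base pairs) and the calculation ignores the fact that replication proceeds from multiple forks at once.

```python
# Back-of-the-envelope estimate: time to copy a genome at the measured elongation rate.
# The genome length is an assumed approximate figure for phage T4, and a single fork is assumed.

elongation_rate = 749          # nucleotides per second (figure quoted above)
genome_length = 170_000        # assumed approximate T4 genome size, in base pairs

seconds = genome_length / elongation_rate
print(f"~{seconds / 60:.1f} minutes for a single fork")   # roughly 3.8 minutes
```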
After DNA replication is complete, the cell must physically separate the two copies of the genome and divide into two distinct membrane-bound cells.: 18.2 In prokaryotes (bacteria and archaea) this usually occurs via a relatively simple process called binary fission, in which each circular genome attaches to the cell membrane and is separated into the daughter cells as the membrane invaginates to split the cytoplasm into two membrane-bound portions. Binary fission is extremely fast compared to the rates of cell division in eukaryotes. Eukaryotic cell division is a more complex process known as the cell cycle; DNA replication occurs during a phase of this cycle known as S phase, whereas the process of segregating chromosomes and splitting the cytoplasm occurs during M phase.: 18.1
The duplication and transmission of genetic material from one generation of cells to the next is the basis for molecular inheritance and the link between the classical and molecular pictures of genes. Organisms inherit the characteristics of their parents because the cells of the offspring contain copies of the genes in their parents' cells. In asexually reproducing organisms, the offspring will be a genetic copy or clone of the parent organism. In sexually reproducing organisms, a specialized form of cell division called meiosis produces cells called gametes or germ cells that are haploid, or contain only one copy of each gene.: 20.2 The gametes produced by females are called eggs or ova, and those produced by males are called sperm. Two gametes fuse to form a diploid fertilized egg, a single cell that has two sets of genes, with one copy of each gene from the mother and one from the father.: 20
During the process of meiotic cell division, an event called genetic recombination or crossing-over can sometimes occur, in which a length of DNA on one chromatid is swapped with a length of DNA on the corresponding homologous non-sister chromatid. This can result in reassortment of otherwise linked alleles.: 5.5 The Mendelian principle of independent assortment asserts that each of a parent's two genes for each trait will sort independently into gametes; which allele an organism inherits for one trait is unrelated to which allele it inherits for another trait. This is in fact only true for genes that do not reside on the same chromosome or are located very far from one another on the same chromosome. The closer two genes lie on the same chromosome, the more closely they will be associated in gametes and the more often they will appear together (known as genetic linkage). Genes that are very close are essentially never separated because it is extremely unlikely that a crossover point will occur between them.
DNA replication is for the most part extremely accurate; however, errors (mutations) do occur.: 7.6 The error rate in eukaryotic cells can be as low as 10^-8 per nucleotide per replication, whereas for some RNA viruses it can be as high as 10^-3. This means that each generation, each human genome accumulates 1–2 new mutations. Small mutations can be caused by DNA replication and the aftermath of DNA damage and include point mutations, in which a single base is altered, and frameshift mutations, in which a single base is inserted or deleted. Either of these mutations can change the gene by missense (change a codon to encode a different amino acid) or nonsense (a premature stop codon). Larger mutations can be caused by errors in recombination, leading to chromosomal abnormalities including the duplication, deletion, rearrangement, or inversion of large sections of a chromosome. Additionally, DNA repair mechanisms can introduce mutational errors when repairing physical damage to the molecule. The repair, even with mutation, is more important to survival than restoring an exact copy, for example when repairing double-strand breaks.: 5.4
When multiple different alleles for a gene are present in a species's population, the gene is said to be polymorphic. Most different alleles are functionally equivalent; however, some alleles can give rise to different phenotypic traits. A gene's most common allele is called the wild type, and rare alleles are called mutants. The genetic variation in relative frequencies of different alleles in a population is due to both natural selection and genetic drift. The wild-type allele is not necessarily the ancestor of less common alleles, nor is it necessarily fitter.
Most mutations within genes are neutral, having no effect on the organism's phenotype (silent mutations). Some mutations do not change the amino acid sequence because multiple codons encode the same amino acid (synonymous mutations). Other mutations can be neutral if they lead to amino acid sequence changes, but the protein still functions similarly with the new amino acid (e.g. conservative mutations). Many mutations, however, are deleterious or even lethal, and are removed from populations by natural selection. Genetic disorders are the result of deleterious mutations and can be due to spontaneous mutation in the affected individual, or can be inherited. Finally, a small fraction of mutations are beneficial, improving the organism's fitness and are extremely important for evolution, since their directional selection leads to adaptive evolution.: 7.6
Genes with a most recent common ancestor, and thus a shared evolutionary ancestry, are known as homologs. These genes appear either from gene duplication within an organism's genome, where they are known as paralogous genes, or are the result of divergence of the genes after a speciation event, where they are known as orthologous genes,: 7.6 and often perform the same or similar functions in related organisms. It is often assumed that the functions of orthologous genes are more similar than those of paralogous genes, although the difference is minimal.
The relationship between genes can be measured by comparing the sequence alignment of their DNA.: 7.6 The degree of sequence similarity between homologous genes is called conserved sequence. Most changes to a gene's sequence do not affect its function and so genes accumulate mutations over time by neutral molecular evolution. Additionally, any selection on a gene will cause its sequence to diverge at a different rate. Genes under stabilizing selection are constrained and so change more slowly whereas genes under directional selection change sequence more rapidly. The sequence differences between genes can be used for phylogenetic analyses to study how those genes have evolved and how the organisms they come from are related.
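A crude measure of conservation between two homologous sequences is the fraction of identical positions. The sketch below assumes the sequences have already been aligned and contain no gaps; real comparisons use alignment algorithms such as Needleman-Wunsch, which this does not implement.

```python
# Minimal sketch: percent identity between two sequences assumed to be already
# aligned and of equal length (no gaps). This only counts matching positions;
# it is not a substitute for a proper sequence alignment.

def percent_identity(seq_a, seq_b):
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be pre-aligned to the same length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

print(percent_identity("ATGCCGTA", "ATGACGTA"))   # 87.5
```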
Origins of new genes
The most common source of new genes in eukaryotic lineages is gene duplication, which creates copy number variation of an existing gene in the genome. The resulting genes (paralogs) may then diverge in sequence and in function. Sets of genes formed in this way compose a gene family. Gene duplications and losses within a family are common and represent a major source of evolutionary biodiversity. Sometimes, gene duplication may result in a nonfunctional copy of a gene, or a functional copy may be subject to mutations that result in loss of function; such nonfunctional genes are called pseudogenes.: 7.6
"Orphan" genes, whose sequence shows no similarity to existing genes, are less common than gene duplicates. The human genome contains an estimate 18 to 60 genes with no identifiable homologs outside humans. Orphan genes arise primarily from either de novo emergence from previously non-coding sequence, or gene duplication followed by such rapid sequence change that the original relationship becomes undetectable. De novo genes are typically shorter and simpler in structure than most eukaryotic genes, with few if any introns. Over long evolutionary time periods, de novo gene birth may be responsible for a significant fraction of taxonomically-restricted gene families.
Horizontal gene transfer refers to the transfer of genetic material through a mechanism other than reproduction. This mechanism is a common source of new genes in prokaryotes, sometimes thought to contribute more to genetic variation than gene duplication. It is a common means of spreading antibiotic resistance, virulence, and adaptive metabolic functions. Although horizontal gene transfer is rare in eukaryotes, likely examples have been identified of protist and alga genomes containing genes of bacterial origin.
Number of genes
The genome size and the number of genes it encodes vary widely between organisms. The smallest genomes occur in viruses and viroids (which act as a single non-coding RNA gene). Conversely, plants can have extremely large genomes, with rice containing >46,000 protein-coding genes. The total number of protein-coding genes (the Earth's proteome) is estimated to be 5 million sequences.
Although the number of base-pairs of DNA in the human genome has been known since the 1960s, the estimated number of genes has changed over time as definitions of genes and methods of detecting them have been refined. Initial theoretical predictions of the number of human genes were as high as 2,000,000. Early experimental measures indicated there to be 50,000–100,000 transcribed genes (expressed sequence tags). Subsequently, the sequencing in the Human Genome Project indicated that many of these transcripts were alternative variants of the same genes, and the total number of protein-coding genes was revised down to ~20,000, with 13 genes encoded on the mitochondrial genome. With the GENCODE annotation project, that estimate has continued to fall to 19,000. Of the human genome, only 1–2% consists of protein-coding sequences, with the remainder being 'noncoding' DNA such as introns, retrotransposons, and noncoding RNAs. Every multicellular organism has all its genes in each cell of its body, but not every gene functions in every cell.
Essential genes are the set of genes thought to be critical for an organism's survival. This definition assumes the abundant availability of all relevant nutrients and the absence of environmental stress. Only a small portion of an organism's genes are essential. In bacteria, an estimated 250–400 genes are essential for Escherichia coli and Bacillus subtilis, which is less than 10% of their genes. Half of these genes are orthologs in both organisms and are largely involved in protein synthesis. In the budding yeast Saccharomyces cerevisiae the number of essential genes is slightly higher, at 1000 genes (~20% of their genes). Although the number is more difficult to measure in higher eukaryotes, mice and humans are estimated to have around 2000 essential genes (~10% of their genes). The synthetic organism, Syn 3, has a minimal genome of 473 essential genes and quasi-essential genes (necessary for fast growth), although 149 have unknown function.
Essential genes include housekeeping genes (critical for basic cell functions) as well as genes that are expressed at different times in the organism's development or life cycle. Housekeeping genes are used as experimental controls when analysing gene expression, since they are constitutively expressed at a relatively constant level.
Genetic and genomic nomenclature
Gene nomenclature has been established by the HUGO Gene Nomenclature Committee (HGNC), a committee of the Human Genome Organisation, for each known human gene in the form of an approved gene name and symbol (short-form abbreviation), which can be accessed through a database maintained by HGNC. Symbols are chosen to be unique, and each gene has only one symbol (although approved symbols sometimes change). Symbols are preferably kept consistent with other members of a gene family and with homologs in other species, particularly the mouse due to its role as a common model organism.
Genetic engineering is the modification of an organism's genome through biotechnology. Since the 1970s, a variety of techniques have been developed to specifically add, remove and edit genes in an organism. Recently developed genome engineering techniques use engineered nuclease enzymes to create targeted DNA repair in a chromosome to either disrupt or edit a gene when the break is repaired. The related term synthetic biology is sometimes used to refer to extensive genetic engineering of an organism.
Genetic engineering is now a routine research tool with model organisms. For example, genes are easily added to bacteria and lineages of knockout mice with a specific gene's function disrupted are used to investigate that gene's function. Many organisms have been genetically modified for applications in agriculture, industrial biotechnology, and medicine.
For multicellular organisms, typically the embryo is engineered which grows into the adult genetically modified organism. However, the genomes of cells in an adult organism can be edited using gene therapy techniques to treat genetic diseases.
- Copy number variation
- Full genome sequencing
- Gene-centric view of evolution
- Gene dosage
- Gene expression
- Gene family
- Gene nomenclature
- Gene patent
- Gene pool
- Gene redundancy
- Genetic algorithm
- List of gene prediction software
- List of notable genes
- Predictive medicine
- Quantitative trait locus
- Selfish gene
- "1909: The Word Gene Coined". www.genome.gov. Retrieved 8 March 2021. "...Wilhelm Johannsen coined the word gene to describe the Mendelian units of heredity..."
- Roth SC (July 2019). "What is genomic medicine?". Journal of the Medical Library Association. University Library System, University of Pittsburgh. 107 (3): 442–448. doi:10.5195/jmla.2019.604. PMC 6579593. PMID 31258451.
- "What is a gene?: MedlinePlus Genetics". MedlinePlus. 17 September 2020. Retrieved 4 January 2021.
- Hirsch ED (2002). The new dictionary of cultural literacy. Boston: Houghton Mifflin. ISBN 0-618-22647-8. OCLC 50166721.
- "Studying Genes". www.nigms.nih.gov. Retrieved 15 January 2021.
- Elston RC, Satagopan JM, Sun S (2012). "Genetic terminology". Statistical Human Genetics. Methods in Molecular Biology. 850. Humana Press. pp. 1–9. doi:10.1007/978-1-61779-555-8_1. ISBN 978-1-61779-554-1. PMC 4450815. PMID 22307690.
- Gericke NM, Hagberg M (5 December 2006). "Definition of historical models of gene function and their relation to students' understanding of genetics". Science & Education. 16 (7–8): 849–881. Bibcode:2007Sc&Ed..16..849G. doi:10.1007/s11191-006-9064-4. S2CID 144613322.
- Pearson H (May 2006). "Genetics: what is a gene?". Nature. 441 (7092): 398–401. Bibcode:2006Natur.441..398P. doi:10.1038/441398a. PMID 16724031. S2CID 4420674.
- Pennisi E (June 2007). "Genomics. DNA study forces rethink of what it means to be a gene". Science. 316 (5831): 1556–7. doi:10.1126/science.316.5831.1556. PMID 17569836. S2CID 36463252.
Domain of a function
To understand what the domain of a function is, it is important to understand what an ordered pair is.
An ordered pair is a pair of numbers inside parentheses such as (5, 6).
Generally speaking, you can write (x, y)
x is called the x-coordinate and y is called the y-coordinate
If you have more than one ordered pair, you call the collection a set of ordered pairs, or a relation
Basically, the domain of a function is the set of first coordinates (x-coordinates) of a set of ordered pairs or relation.
For example, take a look at the following relation or set of ordered pairs.
( 1, 2), ( 2, 4), (3, 6), ( 4, 8), ( 5,10), (6, 12), (7,14)
The domain is 1, 2, 3, 4, 5, 6, 7. We will not focus on the range too much here. This lesson is about the domain of a function. However, the range is the set of second coordinates: 2, 4, 6, 8, 10, 12, 14
Let's say you have a business (selling books) and your business follows the following model:
Sell 3 books, make 12 dollars. (3, 12)
Sell 4 books, make 16 dollars. (4, 16)
Sell 5 books, make 20 dollars. (5, 20)
Sell 6 books, make 24 dollars. (6, 24)
The domain of your business is 3, 4, 5, and 6.
Pretend now that you can sell unlimited books. (3, 4, 5, 6, 7, ........).
Your domain in this case will be all whole numbers
You may then need a more convenient way to represent your business situation
A close look at your business model shows that the y-coordinate equals the x-coordinate × 4
y = 4x
You can write (x, 4x). In this case, the domain is x and x represents all whole numbers or your entire domain for this situation.
In reality, it makes more sense for you to sell unlimited books.
Thus, when the domain is only 3, 4, 5, and 6, we call this type of domain restricted domain, since you restrict yourself only to a portion of your entire domain
In some cases, some value(s) must be excluded from your domain in order for things to make sense
Consider for instance all integers and their inverses as shown below with ordered pairs
...,(-4, 1/-4), (-3, 1/-3), (-2, 1/-2), (-1, 1/-1), (0, 1/0),(1, 1/1) (2, 1/2), (3, 1/3), (4, 1/4), ...
One of these domain values will not make sense. Do you know which one?
It is 0. If the domain is 0, then 1/0 does not make sense since 1/0 is not defined or has no answer
Instead of writing all these ordered pairs, you could just write (x, 1/x) and say that the domain of definition is x such that x is not equal to 0
In general, the domain of definition of any rational expression is any number except those that make the denominator equal to 0
What is the domain of (6x + 7)/(x - 5)?
The denominator equals 0 when x - 5 = 0, that is, when x = 5
The domain of this rational expression is any number except 5
What is the domain of (-x + 5)/(x² + 4)?
The denominator equals 0 when x² + 4 = 0
x² + 4 is never equal to 0. Why? Because x² is never negative, no matter what number you replace x with
For example, 4² = 16 and 16 is positive. 16 + 4 is still positive
(-5)² = 25 and 25 is positive. 25 + 4 is still positive
However, if you change the denominator to x² - 4, the denominator will be 0 for some numbers
x² - 4 = 0 when x = -2 and x = 2
2² - 4 = 2 × 2 - 4 = 4 - 4 = 0
(-2)² - 4 = (-2) × (-2) - 4 = 4 - 4 = 0
The domain will be in this case any number except 2 and -2
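To make this concrete, here is a minimal sketch (my own illustration, using Python and the sympy library, neither of which is part of this lesson) that finds the values that must be excluded from the domain for the denominators discussed above:

```python
# Find the x values that make a denominator zero; those values are excluded from the domain.
import sympy as sp

x = sp.symbols('x')

def excluded_values(denominator):
    """Return the real x values for which the denominator equals zero."""
    return sp.solveset(sp.Eq(denominator, 0), x, domain=sp.S.Reals)

print(excluded_values(x - 5))      # {5}: the domain of (6x + 7)/(x - 5) is every number except 5
print(excluded_values(x**2 + 4))   # EmptySet: x**2 + 4 is never zero, so nothing is excluded
print(excluded_values(x**2 - 4))   # {-2, 2}: exclude -2 and 2
```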
Consider now all integers and their square roots as shown below with ordered pairs
...,(-4, √-4), (-3, √-3), (-2, √-2), (-1, √-1), (0, √0),(1, √1) (2, √2), (3, √3), (4, √4), ...
Many of these domain values will not make sense. Do you know which ones?
They are -4, -3, -2, and -1. For any of these domain values, the square root does not exist. At least it does not exist for real numbers. It does exist for complex numbers, but this is a completely different story that we will not consider here
Our assumption here is that we are working with real numbers only when looking for the domain of a function, and the square root does not exist for real numbers that are negative!
Instead of writing all these ordered pairs, you could just write (x, √x) and say that the domain of definition is x such that x is greater than or equal to 0
What is the domain of √ (x - 5)?
When you deal with square roots, the number under the square root sign is called a radicand
√(x - 5) is defined when the radicand x - 5 is greater than or equal to 0
x - 5 ≥ 0
x - 5 + 5 ≥ 0 + 5
x ≥ 5
The domain of definition is any number greater than or equal to 5
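A similar sketch (again my own illustration with sympy) finds where a radicand is greater than or equal to 0, which is exactly the domain of the square-root expression:

```python
# The domain of sqrt(radicand) is the set of real x for which the radicand is non-negative.
import sympy as sp

x = sp.symbols('x', real=True)

def sqrt_domain(radicand):
    """Return the set of real x for which sqrt(radicand) is defined."""
    return sp.solveset(radicand >= 0, x, domain=sp.S.Reals)

print(sqrt_domain(x - 5))  # Interval(5, oo): x must be greater than or equal to 5
print(sqrt_domain(x))      # Interval(0, oo): plain sqrt(x) needs x >= 0
```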
As you can see, a function does not always make sense for every value of its variable. It is your job to find the values that must be excluded when you look for the domain of a function
Solving Equation Worksheets
This segment has an endless collection of equation worksheets based on one-step, two-step and multi-step equations; writing the equation of a line in various forms; graphing linear equations and more. High school topics such as quadratic equations, absolute value equations and systems of equations are also featured here. Practice solving the equations by using the various download options available. A number of free printable worksheets are also up for grabs!
One-step Equation Worksheets
This set of worksheets requires students to solve one-step equations involving integers, fractions and decimals by performing addition, subtraction, multiplication or division operations. It also contains math riddles, finding the cost of objects, translating phrases into one-step equations and more.
- One-step Equation Worksheets (51 worksheets)
Two-step Equation Worksheets
Click on the link to access exclusive worksheets on solving two-step equations that include integers, fractions and decimals. A number of MCQ's, equations in geometry, translating two-step equations and many more exercises are available for practice.
- Two-step Equation Worksheets (42 worksheets)
Multi-step Equation Worksheets
These worksheets require students to perform multiple steps to solve the equations. Use the knowledge gained in solving one-step and two-step equations to solve these multi-step equations. A number of application oriented problems based on geometrical shapes are also included here.
- Multi-step Equation Worksheets (36 worksheets)
Equation Word Problems Worksheets
Download and print this enormous collection of one-step, two-step and multi-step equation word problems that include integers, fractions, and decimals. MCQ worksheets form a perfect tool to assess a learner's understanding of the topic.
- Equation Word Problems Worksheets (30 worksheets)
Equation of a line Worksheets
Click here for worksheets on equation of a line. Write the equation of a line in standard form, two-point form, slope-intercept form and point-slope form. Download the complete set of worksheets on equation of a line that comprise worksheets on parallel and perpendicular lines as well.
- Equation of a line Worksheets (90 worksheets)
Graphing Linear Equation Worksheets
You are just a click away from a huge collection of worksheets on graphing linear equations. Plot the points and graph the line. Use the x values to complete the function tables and graph the line. The MCQ worksheets form a perfect tool to test students' knowledge of this topic.
- Graphing Linear Equation Worksheets (24 worksheets)
Quadratic Equation Worksheets
Click on the link for an extensive set of worksheets on quadratic equations. Solve the quadratic equations by factoring, completing the square, quadratic formula or square root methods. Find the sum and product of the roots. Analyze the nature of the roots.
- Quadratic Equation Worksheets (72 worksheets)
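As a small illustration of the quadratic-formula method mentioned above (the code and the sample equation are my own and are not taken from the worksheets), a short Python sketch:

```python
# Solve ax^2 + bx + c = 0 with the quadratic formula; cmath handles a negative discriminant.
import cmath

def solve_quadratic(a, b, c):
    """Return the two roots of ax^2 + bx + c = 0 (complex if the discriminant is negative)."""
    sqrt_disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + sqrt_disc) / (2 * a), (-b - sqrt_disc) / (2 * a)

# Example: x^2 - 5x + 6 = 0 factors as (x - 2)(x - 3), so the roots are 3 and 2.
print(solve_quadratic(1, -5, 6))  # ((3+0j), (2+0j))
```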
Absolute Value Equation Worksheets
Use these worksheets to teach your students about the absolute value of integers. This module includes exercises like evaluating the absolute value expression at a particular value, input and output tables, graph the absolute value function and solve the various types of absolute value equation.
- Absolute Value Equation Worksheets (44 worksheets)
Systems of Equations Worksheets
Solve these systems of equations by the elimination or substitution method. The equations contain two or three variables. Equations with two variables represent straight lines, whereas equations with three variables represent planes.
- Systems of Equations Worksheets (14 worksheets)
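The elimination method amounts to Gaussian elimination on the coefficient matrix, so a linear system can also be solved in matrix form. A minimal sketch (my own illustration with numpy; the example system is made up):

```python
# Solve a three-variable linear system A x = b; numpy performs the elimination internally.
import numpy as np

# Illustrative system:
#    x + 2y +  z =  4
#   2x -  y + 3z =  9
#   3x + 2y - 2z = -1
A = np.array([[1.0,  2.0,  1.0],
              [2.0, -1.0,  3.0],
              [3.0,  2.0, -2.0]])
b = np.array([4.0, 9.0, -1.0])

print(np.linalg.solve(A, b))  # the values of x, y and z
```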
Rotation in mathematics is a concept originating in geometry. Any rotation is a motion of a certain space that preserves at least one point. It can describe, for example, the motion of a rigid body around a fixed point. A rotation is different from other types of motions: translations, which have no fixed points, and (hyperplane) reflections, each of which has an entire (n − 1)-dimensional flat of fixed points in an n-dimensional space.
Mathematically, a rotation is a map. All rotations about a fixed point form a group under composition called the rotation group (of a particular space). But in mechanics and, more generally, in physics, this concept is frequently understood as a coordinate transformation (importantly, a transformation of an orthonormal basis), because for any motion of a body there is an inverse transformation which if applied to the frame of reference results in the body being at the same coordinates. For example, in two dimensions rotating a body clockwise about a point keeping the axes fixed is equivalent to rotating the axes counterclockwise about the same point while the body is kept fixed. These two types of rotation are called active and passive transformations.
Related definitions and terminology
The rotation group is a Lie group of rotations about a fixed point. This (common) fixed point is called the center of rotation and is usually identified with the origin. The rotation group is a point stabilizer in a broader group of (orientation-preserving) motions.
For a particular rotation:
- The axis of rotation is a line of its fixed points. Axes of rotation exist only for n > 2.
- The plane of rotation is a plane that is invariant under the rotation. Unlike the axis, its points are not themselves fixed. The axis (where one is present) and the plane of a rotation are orthogonal.
A representation of rotations is a particular formalism, either algebraic or geometric, used to parametrize a rotation map. This meaning is somewhat the inverse of its meaning in group theory.
Rotations of (affine) spaces of points and of the respective vector spaces are not always clearly distinguished. The former are sometimes referred to as affine rotations (although the term is misleading), whereas the latter are vector rotations. See the discussion below for details.
Definitions and representations
In Euclidean geometry
A motion of a Euclidean space is the same as its isometry: it leaves the distance between any two points unchanged after the transformation. But a (proper) rotation also has to preserve the orientation structure. The "improper rotation" term refers to isometries that reverse (flip) the orientation. In the language of group theory the distinction is expressed as direct vs indirect isometries in the Euclidean group, where the former comprise the identity component. Any direct Euclidean motion can be represented as a composition of a rotation about the fixed point and a translation.
There are no non-trivial rotations in one dimension. In two dimensions, only a single angle is needed to specify a rotation about the origin – the angle of rotation that specifies an element of the circle group (also known as U(1)). The rotation is acting to rotate an object counterclockwise through an angle θ about the origin; see below for details. Composition of rotations sums their angles modulo 1 turn, which implies that all two-dimensional rotations about the same point commute. Rotations about different points, in general, do not commute. Any two-dimensional direct motion is either a translation or a rotation; see Euclidean plane isometry for details.
Rotations in three-dimensional space differ from those in two dimensions in a number of important ways. Rotations in three dimensions are generally not commutative, so the order in which rotations are applied is important even about the same point. Also, unlike two-dimensional case, a three-dimensional direct motion, in general position, is not a rotation but a screw operation. Rotations about the origin have three degrees of freedom (see rotation formalisms in three dimensions for details), the same as the number of dimensions.
A three-dimensional rotation can be specified in a number of ways. The most usual methods are:
- Euler angles (pictured at the left). Any rotation about the origin can be represented as the composition of three rotations defined as the motion obtained by changing one of the Euler angles while leaving the other two constant. They constitute a mixed axes of rotation system, where the first angle moves the line of nodes around the external axis z, the second rotates around the line of nodes and the third one is an intrinsic rotation around an axis fixed in the body that moves. This presentation is convenient only for rotations about a fixed point.
- Axis–angle representation (pictured at the right) specifies an angle together with the axis about which the rotation takes place. It can be easily visualised, either as a pair consisting of the axis and the angle, or as a single rotation vector whose direction gives the axis and whose length gives the angle.
- Matrices, versors (quaternions), and other algebraic things: see the "Linear and multilinear algebra formalism" section below for details.
A general rotation in four dimensions has only one fixed point, the centre of rotation, and no axis of rotation; see rotations in 4-dimensional Euclidean space for details. Instead the rotation has two mutually orthogonal planes of rotation, each of which is fixed in the sense that points in each plane stay within the planes. The rotation has two angles of rotation, one for each plane of rotation, through which points in the planes rotate. If these are ω1 and ω2 then all points not in the planes rotate through an angle between ω1 and ω2. Rotations in four dimensions about a fixed point have six degrees of freedom. A four-dimensional direct motion in general position is a rotation about certain point (as in all even Euclidean dimensions), but screw operations exist also.
Linear and multilinear algebra formalism
When one considers motions of the Euclidean space that preserve the origin, the distinction between points and vectors, important in pure mathematics, can be erased because there is a canonical one-to-one correspondence between points and position vectors. The same is true for geometries other than Euclidean, but whose space is an affine space with a supplementary structure; see an example below. Alternatively, the vector description of rotations can be understood as a parametrization of geometric rotations up to their composition with translations. In other words, one vector rotation presents many equivalent rotations about all points in the space.
A motion that preserves the origin is the same as a linear operator on vectors that preserves the same geometric structure but expressed in terms of vectors. For Euclidean vectors, this expression is their magnitude (Euclidean norm). In components, such an operator is expressed by an n × n orthogonal matrix that is multiplied with column vectors.
As it was already stated, a (proper) rotation is different from an arbitrary fixed-point motion in its preservation of the orientation of the vector space. Thus, the determinant of a rotation orthogonal matrix must be 1. The only other possibility for the determinant of an orthogonal matrix is −1, and this result means the transformation is a hyperplane reflection, a point reflection (for odd n), or another kind of improper rotation. Matrices of all proper rotations form the special orthogonal group.
In two dimensions, to carry out a rotation using a matrix, the point (x, y) to be rotated counterclockwise is written as a column vector, then multiplied by a rotation matrix calculated from the angle θ,
where (x′, y′) are the coordinates of the point after rotation, and the formulae for x′ and y′ are
x′ = x cos θ − y sin θ
y′ = x sin θ + y cos θ.
The vectors (x, y) and (x′, y′) have the same magnitude and are separated by an angle θ, as expected.
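A minimal numeric sketch of this two-dimensional rotation (my own illustration; numpy is not mentioned in the text):

```python
# Rotate a 2D point counterclockwise about the origin with the standard rotation matrix.
import numpy as np

def rotate_2d(point, theta):
    """Return (x, y) rotated counterclockwise by theta radians about the origin."""
    c, s = np.cos(theta), np.sin(theta)
    rotation = np.array([[c, -s],
                         [s,  c]])
    return rotation @ np.asarray(point)

print(rotate_2d((1.0, 0.0), np.pi / 2))  # approximately (0, 1)
```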
Points in the plane R² can also be represented as complex numbers: the point (x, y) in the plane is represented by the complex number z = x + iy.
This can be rotated through an angle θ by multiplying it by e^(iθ) and then expanding the product using Euler's formula, e^(iθ) = cos θ + i sin θ, which gives
z e^(iθ) = (x + iy)(cos θ + i sin θ) = (x cos θ − y sin θ) + i (x sin θ + y cos θ),
and equating real and imaginary parts gives the same result as the two-dimensional matrix form: x′ = x cos θ − y sin θ and y′ = x sin θ + y cos θ.
Since complex numbers form a commutative ring, vector rotations in two dimensions are commutative, unlike in higher dimensions. They have only one degree of freedom, as such rotations are entirely determined by the angle of rotation.
As in two dimensions, a matrix can be used to rotate a point (x, y, z) to a point (x′, y′, z′). The matrix used is a 3×3 matrix A.
This is multiplied by a column vector representing the point to give the result (x′, y′, z′).
The set of all appropriate matrices together with the operation of matrix multiplication is the rotation group SO(3). The matrix A is a member of the three-dimensional special orthogonal group, SO(3); that is, it is an orthogonal matrix with determinant 1. That it is an orthogonal matrix means that its rows are a set of orthogonal unit vectors (so they are an orthonormal basis), as are its columns, making it simple to spot and check whether a matrix is a valid rotation matrix.
Above-mentioned Euler angles and axis–angle representations can be easily converted to a rotation matrix.
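As a sketch of the axis–angle case (my own illustration, using Rodrigues' rotation formula, which is listed among the related topics below):

```python
# Convert an axis-angle rotation to a 3x3 matrix via Rodrigues' formula:
# R = cos(t) I + sin(t) K + (1 - cos(t)) k k^T, where K is the cross-product matrix of the unit axis k.
import numpy as np

def axis_angle_to_matrix(axis, theta):
    k = np.asarray(axis, dtype=float)
    k = k / np.linalg.norm(k)                       # normalise the rotation axis
    K = np.array([[0.0, -k[2],  k[1]],
                  [k[2],  0.0, -k[0]],
                  [-k[1], k[0],  0.0]])             # skew-symmetric cross-product matrix
    return np.cos(theta) * np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * np.outer(k, k)

R = axis_angle_to_matrix([0, 0, 1], np.pi / 2)      # quarter turn about the z-axis
print(np.round(R @ np.array([1.0, 0.0, 0.0]), 6))   # approximately [0, 1, 0]
print(round(float(np.linalg.det(R)), 6))            # 1.0, as required for a proper rotation
```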
Another way to represent a rotation of three-dimensional Euclidean vectors is by the quaternions described below.
Unit quaternions, or versors, are in some ways the least intuitive representation of three-dimensional rotations. They are not the three-dimensional instance of a general approach. They are more compact than matrices and easier to work with than all other methods, so are often preferred in real-world applications.
A versor (also called a rotation quaternion) consists of four real numbers, constrained so that the norm of the quaternion is 1. This constraint limits the degrees of freedom of the quaternion to three, as required. Unlike matrices and complex numbers, two multiplications are needed:
x′ = q x q⁻¹,
where q is the versor, q⁻¹ is its inverse, and x is the vector treated as a quaternion with zero scalar part. The quaternion can be related to the rotation vector form of the axis–angle rotation by the exponential map over the quaternions,
q = e^(v/2),
where v is the rotation vector treated as a quaternion.
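A minimal numeric sketch of the sandwich product q x q⁻¹ (my own illustration; quaternions are stored here as (w, x, y, z), and for a unit quaternion the inverse is its conjugate):

```python
# Rotate a 3D vector with a versor (unit quaternion) using the product q * v * conj(q).
import numpy as np

def quat_multiply(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def rotate_vector(q, v):
    """Rotate vector v by the unit quaternion q."""
    q = np.asarray(q, dtype=float)
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])   # inverse of a unit quaternion
    v_quat = np.concatenate(([0.0], v))              # vector as a quaternion with zero scalar part
    return quat_multiply(quat_multiply(q, v_quat), q_conj)[1:]

theta = np.pi / 2
q = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])   # quarter turn about the z-axis
print(np.round(rotate_vector(q, np.array([1.0, 0.0, 0.0])), 6))  # approximately [0, 1, 0]
```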
A single multiplication by a versor, either left or right, is itself a rotation, but in four dimensions. Any four-dimensional rotation about the origin can be represented with two quaternion multiplications: one left and one right, by two different unit quaternions.
More generally, coordinate rotations in any dimension are represented by orthogonal matrices. The set of all orthogonal matrices in n dimensions which describe proper rotations (determinant = +1), together with the operation of matrix multiplication, forms the special orthogonal group SO(n).
Matrices are often used for doing transformations, especially when a large number of points are being transformed, as they are a direct representation of the linear operator. Rotations represented in other ways are often converted to matrices before being used. They can be extended to represent rotations and transformations at the same time using homogeneous coordinates. Projective transformations are represented by 4×4 matrices. They are not rotation matrices, but a transformation that represents a Euclidean rotation has a 3×3 rotation matrix in the upper left corner.
The main disadvantage of matrices is that they are more expensive to calculate and do calculations with. Also, in calculations where numerical instability is a concern, matrices can be more prone to it, so calculations to restore orthonormality, which are expensive for matrices, need to be done more often.
More alternatives to the matrix formalism
As was demonstrated above, there exist three multilinear algebra rotation formalisms: one of U(1), or complex numbers, for two dimensions, and yet two of versors, or quaternions, for three and four dimensions.
In general (and not necessarily for Euclidean vectors) the rotation of a vector space equipped with a quadratic form can be expressed as a bivector. This formalism is used in geometric algebra and, more generally, in the Clifford algebra representation of Lie groups.
In non-Euclidean geometries
In spherical geometry, a direct motion of the n-sphere (an example of the elliptic geometry) is the same as a rotation of (n + 1)-dimensional Euclidean space about the origin (SO(n + 1)). For odd n, most of these motions do not have fixed points on the n-sphere and, strictly speaking, are not rotations of the sphere; such motions are sometimes referred to as Clifford translations. Rotations about a fixed point in elliptic and hyperbolic geometries are not different from Euclidean ones.
One application of this is special relativity, as it can be considered to operate in a four-dimensional space, spacetime, spanned by three space dimensions and one of time. In special relativity, this space is linear and the four-dimensional rotations, called Lorentz transformations, have practical physical interpretations. Minkowski space is not a metric space, and the term isometry is inapplicable to Lorentz transformations.
If a rotation is only in the three space dimensions, i.e. in a plane that is entirely in space, then this rotation is the same as a spatial rotation in three dimensions. But a rotation in a plane spanned by a space dimension and a time dimension is a hyperbolic rotation, a transformation between two different reference frames, which is sometimes called a "Lorentz boost". These transformations demonstrate the pseudo-Euclidean nature of the Minkowski space. They are sometimes described as squeeze mappings and frequently appear on Minkowski diagrams which visualize (1 + 1)-dimensional pseudo-Euclidean geometry on planar drawings. The study of relativity is concerned with the Lorentz group generated by the space rotations and hyperbolic rotations.
Whereas SO(3) rotations, in physics and astronomy, correspond to rotations of the celestial sphere as a 2-sphere in Euclidean 3-space, Lorentz transformations from SO(3;1)+ induce conformal transformations of the celestial sphere. This is a broader class of sphere transformations known as Möbius transformations.
Rotations define important classes of symmetry: rotational symmetry is an invariance with respect to a particular rotation. Circular symmetry is an invariance with respect to all rotations about a fixed axis.
As was stated above, Euclidean rotations are applied to rigid body dynamics. Moreover, most of the mathematical formalism in physics (such as vector calculus) is rotation-invariant; see rotation for more physical aspects. Euclidean rotations and, more generally, the Lorentz symmetry described above are thought to be symmetry laws of nature. In contrast, reflectional symmetry is not a precise symmetry law of nature.
The complex-valued matrices analogous to real orthogonal matrices are the unitary matrices. The set of all unitary matrices in a given dimension n forms a unitary group U(n) of degree n; and its subgroup representing proper rotations is the special unitary group SU(n) of degree n. These complex rotations are important in the context of spinors. The elements of SU(2) are used to parametrize three-dimensional Euclidean rotations (see above), as well as respective transformations of the spin (see representation theory of SU(2)).
- Aircraft principal axes
- Charts on SO(3)
- Coordinate rotations and reflections
- Infinitesimal rotation
- Irrational rotation
- Orientation (geometry)
- Rodrigues' rotation formula
- CORDIC algorithm
Isaiah's Math Lessons : Geome-Tricks!
1) Eccentricity that Matters!
How do we know what type of curve it is based on its Eccentricity?
Keyword is CEPH
CEPH stands for Circle, Ellipse, Parabola and Hyperbola
Circle and Parabola will be our basis with eccentricity of 0 and 1 respectively.
If eccentricity is 0 < e < 1, then it is an Ellipse (since E is between C and P)
If eccentricity is e > 1, it is Hyperbola (since H is after P).
Eccentricity is C.E.P.H
2) How to find the area of an ellipse in the general form Ax² + Cy² + Dx + Ey + F = 0
Area = π √(A × C)
Find the area of an ellipse having the equation 9x² + 4y² - 18x - 8y - 23 = 0
A = 9 and C = 4
Area = π √(9 × 4)
Area = 6π square units
3) Finding the Center of a curve (circle, ellipse, hyperbola) of the form Ax² + Cy² + Dx + Ey + F = 0, where C can be negative for a Hyperbola
Center = (-D/2A, -E/2C)
Example: Find the center of the curve having the equation 9x² + 4y² - 18x - 8y - 23 = 0
Applying the formula, Center = (-(-18)/(2 × 9), -(-8)/(2 × 4)) = (1, 1)
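One way to sanity-check this center formula (a sketch of my own, using sympy, which is not part of the original post) is to note that the center of a central conic is the point where both partial derivatives of the left-hand side vanish, which reproduces (-D/2A, -E/2C):

```python
# The center of a central conic Ax^2 + Cy^2 + Dx + Ey + F = 0 is where both partial derivatives vanish.
import sympy as sp

x, y = sp.symbols('x y')
expr = 9*x**2 + 4*y**2 - 18*x - 8*y - 23   # the ellipse from the example above

center = sp.solve([sp.diff(expr, x), sp.diff(expr, y)], [x, y])
print(center)  # {x: 1, y: 1}, matching (-D/2A, -E/2C) = (1, 1)
```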
4) Finding the Area of an Equilateral Triangle given the Height.
Area = √3 × h²/3
Example: What is the area of an equilateral triangle whose height is 6 cm?
Applying the given formula: Area = √3 × 6²/3 = 12√3 cm²
5) Centroid of a triangle with coordinates (a, b), (c, d) and (e, f).
Centroid = ((a + c + e)/3, (b + d + f)/3)
Example: What are the coordinates of the centroid of a triangle with vertices at (0, 0), (10, 0) and (5, 9)?
Solution: Centroid = ((0 + 10 + 5)/3, (0 + 0 + 9)/3) = (5, 3)
6) Area of an Equilateral triangle Inscribed in a Circle of radius R.
Area = 3√3 R²/4
Example: What is the area of an equilateral triangle inscribed in a circle of radius 4 cm?
Solution: Area = 3 × √3 × 4²/4 = 12√3 cm²
7) Area of a Rhombus given the Sum of the Diagonals (D) and the side (S).
Area = (D/2)² - S²
Find the Area of a rhombus whose sum of diagonals is 14 cm and whose length of one side is 5 cm.
Solution: Area = (14/2)² - 5² = 49 - 25 = 24 cm²
8) Area of a square given the equal distance (d) from a point to 2 consecutive vertices and to the midpoint of the opposite side
Area = 64d²/25
Example: What is the area of a square with a point inside at equal distances of 5 cm from 2 consecutive vertices and from the midpoint of the opposite side?
Area = 64 × 5²/25 = 64 cm²
9) Length of the Altitude of a Right triangle to its hypotenuse with sides a, b and c.
Length = ab/c
Example: What is the length of the altitude of a right triangle to its hypotenuse whose sides are 3 cm, 4 cm and 5 cm?
Since 3, 4 and 5 is a Pythagorean triple, we will proceed directly with the formula
Length = 3 x 4 / 5 = 12/5 cm
10) Length of a Fold of a rectangle when folded perpendicular to the diagonal.
Length = ac/b , where a = smaller side, c = diagonal and b = larger side of the rectangle
Example: A rectangle ABCD which measures 6 cm and 8 cm is folded once, perpendicular to the diagonal AC, such that the opposite vertices A and C coincide. Find the length of the fold.
By the Pythagorean Theorem, c = √(a² + b²)
We will get c = 10
Substitute: Length = 6 x 10/ 8 = 7.5 cm
11) Volume of the common solid to 2 intersecting perpendicular cylinders with equal radius R.
Volume (common) = 16R³/3
Example: Find the volume of the solid common to 2 cylinders intersecting at 90 degrees if the radius of both cylinders is equal to 3 cm.
Volume (common) = 16 × 3³/3 = 144 cm³
12) Area of a regular octagon of side S.
Area = 2S²(1 + √2)
Example: Find the area of a regular octagon with side 8 cm.
Area = 2 × 8²(1 + √2) = 128(1 + √2) cm²
13) Perimeter of a Right triangle circumscribing a circle with hypotenuse C and radius of R.
P = 2 (R + C)
Example: What is the perimeter of a right triangle circumscribing a circle of radius 1.5 cm, if the hypotenuse is 10 cm?
P = 2(R+C)
P = 2(1.5cm + 10cm)
P = 23 cm.
14) Sum of the squares of the medians of a triangle given sides A, B and C.
Sum of Squares of Medians = (3/4)(A² + B² + C²)
Example: What is the sum of the squares of the lengths of the medians of a triangle with sides 3, 4 and 5 units?
Sum of Squares of Medians = (3/4)(3² + 4² + 5²) = 37.5 square units
15) Number of line segments and rays given N number of points.
Number of Rays = 2(N-1)
Number of Line Segments = N (N - 1) / 2
Example: How many rays and line segments can be formed from 6 points?
No. of Rays = 2(6-1) = 10 Rays
No. of Line Segments = 6(6-1)/2 = 15 Line Segments
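A tiny sketch (my own illustration) that checks these two counting formulas for N = 6, counting the segments directly as pairs of points and applying the ray formula for collinear points:

```python
# Count line segments as unordered pairs of points, and rays along a line through N collinear points.
from itertools import combinations

def num_segments(n):
    return len(list(combinations(range(n), 2)))  # same as n * (n - 1) / 2

def num_rays(n):
    # a ray must start at one of the points and pass through another; (n - 1) rays point each way
    return 2 * (n - 1)

print(num_rays(6), num_segments(6))  # 10 rays and 15 line segments
```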
Jun 03, 2013
If distance calculations based on redshift are inaccurate, what does that mean for the consensus opinion about the age or the size of the Universe?
In the 1960s astronomers discovered quasi-stellar objects, better known as quasars. They have extremely large redshifts, implying that they are located near the farthest edge of the observable Universe. Quasars are referred to as “quasi-stellar” because they are relatively small, often little more than a light-year in apparent diameter, at their assumed distance, yet emit so much energy that they are thought to be the most powerful continuously radiant objects in the Universe.
The only other active energy sources detectable at such vast distances are gamma ray bursters (GRBs). However, GRBs last for mere minutes, whereas quasars shine continuously. They remain as bright as when they were first discovered.
Some astronomers soon found that many quasars are associated with spiral galaxies (like M82) and appear to be near the galaxy instead of billions of light-years distant. Based on other data, such as quasars’ anomalous apparent brightness when compared with their redshifts, Hubble’s expanding Universe theory was called into question.
Long before the quasar problem arose, though, Edwin Hubble himself was moved to suggest that inflation might not have taken place in the "early" Universe. He thought that new observational data were necessary in order to settle the question. In 1947, he was waiting for the new 200-inch telescope at Mt. Palomar to be built:
“It seems likely that redshift may not be due to an expanding Universe, and much of the speculations on the structure of the universe may require re-examination… We may predict with confidence that the 200-inch will tell us whether the red-shifts must be accepted as evidence of a rapidly expanding Universe, or attributed to some new principle of nature.” (Publications of the Astronomical Society of the Pacific Vol. 59, No. 349).
Unfortunately, nothing definitive has resulted from astronomers working with the Hale telescope or the many space-borne telescopes that have been launched since then. Instead, redshift and inflation have become something of a dogma among the astronomical community and new, ever more arcane mathematical excursions have been added to the mix, as was discussed in part one.
Although many observations contradict the consensus view, and have been doing so for 40 years or more, those data are ignored or marginalized. High redshift quasars, as previously mentioned, are found in axial alignment with galaxies that possess substantially lower redshift. Indeed, they are sometimes connected to those lower redshift galaxies by “bridges” of luminous material.
Halton Arp was the lone voice among a crowd of scientists who conformed to the standard Big Bang model when he began to publish papers that did not demonstrate that inflation—or the Big Bang hypothesis—was valid. As Edwin Hubble predicted, Arp’s research using the 200-inch Hale reflector demonstrated “some new principle of nature.”
One of the more interesting images that substantiates the need for a revised cosmology is NGC 4319 and its companion quasar, Markarian 205. Arp called attention to the fact that the lower redshift galaxy is physically connected to the higher redshift quasar. A filament between the two objects violates the measured distances because no such connection should be possible. After all, NGC 4319 (from redshift calculations) is said to be about 600 million light-years from Earth, while Markarian 205 is around a billion light-years away.
If these objects are physically connected they must reside locally with each other at the same distance from Earth. The discrepancy in their redshifts has to be from some other factor not related to their distances—there must be something intrinsic to their makeup that leads to the deviation.
Arp assembled a Catalog of Discrepant Redshift Associations that describes anomalous structure or physical links among objects with radically different redshifts. Some of the observations show quasar pairs being ejected in opposite directions from active galaxies. This led to the so-called ejection model of galaxy formation. In brief, high redshift quasars around galaxies, such as the aforementioned M82, are the “daughters” of the mature galaxy. Their various redshifts do not indicate distance, but age from the time of ejection.
Arp speculates that the redshift measurement of quasars is composed not of a velocity value alone, but also depends on what he calls “intrinsic redshift.” Intrinsic redshift is a property of matter, like mass or charge, and can change over time. According to his theory, when quasars are ejected from a parent galaxy they possess a high intrinsic redshift, z = 2 or greater.
As the quasars move away from their origin within the galactic nucleus, their intrinsic redshift begins to decrease until it reaches somewhere near z = 0.3. At that point, the quasar resembles a galaxy, albeit a small one. The momentum of ejection is eventually spent: the mass of the quasar increases while its speed decreases, until it may become a companion galaxy. In this way, galaxies form and age, evolving from highly redshifted quasars to small irregular galaxies and then into larger barred spirals.
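For readers unfamiliar with the notation, the z values quoted here use the standard definition of redshift as a fractional wavelength shift (this is conventional, not specific to Arp’s model):

z = \frac{\lambda_{\mathrm{observed}} - \lambda_{\mathrm{emitted}}}{\lambda_{\mathrm{emitted}}}

so z = 2 means the observed wavelengths are three times the emitted ones.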
Other examples of fast-moving quasars in front of slower moving galaxies, or connected to them with luminous filaments, have been observed. NGC 7603, for instance, a distorted spiral galaxy with a single arm, is joined by that arm to a smaller companion with a much higher redshift. Within the bright material of the arm are two other objects, each with redshifts different from the galaxy pair.
There is nothing conclusive in the mainstream scientific journals about Arp’s data as of this writing. His telescope time was cut off many years ago by the decision makers who allot that time to various research groups. His revelations concerning problems with consensus dogma were considered intolerable, so he was summarily censured by his peers. However, the evidence he continues to gather and promote ought to make us stop and think: is the Big Bang dead? How big and how old is the Universe if redshift readings are not reliable indicators of distance?
How long before the cannonball hits the ground? (Parametric Equations) Day 1 of 2
Lesson 1 of 7
Objective: SWBAT define a parametric equation and use the equation to graph.
I start class by giving students a projectile motion problem. Although my students have seen this type of problem before, I use this familiar concept to introduce the parameter t.
After working individually, students will then discuss with one another how they determined where the cannonball hit the ground. I have found that some students may forget to set the equation equal to 0. In order to help students work through this concept I ask, "What is the height of the cannonball when it hits the ground?" A student usually says something about y = 0. It is not uncommon for students to be challenged when solving this problem because the problem does not use integers. I let my students sit with this difficulty in order to guide them toward the best method to solve it.
Finally, we review the factoring and find the value of x.
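For readers who want to see the computation itself, here is a minimal sketch in Python. The height function and its coefficients are hypothetical stand-ins (the actual bell-work equation is not reproduced above); the point is only the "set the height equal to 0 and solve the quadratic" step.

import math

# Hypothetical bell-work model: height (m) as a function of horizontal distance x (m).
# y(x) = a*x^2 + b*x + c ; these coefficients are made up for illustration.
a, b, c = -0.002, 1.0, 3.0

# The cannonball is on the ground when the height is 0, so solve a*x^2 + b*x + c = 0.
disc = b**2 - 4*a*c
x1 = (-b + math.sqrt(disc)) / (2*a)
x2 = (-b - math.sqrt(disc)) / (2*a)

# Keep the physically meaningful (positive) root.
landing_x = max(x1, x2)
print(f"Cannonball lands at x = {landing_x:.1f} m")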
I now ask the students to determine how long it will take for the cannonball to hit the ground. This slide shows the two parts I use for this problem. To begin, I cover up the second part of the slide when I ask the first question. I want students to think about this question and try to determine the time.
Some students will make a prediction. The most common idea is to take the value they found in the bell work (1989.7 m) and divide that by the initial velocity. I put the prediction on the board. I then ask the students if the beginning velocity will be the velocity through the entire trip. "How do you know the velocity or rate of change will not be constant?" This question reminds students that we are working with a quadratic. I ask students what will happen to the velocity as the cannonball rises and then falls. For some students this idea is initially difficult to understand, so we stop to discuss how gravity slows the cannonball down until it starts to fall and then increases its velocity. Typically, students who have already taken physics will understand this concept, but I still introduce and review it with the whole class, since some students may not have taken physics yet and everyone needs to understand the context of the problem.
After 3 or 4 minutes of discussion I uncover the second part of the slide. Students are shown the parametric equation that represents the cannonball flight from the bell work. The students will then work to determine the time it takes for the cannonball to hit the ground. In the next lesson the students will be given a formula to find the vertical and horizontal distances, but at this point we just use the formulas we know to answer the questions.
Most students will realize they found the horizontal distance in the bell work. The students use the x(t) equation to determine the time in flight.
By the end of this exercise, I would like students to understand that they could also use the y(t) equation to find the time. I ask the students what the vertical height is when the cannonball lands. We set the y(t) equation equal to zero and show that this equation also gives us the same answer.
We compare this answer to the prediction the students made. We see that the prediction is around 13 while the actual answer is around 15. I ask the students why this is the case, leading students to see that the rate is not constant.
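Since the slide with the actual equations is not reproduced here, the sketch below uses hypothetical launch values (initial speed 150 m/s, angle 30 degrees, g = 9.8 m/s^2). It only illustrates the two routes to the flight time discussed above: solving x(t) = horizontal distance, and solving y(t) = 0.

import math

v0, angle_deg, g = 150.0, 30.0, 9.8            # hypothetical launch values
theta = math.radians(angle_deg)

def x(t):                                      # horizontal position (m)
    return v0 * math.cos(theta) * t

def y(t):                                      # vertical position (m)
    return v0 * math.sin(theta) * t - 0.5 * g * t**2

# Route 2 first: set y(t) = 0; the nonzero root of the quadratic is t = 2*v0*sin(theta)/g.
t_from_y = 2 * v0 * math.sin(theta) / g

# Route 1: the bell work supplies the horizontal landing distance (computed here from
# the same model so the two routes agree); x(t) is linear, so solve x(t) = distance for t.
horizontal_distance = x(t_from_y)
t_from_x = horizontal_distance / (v0 * math.cos(theta))

naive_guess = horizontal_distance / v0         # the common student prediction
print(round(t_from_x, 1), round(t_from_y, 1), round(naive_guess, 1))  # 15.3, 15.3, 13.3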
After working with the problem, I put the definition of a plane curve, which includes defining a parametric equation, on the board for students to read. After reading the definition I ask these questions to make sure the students have an adequate understanding of the concept:
- What is a continuous function?
- What is meant by the ordered pair (f(t), g(t))?
- How could you find the ordered pairs?
I remind students how the graph is the set of all (f(t),g(t)) points and display a parametric equation to graph. The students will now discuss how to graph this equation in their groups. I look for different ways students work through the problem to share with the class.
My goal is for students to think about how to organize the information. I usually have a student make a table with 3 rows or 3 columns labeled t, x, and y. Other students will just find x and y without identifying the t-value that connects them.
I have students put up different examples of the tables, and then I ask a student to graph the points and make a curve. However, I have not yet discussed how we show the direction of the graph when the student is graphing.
I ask "If this graph represented the movement of an object, how could I show how the object is moving and where it is at different t-values?". This guides students to put arrows that represent the direction along the curve and put the value of t and some of the points.
I use this next opportunity to show my students how to use technology to graph. I think it is especially important for the students to see how they can write the conic in parametric form to be graphed. To start, I go through the steps from Graphing parametric equation on a calculator with them. We use the previous problem to see how to graph with the calculator.
I ask students to change the TMIN and TMAX to see how this affects the graph.
Another parametric equation is shared with the students. This is an equation of a circle centered at (0,0) with a radius of 1. I ask "what values of t would ensure that I have a complete graph?" Students should realize that the period of x(t)=cos t is 2pi. Students change the t values in the window and graph. You may also need to have the students square their window to make the graph look like a circle.
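A minimal sketch of the same idea off the calculator: graphing x = cos t, y = sin t with t running from 0 to 2*pi (one full period), with an equal aspect ratio playing the role of "squaring the window" so the circle does not look like an oval.

import numpy as np
import matplotlib.pyplot as plt

# t plays the role of TMIN..TMAX on the calculator; one period of cos/sin is 2*pi.
t = np.linspace(0, 2 * np.pi, 400)
x = np.cos(t)
y = np.sin(t)

plt.plot(x, y)
plt.gca().set_aspect("equal")   # "square the window" so the unit circle looks round
plt.title("x = cos t, y = sin t, 0 <= t <= 2*pi")
plt.show()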
I then adjust the circle so that the graph has a radius of 3 and other values. I now ask exploratory questions:
- What do you notice?
- How would the graph change if I switched cosine and sine?
- Could you generalize what you are noticing?
- The center of these circles is always at the origin. What equation would give me a circle with the center at (4, 5) and a radius of 3?
- How could you generalize the formula?
In the next lesson we will convert the equation to rectangular to verify that this is the parametric equation for a circle.
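For reference, the generalization the questions above are driving at can be written as the standard parametric circle (this is conventional, not something unique to this lesson):

x(t) = h + r\cos t, \qquad y(t) = k + r\sin t, \qquad 0 \le t \le 2\pi

This traces a circle of radius r centered at (h, k); the center-(4, 5), radius-3 circle asked about above is x(t) = 4 + 3 cos t, y(t) = 5 + 3 sin t.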
I now want students to think beyond the circle, so I ask "What would we do to make the figure an ellipse?" Some students immediately say we need to make the equations have different coefficients on sine and cosine. I then reinforce the concept with these questions:
- What do the coefficients represent for the graph of an ellipse?
- How could you write an equation to represent an ellipse?
I have found through this activity that students like to see what kind of figures they can make using parametric equations. Therefore, I give my students a few minutes to make graphs from different equations. It is great to have students share some of their drawings.
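A minimal sketch, with made-up semi-axes, of the ellipse idea the students arrive at (different coefficients on cosine and sine):

import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 2 * np.pi, 400)
a, b = 5, 2                      # hypothetical semi-axes; a != b gives an ellipse
plt.plot(a * np.cos(t), b * np.sin(t))
plt.gca().set_aspect("equal")
plt.title("x = 5 cos t, y = 2 sin t")
plt.show()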
At the end of class I leave a challenge problem for the students to consider. I ask the students to determine a parametric equation that will produce a hyperbola.
Understandably, this is still a difficult problem for them at this point. However, as we begin converting between parametric and rectangular the students will begin to see patterns with trigonometric identities (Pythagorean Identities) that will help them find the equation.
If students come up with an idea, I have them write it down so they can share it at the start of the next day.
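For completeness, the identity route the students will meet when converting to rectangular form: since \sec^2 t - \tan^2 t = 1, one standard parametrization of a hyperbola (not the only one) is

x(t) = a\sec t, \qquad y(t) = b\tan t, \qquad \text{which satisfies } \frac{x^2}{a^2} - \frac{y^2}{b^2} = 1.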
Unit #11, Chapters 24-25 Lecture Powerpoint
Nationalist Revolutions Sweep the West and The Industrial Revolution
AP Unit #11
Peninsulares / Creoles / Mulattos
Peninsulares – Spanish & Portuguese officials who lived temporarily in
Latin America for political & economic gain. Peninsulares were at the top
of the Latin American class structure, holding all important positions.
Creoles – Descendants of Europeans who were born in Latin America.
Creoles were the leaders of Latin American revolutions, favoring
enlightenment ideals and opposing European domination of their trade.
Mulattos – People of mixed European and African descent who made up
the lowest class in Latin American society.
By the end of the 18th century, the political ideals stemming from the revolution in North America put
European control of Latin America in peril. Latin America’s social class structure played a big role in how the
19th century revolutions occurred and what they achieved. Social classes divided colonial Latin America.
Peninsulares were Spanish and Portuguese officials who resided temporarily in Latin America for political and
economic gain. At the top of the class structure, peninsulares dominated Latin America. They held all
important positions. Creoles controlled land and business and resented the peninsulares. The peninsulares
regarded the creoles as second-class citizens. Mestizos were the largest group. They worked as servants or laborers.
Simon Bolivar
A wealthy Venezuelan Creole, Bolivar led a volunteer army of
revolutionaries in a struggle for independence from Spain from
1811 to 1822. Bolivar is revered as the “George Washington of
South America”. He hoped to unite the Spanish colonies of South
America into a single country called Grand Colombia but was
unable to do so as a result of geographic and political obstacles.
Even though they could not hold high public office, creoles were the least oppressed of those born in Latin America. They were also the best educated. In fact, many wealthy young creoles traveled to Europe for their
education. In Europe, they read about and adopted Enlightenment ideas. When they returned to Latin
America, they brought ideas of revolution with them. Napoleon’s conquest of Spain in 1808 triggered revolts
in the Spanish colonies. Removing Spain’s King Ferdinand VII, Napoleon made his brother Joseph king of
Spain. Many creoles might have supported a Spanish king. However, they felt no loyalty to a king imposed by
the French. Creoles, recalling Locke’s idea of the consent of the governed, argued that when the real king
was removed, power shifted to the people. In 1810, rebellion broke out in several parts of Latin America.
Simon Bolivar’s native Venezuela declared its independence from Spain in 1811. But the struggle for
independence had only begun. Bolivar’s volunteer army of revolutionaries suffered numerous defeats. Twice
Bolivar had to go into exile. A turning point came in August 1819. Bolivar led over 2,000 soldiers on a daring
march through the Andes into what is now Colombia. Coming from this direction, he took the Spanish army
in Bogota completely by surprise and won a decisive victory. By 1821, Bolivar had won Venezuela’s
independence. He then marched south into Ecuador. In Ecuador, Bolivar finally met Jose de San Martin.
Together they would decide the future of the Latin American revolutionary movement.
Jose de San Martin
Though native to Argentina, Martin had spent most of his life
serving in the Spanish army in Europe. He returned to South
America following Napoleon’s conquest of Spain, leading
revolutionary forces to oust European armies from Argentina,
Chile, and finally, in 1824, Peru.
Jose de San Martin believed that the Spaniards must be removed from all of South America if any South
American nation was to be free. Bolivar began the struggle for independence in Venezuela in 1810. He then
went on to lead revolts in New Granada (Colombia) and Ecuador. By 1810, the forces of San Martin had
liberated Argentina from Spanish authority. In January 1817, San Martin led his forces over the Andes to
attack the Spanish in Chile. The journey was an amazing feat: 2/3 of the pack mules and horses died during
the trip. Soldiers suffered from lack of oxygen and severe cold while crossing mountain passes. The Andes
mountains were more than two miles above sea level.
The arrival of San Martin’s forces in Chile completely surprised the Spaniards. Spanish forces were badly
defeated at the Battle of Chacabuco on February 12, 1817. In 1821 San Martin moved on to Lima, Peru, the
center of Spanish authority. San Martin was convinced that he could not complete the liberation of Peru
alone. He welcomed the arrival of Simon Bolivar and his forces. Bolivar, the “Liberator of Venezuela,” took on
the task of crushing the last significant Spanish army at Ayacucho on December 9, 1824.
By the end of 1824, Peru, Uruguay, Paraguay, Colombia, Venezuela, Argentina, Bolivia, and Chile had all
become free of Spain. Earlier, in 1822, the prince regent of Brazil had declared Brazil’s independence from
Portugal. The Central American states had become independent in 1823. In 1838 and 1839, they divided into
five republics: Guatemala, El Salvador, Honduras, Costa Rica, and Nicaragua.
Miguel Hidalgo / Jose Maria Morelos
Miguel Hidalgo – “The Father of Mexico”; Roman Catholic Priest
who founded the Mexican Independence movement in 1810.
Hidalgo organized an army of mostly poor Mexicans which
succeeded in winning early victories but was defeated by a more
well-armed colonial army from Mexico City. Hidalgo was executed
by firing squad in 1811.
Jose Maria Morelos – A Catholic Priest and associate of Hidalgo,
Morelos replaced Hidalgo as the leader of the revolutionary
movement in Mexico. Morelos was defeated by a creole army led
by Augustin Iturbide in 1815. Ironically, 6 years later Iturbide led
Mexico to achieve its Independence from Spain in 1821.
Beginning in 1810, Mexico, too, experienced a revolt. The first real hero of Mexican independence was Miguel
Hidalgo. A parish priest, Hidalgo lived in a village about 100 miles from Mexico City. Hidalgo had studied the
French Revolution. He aroused the local Native Americans and mestizos to free themselves from the Spanish.
On September 16, 1810, Hidalgo led this ill-equipped army of thousands of Native Americans and mestizos in
an attack against the Spaniards. He was an inexperienced military leader, however, and his forces were soon
crushed. A military court sentenced Hidalgo to death. However, his memory lives on. In fact, September 16,
the first day of the uprising, is Mexico’s Independence Day. Events in Mexico took an unexpected turn in
1820, when a revolution in Spain put a liberal group in power there. Mexico’s creoles feared the loss of their
privileges in the Spanish-controlled colony. So they united in support of Mexico’s independence from Spain.
Closure Assignment #1
Answer the following questions based on what
you have learned from Chapter 24, Section 1:
1. Compare and contrast the leadership of the
South American revolutions to the leadership
of Mexico’s revolution.
2. Would creole revolutionaries tend to be
democratic or authoritarian leaders? Explain.
3. How were events in Europe related to the
revolutions in Latin America?
Conservatism – Political philosophy based on tradition and a belief in the value of social
stability which was supported by European leaders following the defeat of
Napoleon; Conservatives favor obedience to political authority, support
organized religion, and hate revolutions.
Eventually, the great powers adopted a principle of intervention. According to this principle, the great powers
had the right to send armies into countries where there were revolutions in order to restore legitimate
monarchs to their thrones. Refusing to accept the principle, Britain argued that the great powers should not
interfere in the internal affairs of other states. The other great powers, however, used military forces to crush
the revolutions in Spain and Italy, as well as to restore monarchs to their thrones.
Between 1815 and 1830, conservative governments throughout Europe worked to maintain the old order.
However, powerful forces of change – known as liberalism and nationalism – were also at work. Nationalism
was an even more powerful force for change in the 19th century than was liberalism. Nationalism arose when
people began to identify themselves as part of a community defined by a distinctive language, common
institutions, and customs. This community is called a nation. In earlier centuries, people’s loyalty went to a
king or to their town or region. In the 19th century, people began to feel that their chief loyalty was to the nation.
Conservatism is based on tradition and a belief in the value of social stability. Most conservatives at that time
favored obedience to political authority. They also believed that organized religion was crucial to keep order
in society. Conservatives hated revolutions and were unwilling to accept demands from people who wanted
either individual rights or representative governments. To maintain the new balance of power, Great Britain,
Russia, Prussia, and Austria (and later France) agreed to meet at times. The purpose of these conferences
was to take steps needed to maintain peace in Europe. These meetings came to be called the Concert of Europe.
Liberalism – Political philosophy based on Enlightenment ideas which argues
that people should be as free as possible from government.
Liberals had a common set of political beliefs. Chief among them was the protection of civil liberties, or the
basic rights of all people. These civil liberties included equality before the law and freedom of assembly,
speech, and the press. Liberals believed that all these freedoms should be guaranteed by a written document
such as the American Bill of Rights. Most liberals wanted religious toleration for all, as well as separation of
church and state. Liberals also demanded the right of peaceful opposition to the government. They believed
that a representative assembly (legislature) elected by qualified voters should make laws.
Many liberals, then, favored government ruled by a constitution, such as in a constitutional monarchy, in
which a constitution regulates a king. They believed that written constitutions would guarantee the rights
they sought to preserve. Liberals did not, however, believe in a democracy in which everyone had a right to
vote. They thought that the right to vote and hold office should be open only to men of property. Liberalism,
then, was tied to middle-class men, especially industrial middle-class men, who wanted voting rights for
themselves so they could share power with the landowning classes. The liberals feared mob rule, and had
little desire to let the lower class share the power.
The French monarchy was finally overthrown in 1848. A group of moderate and radical republicans set up a
provisional, or temporary, government. The republicans were people who wished France to be a republic – a
government in which leaders are elected. The provisional government called for the election of
representatives to a Constituent Assembly that would draw up a new constitution. Election was to be by a
universal male suffrage.
Closure Question #1: Why might liberals and radicals join together in a nationalist cause?
Radicalism – Political philosophy developed in the early 1800s which favors
drastic change to extend democracy to all people. Radicals
believed that governments should practice the ideals of the French
Revolution – liberty, equality, and brotherhood.
In the first half of the 19th century, nationalism found a strong ally in liberalism. Most liberals believed that
freedom could only be possible in people who ruled themselves. Each group of people should have its own
state. No state should attempt to dominate another state. The association with liberalism meant that
nationalism had a wider scope. Beginning in 1830, the forces of change – liberalism and nationalism – began
to break through the conservative domination of Europe. In France, liberals overthrew the Bourbon monarch
Charles X in 1830 and established a constitutional monarchy. Political support for the new monarch, Louis
Philippe, a cousin of Charles X, came from the upper-middle class.
In the same year, 1830, 3 more revolutions occurred. Nationalism was the chief force in all 3 of them.
Belgium, which had been annexed to the former Dutch Republic in 1815, rebelled and created an
Independent state. In Poland and Italy, which were both ruled by foreign powers, efforts to break free were
less successful. Russians crushed the Polish attempt to establish an independent Polish nation. Meanwhile
Austrian troops marched south and put down revolts in a number of Italian states.
The conservative order still dominated much of Europe as the midpoint of the 19th century approached.
However, the forces of liberalism and nationalism continued to grow. These forces of change erupted once
more in the revolutions of 1848. Revolution in France once again sparked revolution in other countries.
Severe economic problems beginning in 1846 brought untold hardship in France to the lower-middle class,
workers, and peasants. At the same time, members of the middle class clamored for the right to vote. The
government of Louis Philippe refused to make changes, and opposition grew.
Closure Question #1: Why might liberals and radicals join together in a nationalist cause?
Nationalism – The belief that people’s greatest loyalty should not be to a king or
an empire but to a nation of people who share a common culture and history.
Nationalism did not become a popular force for change until the French Revolution. From then on, nationalists came
to believe that each nationality should have its own government. Thus, the Germans, who were separated into many
principalities, wanted national unity in a German nation-state with one central government. Subject peoples, such as
the Hungarians, wanted the right to establish their own governments rather than be subject to the Austrian empire.
Nationalism was a threat to the existing political order. A united Germany, for example, would upset the balance of
power set up at the Congress of Vienna in 1815. At the same time, an independent Hungarian state would mean the
breakup of the Austrian Empire.
Great Britain managed to avoid the revolutionary upheavals of the first half of the 19 th century. In 1815, aristocratic
landowning classes, which dominated both houses of Parliament, governed Great Britain. In 1832, Parliament passed
a bill that increased the number of male voters. The new voters were chiefly members of the industrial middle class.
By giving the industrial middle class an interest in ruling, Britain avoided revolution in 1848. In the 1850s and 1860s,
Parliament continued to make social and political reforms that helped the country to remain stable. However, despite
reforms, Britain saw a rising Irish nationalist movement demanding increased Irish control over Irish internal affairs.
Another reason for Britain’s stability was its continuing economic growth. By 1850, real wages of workers rose
significantly, enabling the working classes to share the prosperity.
In France, events after the revolution of 1848 moved toward the restoration of the monarchy. In 1852, Louis-Napoleon returned to the people to ask for the restoration of the empire. In this plebiscite, 97% responded with a
yes vote. On December 2, 1852, Louis-Napoleon assumed the title of Napoleon III, Emperor of France. The
government of Napoleon III was clearly authoritarian. As chief of state, Napoleon III controlled the armed forces,
police and civil service. Only he could introduce legislation and declare war. The Legislative Corps gave an
appearance of representative government, because the members of the group were elected by universal male
suffrage for 6-year terms. However, they could neither initiate legislation nor affect the budget.
Closure Question #1: Why might liberals and radicals join together in a nationalist cause?
Nation-state – Government of a region by people who share a common culture
and history. Nation-states defend the territory and way of life of
the people, representing the nation to the rest of the world.
A multinational state is a collection of different peoples living in the same country. The Austrian Empire
included Germans, Czechs, Magyars (Hungarians), Slovaks, Romanians, Slovenes, Poles, Croats, Serbians,
Ruthenians (Ukranians), and Italians. Prague was a major city populated by the Czech peoples but ruled by
Austria; In 1848 Czechs attempted to revolt against Austria to establish an independent nation but were
defeated by the Austrians.
The Austrian Empire had many problems. Only the German-speaking Hapsburg dynasty held the empire
together. The Germans, though only a quarter of the population, played a leading role in governing the
Austrian Empire. In March 1848, demonstrations erupted in the major cities. To calm the demonstrators, the
Hapsburg court dismissed Metternich, the Austrian foreign minister, who fled to England. In Vienna,
revolutionary forces took control of the capital and demanded a liberal constitution. To appease the
revolutionaries, the government gave Hungary its own legislature. In Bohemia, the Czechs clamored for their own government.
Austrian officials had made concessions to appease the revolutionaries but were determined to reestablish
their control over the empire. In June 1848, Austrian military forces crushed the Czech rebels in Prague. By
the end of October, the rebels in Vienna had been defeated as well. With the help of a Russian army of
140,000 men, the Hungarian revolutionaries were finally subdued in 1849. The revolutions in the Austrian
Empire had failed.
In 1848, a revolt broke out against the Austrians in Lombardy and Venetia in northern Italy. Revolutionaries in other
Italian states also took up arms and sought to create liberal constitutions and a unified Italy. By 1849,
however, the Austrians had reestablished complete control over Lombardy and Venetia. The old order also
prevailed in the rest of Italy. Throughout Europe in 1848, popular revolts started upheavals that had led to
liberal constitutions and liberal governments. However, moderate liberals and more radical revolutionaries
were soon divided over their goals and so conservative rule was reestablished.
The Balkans – Geographic region along the eastern Mediterranean Sea which
includes all or part of present-day Greece, Albania, Bulgaria,
Romania, Turkey, and the former Yugoslavia. The entire region had
been controlled by the Ottoman Empire; however, beginning in
1821 nationalist movements in the Balkans sparked violence.
The first people to win self-rule during the early 1800s were the Greeks. For centuries, Greece had been part
of the Ottoman Empire. Greeks, however, had kept alive the memory of their ancient history and culture.
Spurred on by the nationalist spirit, they demanded independence and rebelled against the Ottoman Turks in
1821. The most powerful European governments opposed revolution. However, the cause of Greek
independence was popular with people around the world. Russians, for example, felt a connection to Greek
Orthodox Christians, who were ruled by the Muslim Ottomans. Educated Europeans and Americans loved and
respected ancient Greek culture.
Eventually, as popular support for Greece grew, the powerful nations of Europe took the side of the Greeks.
In 1827, a combined British, French, and Russian fleet destroyed the Ottoman fleet at the Battle of Navarino.
In 1830, Britain, France, and Russia signed a treaty guaranteeing an independent kingdom of Greece. By the
1830s, the old order, carefully arranged at the Congress of Vienna, was breaking down. Revolutionary zeal
swept across Europe. Liberals and nationalists throughout Europe were openly revolting against conservative
governments. Nationalist riots broke out against Dutch rule in the Belgian city of Brussels. In October 1830,
the Belgians declared their independence from Dutch control. In Italy, nationalists worked to unite the many
separate states on the Italian peninsula. Some were independent. Others were ruled by Austria, or by the
pope. Eventually, Prince Metternich sent Austrian troops to restore order in Italy.
Louis-Napoleon Bonaparte – Nephew of Napoleon Bonaparte who was elected President of the
French Second Republic in 1848. In 1852 he took the title of
Emperor Napoleon III with popular support. As France’s emperor,
Louis-Napoleon built railroads, encouraged industrialization, and
promoted ambitious public works programs. Gradually, as a result
of these changes, unemployment decreased and France experienced a period of prosperity.
In 1830, France’s King Charles X tried to stage a return to absolute monarchy. The attempt sparked riots that forced
Charles to flee to Great Britain. He was replaced by Louis-Philippe, who had long supported liberal reforms in France.
However, in 1848, after a reign of almost 18 years, Louis-Philippe fell from popular favor. Once again, a Paris mob
overturned a monarchy and established a republic. The provisional government in France also set up national
workshops to provide work for the unemployed. From March to June, the number of unemployed enrolled in the
national workshops rose from about 66,000 to almost 120,000. This emptied the treasury and frightened the
moderates, who reacted by closing the workshops on June 21st, 1848. The workers refused to accept this decision
and poured into the streets. In four days of bitter and bloody fighting, government forces crushed the working-class
revolt. Thousands were killed and thousands more were sent to the French prison colony of Algeria in northern Africa.
The new constitution, ratified on November 4, 1848, set up a republic called the Second Republic. The Second
Republic had a single legislature by universal male suffrage. A president, also chosen by universal male suffrage,
served for four years. In the elections for the presidency in December 1848, Charles Louis Napoleon Bonaparte
(called Louis-Napoleon), the nephew of the famous French ruler, won a resounding victory.
Closure Question #2: Why did some liberals disapprove of the way Louis-Napoleon ruled France after the uprisings of 1848?
Alexander II – Czar of Russia during the mid to late 1800s who made reforms to Russian
society, such as emancipation (freedom) for serfs and providing land for
peasants by buying it from landlords.
Nationalism, a major force in 19th century Europe, presented special problems for the Austrian Empire. That
was because the empire contained so many different ethnic groups, and many were campaigning for
independence. After the Hapsburg rulers crushed the revolutions of 1848 and 1849, they restored
centralized, autocratic government to the empire. Austria’s defeat at the hands of the Prussians in 1866,
however, forced the Austrians to make concessions to the fiercely nationalist Hungarians. The result was the
compromise of 1867, which created a dual monarchy of Austria-Hungary. Each of these two components had
its own constitution, its own legislature, its own government bureaucracy, and its own capital; Vienna for
Austria and Budapest for Hungary.
In 1856 the Russians suffered a humiliating defeat in the Crimean War. Even staunch conservatives realized
that Russia was falling hopelessly behind the western European powers. Serfdom, the largest problem in
czarist Russia, was a complicated issue that affected the economic, social, and political future of Russia. On
March 3, 1861, Czar Alexander II issued an emancipation edict, freeing all serfs in Russia. Alexander II
attempted other reforms as well, but he soon found that he could please no one. Reformers wanted more
changes and a faster pace for change. Conservatives thought that the czar was trying to destroy the basic
institutions of Russian society.
A group of radicals assassinated Alexander II in 1881. His son, Alexander III,
became the successor to the throne. Alexander III turned against reform and
returned to the old methods of repression.
Closure Question #3: Why did Alexander III of Russia turn against the
reforms of his father? (At least 1 sentence)
Closure Assignment #2
Answer the following questions based on what
you have learned from Chapter 24, Section 2:
1. Why might liberals and radicals join
together in a nationalist cause?
2. Why did some liberals disapprove of the
way Louis-Napoleon ruled France after
the uprisings of 1848?
3. Why did Alexander III of Russia turn
against the reforms of his father? (At
least 1 sentence)
Russification – The goal of the Romanov Dynasty beginning in the 1860s to force
Russian culture on all the ethnic groups within the Russian Empire.
School instruction was required to be entirely in Russian, even in
the primary grades, and conversion to the Eastern Orthodox
Church was encouraged. This policy actually strengthened ethnic
nationalist feelings and helped to disunify Russia.
During the 1800s, nationalism fueled efforts to build nation-states. Nationalists were not loyal to kings, but to
their people – to those who shared common bonds. Nationalists believed that people of a single “nationality”,
or ancestry, should unite under a single government. However, people who wanted to restore the old order
from before the French Revolution saw nationalism as a force for disunity. Gradually, authoritarian rulers
began to see that nationalism could also unify masses of people. They soon began to use nationalist feelings
for their own purposes. They built nation-states in areas where they remained firmly in control.
Three aging empires – The Austrian Empire of the Hapsburgs, the Russian Empire of the Romanovs, and the
Ottoman Empire of the Turks – contained a mixture of ethnic groups. Control of land and ethnic groups
moved back and forth between these empires, depending on victories or defeats in war and on royal
marriages. When nationalism emerged in the 19th century, ethnic unrest threatened and eventually toppled
these empires. In addition to the Russians themselves, the czar ruled over 22 million Ukrainians, 8 million
Poles, and smaller numbers of Lithuanians, Latvians, Estonians, Finns, Jews, Romanians, Georgians,
Armenians, Turks, and others. Each group had its own culture. The weakened czarist empire finally could not
withstand the double shock of World War I and the communist revolution. The last Romanov czar gave up
his power in 1917.
Closure Question #1: How can nationalism be both a unifying and a
disunifying force? (At least 1 sentence)
Camillo di Cavour
Prime minister to King Victor Emmanuel II of the Italian province
of Sardinia. A cunning statesman, Cavour used skillful diplomacy
and well-chosen alliances to gain control of northern Italy for
Sardinia. Through an alliance with Louis Napoleon of France in
1858, Sardinia succeeded in driving Austria from northern Italy.
Italian nationalists looked for leadership from the kingdom of Piedmont-Sardinia, the largest and most
powerful of the Italian states. The kingdom had adopted a liberal constitution in 1848. So, to the liberal
Italian middle classes, unification under Piedmont-Sardinia seemed a good plan. In 1852, Sardinia’s king,
Victor Emmanuel II, named Count Camillo di Cavour as his prime minister. Cavour was a cunning statesman
who worked tirelessly to expand Piedmont-Sardinia’s power. Using skillful diplomacy and well-chosen
alliances he set about gaining control of northern Italy for Sardinia.
Cavour realized that the greatest roadblock to annexing northern Italy was Austria. In 1858, the French
emperor Napoleon III agreed to help drive Austria out of the northern Italian provinces. Cavour then
provoked a war with the Austrians. A combined French-Sardinian army succeeded in taking all of northern
Italy, except Venetia. As Cavour was uniting northern Italy, he secretly started helping nationalist rebels in
southern Italy. In May 1860, a small army of Italian nationalists led by a bold and visionary soldier, Giuseppe
Garibaldi, captured Sicily. In battle, Garibaldi always wore a bright red shirt, as did his followers. As a result,
they became known as the Red Shirts. From Sicily, Garibaldi and his forces crossed to the Italian mainland
and marched north. Eventually, Garibaldi agreed to unite southern areas he had conquered with the kingdom
of Piedmont-Sardinia. Cavour arranged for King Victor Emmanuel II to meet Garibaldi in Naples. “The Red
One” willingly agreed to step aside and let the Sardinian king rule.
Giuseppe Garibaldi – Italian patriot who liberated Sicily and Naples from Bourbon rule,
then turned over control of Southern Italy to King Victor Emmanuel
II of Sardinia in 1860, helping to establish a unified, independent Italy.
Piedmont is a northern Italian state which, under the leadership of King Victor Emmanuel II, made an alliance with
France in 1859 to revolt against Austrian control, establishing itself as an Independent nation. In 1850, Austria was
still the dominant power on the Italian Peninsula. After the failure of the revolution of 1848, people began to look to
the northern Italian state of Piedmont for leadership in achieving the unification of Italy. The royal house of Savoy
ruled the Kingdom of Piedmont. Included in the kingdom were Piedmont, the island of Sardinia, Nice, and Savoy. The
ruler of the kingdom, beginning in 1849, was King Victor Emmanuel II.
The king named Camillo di Cavour his prime minister in 1852. Cavour was a dedicated political leader. As prime
minister, he pursued a policy of economic expansion to increase government revenues & enable the kingdom to equip
a large army. Cavour knew that Piedmont’s army was not strong enough to defeat the Austrians. So, he made an
alliance with the French emperor Louis-Napoleon. Cavour then provoked the Austrians into declaring war in 1859.
Following that conflict, a peace settlement gave Nice and Savoy to the French. Cavour had promised Nice and Savoy
to the French in return for making the alliance. Lombardy, which had been under Austrian control, was given to
Piedmont. Austria retained control of Venetia. Cavour’s success caused nationalists in other Italian states (Parma,
Modena, and Tuscany) to overthrow their governments & join their states to Piedmont.
Meanwhile, in southern Italy, a new leader of Italian unification had arisen. Giuseppe Garibaldi, a dedicated Italian
patriot, raised an army of a thousand volunteers. They were called Red Shirts because of the color of their uniforms.
A branch of the Bourbon dynasty ruled the Two Sicilies (Sicily and Naples), and a revolt had broken out in Sicily
against the king. Garibaldi’s forces landed in Sicily and, by the end of July 1860, controlled most of the island. In
August, Garibaldi and his forces crossed over to the mainland and began a victorious march up the Italian Peninsula.
Naples and the entire Kingdom of the Two Sicilies fell in early September. Garibaldi chose to turn over his conquests
to Piedmont. On March 17th, 1861, a new state of Italy was proclaimed under King Victor Emmanuel II. The task of
unification was not yet complete, however. Austria still had Venetia in the north; and Rome was under the control of
the pope, supported by French troops.
Junkers – Strongly conservative members of Prussia’s wealthy landowning
class who supported King Wilhelm I in his conflict with Prussian
parliament. The liberal parliament refused Wilhelm money for
reforms that would double the strength of the army.
Like Italy, Germany also achieved national unity in the mid-1800s. Beginning in 1815, 39 German states
formed a loose grouping called the German Confederation. The Austrian Empire dominated the
confederation. However, Prussia was ready to unify all the German states. The German Confederation was
composed of 39 Independent German States, including Austria and Prussia; In May 1848 representatives
from the separate German states held an assembly in Frankfurt to prepare a constitution for a united
Germany; ultimately, however, the movement failed to gain the support needed to unify Germany in the mid-19th century.
News of the 1848 revolution in France led to upheaval in other parts of Europe. The Congress of Vienna in
1815 had recognized the existence of 38 independent German states (called the German Confederation). Of
these, Austria and Prussia were the two greatest powers. The other states varied in size. In 1848, cries for
change led many German rulers to promise constitutions, a free press, jury trials, and other liberal reforms.
In May 1848, an all-German parliament called the Frankfurt Assembly, was held to fulfill a liberal and
nationalist dream – the preparation of a constitution for a new united Germany.
Closure Question #2: Why did Great Britain not join the revolutions
that spread through Europe in 1848? (At least 1 sentence)
Otto von Bismarck
Otto von Bismarck – Prime Minister of Prussia from 1862 to 1890;
Bismarck increased Prussia’s military strength and led a series of
successful military campaigns expanding Prussia’s borders, forming the German Empire in 1871.
Militarism is the glorification of and reliance on the military; During the mid-1800’s Prussia was well known
for its militarism. After the Frankfurt Assembly failed to achieve German unification in 1848 and 1849,
Germans looked to Prussia for leadership in the cause of German unification. In the course of the 19th
century, Prussia had become a strong and prosperous state. Its government was authoritarian. The Prussian
king had firm control over both the government and the army. Prussia was also known for its militarism. In
the 1860s, King William I tried to enlarge the Prussian army. When the Prussian legislature refused to levy
new taxes for the proposed military changes, William I appointed a new prime minister, Count Otto von Bismarck.
Bismarck has often been seen as the foremost 19th century practitioner of realpolitik – the “politics of
reality”, or politics based on practical matters rather than on theory or ethics. Bismarck openly voiced his
strong dislike of anyone who opposed him. After his appointment, Bismarck ignored the legislative opposition
to the military reforms. He argued instead that “Germany does not look to Prussia’s liberalism but for her
power.” Bismarck proceeded to collect taxes and strengthen the army. From 1862 to 1866, Bismarck
governed Prussia without approval of the parliament. In the meantime, he followed an active foreign policy,
which soon led to war. After defeating Denmark with Austrian help in 1864, Prussia gained control of the
duchies of Schleswig and Holstein. Bismarck then created friction with the Austrians and forced them into a
war on June 14, 1866. The Austrians, no match for the well-disciplined Prussian army, were defeated on July 3, 1866.
Realpolitik – “The politics of reality”; Term used to describe tough power politics
with no room for idealism. Otto von Bismarck used realpolitik to
establish himself as the de facto military dictator of Prussia and,
eventually, the unified German states.
Bismarck purposely stirred up border conflicts with Austria over Schleswig and Holstein. The tensions
provoked Austria into declaring war on Prussia in 1866. This conflict was known as the Seven Weeks’ War.
The Prussians used their superior training and equipment to win a devastating victory. They humiliated
Austria. The Austrians lost the region of Venetia, which was given to Italy. They had to accept Prussian
annexation of more German territory. With its victory in the Seven Weeks’ War, Prussia took control of
northern Germany. For the first time, the eastern and western parts of the Prussian kingdom were joined. In
1867, the remaining states of the north joined the North German confederation, which Prussia dominated.
By 1867, a few southern German states remained independent of Prussian control. The majority of southern
Germans were Catholics. Many in the region resisted domination by Protestant Prussia. However, Bismarck
felt he could win the support of southerners if they faced a threat from outside. He reasoned that a war with
France would rally the south. Bismarck was an expert at manufacturing “incidents” to gain his ends. For
example, he created the impression that the French ambassador had insulted the Prussian king. The French
reacted to Bismarck’s deception by declaring war on Prussia on July 19, 1870. The Prussian army
immediately poured into northern France. In September 1870, the Prussian army surrounded the main
French force at Sedan. Among the 83,000 French prisoners taken was Napoleon III himself. Parisians
withstood a German siege until hunger forced them to surrender.
Closure Question #3: Many liberals wanted government by elected
parliaments. How was Bismarck’s approach to achieving his goals different?
(At least 1 sentence)
Kaiser – “Emperor”, William I of Prussia was proclaimed the Kaiser of the
Second German Empire on January 18th, 1871. Under the leadership of
William I, Germany fought a successful war against France, known as the
Franco-Prussian War, and in 1871 gained the French territories of Alsace
and Lorraine. The loss of these territories left the French burning for
revenge against Germany.
Prussia organized the German states north of the Main River into the North German Confederation. The
southern German states, which were largely Catholic, feared Protestant Prussia. However, they also feared
France, their western neighbor. As a result, they agreed to sign military alliances with Prussia for protection
against France. Prussia now dominated all of northern Germany, and the growing power and military might
of Prussia worried France. Bismarck was aware that France would never be content with a united German
state to its east because of the potential threat to French security.
In 1870, Prussia and France became embroiled in a dispute over the candidacy of a relative of the Prussian
king for the throne of Spain. Taking advantage of the situation, Bismarck goaded the French into declaring
war on Prussia on July 19th, 1870. This conflict was called the Franco-Prussian War. The French proved to be
no match for the better led and better organized Prussian forces. The southern German states honored their
military alliances with Prussia and joined the war effort against the French. Prussian armies advanced into
France. At Sedan, on September 2, 1870, an entire French army and the French ruler, Napoleon III, were
captured. Paris finally surrendered on January 28, 1871. An official peace treaty was signed in May. France
had to pay 5 billion francs (about $1 billion) and give up the provinces of Alsace and Lorraine to the
new German state. Even before the war had ended, the southern German states had agreed to enter the
North German Confederation. On January 18, 1871, Bismarck and 600 German princes, nobles, and generals
filled the Hall of Mirrors in the palace of Versailles, 12 miles outside Paris. William I of Prussia was proclaimed Kaiser of the Second German Empire.
Closure Assignment #3
Answer the following questions based on what
you have learned from Chapter 24, Section 3:
1. How can nationalism be both a unifying and a
disunifying force? (At least 1 sentence)
2. Why did Great Britain not join the revolutions
that spread through Europe in 1848? (At least 1 sentence)
3. Many liberals wanted government by elected
parliaments. How was Bismarck’s approach to
achieving his goals different? (At least 1 sentence)
Romanticism – Intellectual movement of the late 18th and early 19th
centuries which emphasized feelings, emotion, and imagination as sources of knowing.
Ludwig van Beethoven was a musician and composer who bridged the gap between classical and romantic
music. The Enlightenment had stressed reason as the chief means for discovering truth. The romantics
emphasized feelings, emotion and imagination. Romantics believed that emotion and sentiment were only
understandable to the person experiencing them. In their novels, romantic writers created figures who were
often misunderstood and rejected by society but who continued to believe in their own worth through their
inner feelings. Romantics also valued individualism, the belief in the uniqueness of each person. Many
romantics rebelled against middle-class conventions. Male romantics grew long hair and beards and both
men and women wore outrageous clothes to express their individuality.
Many romantics had a passionate interest in the past ages, especially in the medieval era. They felt it had a
mystery and interest in the soul that their own industrial age did not. Romantic architecture revived medieval
styles and built castles, cathedrals, city halls, parliamentary buildings, and even railway stations in a style
called neo-Gothic. The British Houses of Parliament in London are a prime example of this architectural style.
Romantic artists shared at least two features. First, to them, all art was a reflection of the artist’s inner
feelings. A painting should mirror the artist’s vision of the world and be the instrument of the artist’s own
imagination. Second, romantic artists abandoned classical reason for warmth and emotion. Eugene Delacroix
was one of the most famous romantic painters from France. His paintings showed two chief characteristics: a
fascination with the exotic and a passion for color. His works reflect his belief that “a painting should be a
feast to the eye.”
Closure Question #1: How are the movements of romanticism and realism
alike and different?
Realism – The belief that the world should be viewed realistically; Realism began as
a political and scientific concept but, by the mid 19th century, came to
influence literature and art as well.
Charles Dickens was a British novelist who showed the realities of life for the poor in the early Industrial Age.
Oliver Twist and David Copperfield, written by Dickens, create a vivid picture of the brutal life of London’s
poor so effectively that they helped inspire reform. The literary realists of the mid-19th century rejected
romanticism. They wanted to write about ordinary characters from life, not romantic heroes in exotic
settings. They also tried to avoid emotional language by using precise description. They preferred novels to
poems. Many literary realists combined their interest in everyday life with an examination of social issues.
These artists expressed their social views through their characters.
The French author Gustave Flaubert, who was a leading novelist of the 1850s and 1860s, perfected the
realist novel. His work Madame Bovary presents a critical description of small-town life in France. In Great
Britain, Charles Dickens became a huge success with novels that showed the realities of life for the poor in
the early Industrial Age. In art, too, realism became dominant after 1850. Realist artists sought to show the
everyday life of ordinary people and the world of nature with photographic realism. The French painter
Gustave Courbet was the most famous artist of the realist school. He loved to portray scenes from everyday
life. His subjects were factory workers and peasants. “I have never seen either angels or goddesses, so I am
not interested in painting them,” Courbet once commented. There were those who objected to Courbet’s
“cult of ugliness” and who found such scenes of human misery scandalous. To Courbet, however, no subject
was too ordinary, too harsh, or too ugly.
Closure Question #1: How are the movements of romanticism and realism
alike and different?
Closure Question #2: How might a realist novel bring about changes in society?
Impressionism – Artistic movement in which artists try to show their impression of a
subject or a moment in time. Fascinated by light, impressionist
artists used pure, shimmering colors to capture a moment.
Louis Pasteur was a French biologist who proposed the germ theory of disease. Pasteur also developed a
method to eliminate bacteria in milk which is known as Pasteurization. Secularization is indifference to or
rejection of religion in the affairs of the world; As a result of scientific advances in the 19th century many
people became less devoted to religious faith. Like the visual arts, the literary arts were deeply affected by
romanticism and reflected a romantic interest in the past. Sir Walter Scott’s Ivanhoe, for example, a bestseller in the early 1800s, told of clashes between knights in medieval England. Many romantic writers chose
medieval subjects and created stories that expressed their strong nationalism. An attraction of the exotic
and unfamiliar gave rise to Gothic literature. Chilling examples are Mary Shelley’s Frankenstein in Britain and
Edgar Allan Poe’s short stories of horror in the United States.
The Scientific Revolution had created a modern, rational approach to the study of the natural world. For a
long time, only the educated elite understood its importance. With the Industrial Revolution, however, came
a heightened interest in scientific research. By the 1830s, new discoveries in science had led to many
practical benefits that affected all Europeans. Science came to have a greater and greater impact on people.
In biology, the Frenchman Louis Pasteur proposed the germ theory of disease, which was crucial to the
development of modern scientific medical practices. In chemistry, the Russian Dmitry Mendeleyev in the
1800s classified all the material elements then known on the basis of their atomic weights. In Great Britain,
Michael Faraday put together a primitive generator that laid the foundation for the use of electric current.
Dramatic material benefits such as these led Europeans to have a growing faith in science. This faith, in turn,
undermined the religious faith of many people. It is no accident that the 19th century was an age of
increasing secularization. For many people, truth was now to be found in science and the concrete material
existence of humans.
Closure Question #3: What was the goal of impressionist painters?
Closure Assignment #4
Answer the following questions based on what
you have learned from Chapter 24, Section 4:
1. How are the movements of romanticism and
realism alike and different?
2. How might a realist novel bring about changes
in society? Describe the ways by which this might happen.
3. What was the goal of impressionist painters?
Industrial Revolution – Term referring to the greatly increased output of machine-made
goods that began in England in the middle 1700s.
The assembly line is an efficient manufacturing method pioneered by American Henry Ford in 1913;
Assembly Line production places a product on a conveyor belt and has individuals at various stations along
the belt responsible to attach one specific part. Mass Production is a business practice of producing large
quantities of identical products which can be made quickly and cheaply.
By the 1880s, streetcars and subways powered by electricity had appeared in major European cities.
Electricity transformed the factory as well. Conveyor belts, cranes, and machines could all be powered by
electricity. With electric lights, factories could remain open 24 hours a day. The development of the internal-combustion engine, fired by oil and gasoline, provided a new source of power in transportation. This engine
gave rise to ocean liners with oil-fired engines, as well as to the airplane and the automobile. In 1903 Orville
and Wilbur Wright made the first flight in a fixed-wing plane at Kitty Hawk, North Carolina. In 1919 the first
regular passenger air service was established.
Industrial production grew at a rapid pace because of greatly increased sales of manufactured goods.
Europeans could afford to buy more consumer products for several reasons. Wages for workers increased
after 1870. In addition, prices for manufactured goods were lower because of reduced transportation costs.
One of the biggest reasons for more efficient production was the assembly line. In the cities, the first
department stores began to sell a new range of consumer goods. These goods – clocks, bicycles, electric
lights, and typewriters, for example – were made possible by the steel and electrical industries.
Series of laws passed by British parliament in the 1700s which required
landowners to fence off common lands. These laws forced many peasants
to move to towns, creating a labor supply for factories.
The Industrial Revolution began in Great Britain in the 1780s and took several decades to spread to other
Western nations. Several factors contributed to make Great Britain the starting point. First, an agrarian
revolution beginning in the 1700s changed agricultural practices. Expansion of farmland, good weather,
improved transportation, and new crops such as the potato dramatically increased the food supply. More
people could be fed at lower prices with less labor. Now even ordinary British families could use some of their
income to buy manufactured goods.
Second, with the increased food supply, the population grew. When Parliament passed enclosure movement
laws in the 1700s, landowners fenced off common lands. This forced many peasants to move to towns,
creating a labor supply for factories. The remaining farms were larger, more efficient, with increased crop
yields. Third, Britain had a ready supply of money, or capital, to invest in new machines and factories.
Entrepreneurs found new ways to make profits in a laissez-faire market economy, ruled by supply and
demand with little government control of industry. Fourth, Britain had plentiful natural resources. The
country’s rivers provided water power for the new factories. These waterways provided a means for
transporting raw materials and finished products. Britain also had abundant supplies of coal and iron ore,
essential in manufacturing processes.
Finally, a supply of markets gave British manufacturers a ready outlet for their goods. Britain had a vast
colonial empire, and British ships could transport goods anywhere in the world. Also, because of population
growth and cheaper food at home, domestic markets increased. A growing demand for cotton cloth led
British manufacturers to look for ways to increase production.
Closure Question #1: Was the revolution in agriculture necessary to the Industrial Revolution? Explain.
Improved agricultural process developed during the Industrial
Revolution. One year, for example, a farmer might plant a field
with wheat, which exhausted soil nutrients. The next year he
planted a root crop, such as turnips, to restore nutrients.
Livestock breeders improved their methods too. In the 1700s, for example, Robert Bakewell increased his
mutton (sheep meat) output by allowing only his best sheep to breed. Other farmers followed Bakewell's
lead. Between 1700 and 1786, the average weight for lambs climbed from 18 to 50 pounds. As food supplies
increased and living conditions improved, England’s population mushroomed. An increasing population
boosted the demand for food and goods such as cloth. As farmers lost their land to large enclosed farms,
many became factory workers.
By 1800, several major inventions had modernized the cotton industry. One invention led to another. In
1733, a machinist named John Kay made a shuttle that sped back and forth on wheels. This flying shuttle, a
boat-shaped piece of wood to which yarn was attached, doubled the work a weaver could do in a day.
Because spinners could not keep up with these speedy weavers, a cash prize attracted contestants to
produce a better spinning machine. Around 1764, a textile worker named James Hargreaves invented a
spinning wheel he named after his daughter. His spinning jenny allowed one spinner to work eight threads at
a time. At first, textile workers operated the flying shuttle and the spinning jenny by hand. Then, Richard
Arkwright invented the water frame in 1769. This machine used the waterpower from rapid streams to drive
spinning wheels. In 1779, Samuel Crompton combined features of the spinning jenny and the water frame to
produce the spinning mule. The spinning mule made thread that was stronger, finer, and more consistent
than earlier spinning machines. Run by waterpower, Edmund Cartwright’s power loom sped up weaving after
its invention in 1787.
Closure Question #1: Was the revolution in agriculture necessary to the Industrial Revolution? Explain.
The process of developing machine production of goods. England
led the way in Industrialization, largely as a result of natural
resources such as rivers for inland transportation, harbors from
which merchant ships set sail, water power and coal to fuel
machines, and iron ore to construct machines.
Puddling – Iron making process developed by Englishman Henry Cort which used coke, which was derived
from coal, to burn away impurities in iron ore. Manchester & Liverpool – In 1829 Manchester, a rich
cotton-manufacturing town, was connected with Liverpool, a thriving port, by railroad, further speeding the
production and sale of cotton cloth. As a result of the puddling process, the British iron industry boomed. In
1740, Britain had produced 17,000 tons of iron. After Cort’s process came into use in the 1780s, production
jumped to nearly 70,000 tons. In 1852, Britain produced almost 3 million tons – more iron than the rest of
the world combined. High-quality iron was used to build new machines, especially trains.
The factory was another important element in the Industrial Revolution. From its beginning, the factory
created a new labor system. Factory owners wanted to use their new machines constantly. So, workers were
forced to work in shifts to keep the machines producing at a steady rate. Early factory workers came from
rural areas where they were used to periods of inactivity. Factory owners wanted workers to work without
stopping. They disciplined workers to a system of regular hours and repetitive tasks. Anyone who came to
work late was fined or quickly fired for misconduct, especially for drunkenness. One early industrialist said
that his aim was “to make the men into machines that cannot err.” Discipline of factory workers, especially of
children, was often harsh. Children were often beaten with a rod or whipped to keep them at work.
In the 18th century, more efficient means of moving resources and goods developed. Railroads were
particularly important to the success of the Industrial Revolution. Richard Trevithick, an English engineer,
built the first steam locomotive. In 1804, Trevithick's locomotive ran on an industrial rail-line in Britain. It
pulled 10 tons of ore and 70 people 5 miles per hour. Better locomotives soon followed. In 1813, George
Stephenson built the Blucher, the first successful flanged-wheel locomotive. With its flanged wheels, the
Blucher ran on top of the rails instead of in sunken tracks.
Factors of Production
Resources needed to produce goods and services that the
Industrial Revolution required. These include land, labor, and capital.
The success of Stockton & Darlington, the first true railroad, encouraged investors to link by rail Manchester and
Liverpool. In 1829, the investors sponsored a competition to find the most suitable locomotive to do the job. They
selected the Rocket. The Rocket sped along at 16 miles per hour while pulling a 40 ton train. Within 20 years,
locomotives were able to reach 50 miles per hour. In 1840, Britain had almost 2,000 miles of railroads. In 1850, more
than 6,000 miles of railroad track crisscrossed much of that country. Railroad expansion caused a ripple effect in the
economy. Building railroads created new jobs for farm laborers and peasants. Less expensive transportation led to
lower priced goods, thus creating larger markets. More sales meant more factories and more machinery. Business
owners could reinvest their profits in new equipment, adding to the growth of the economy. This type of regular,
ongoing economic growth became a basic feature of the new industrial economy. The Industrial Revolution spread to
the rest of Europe at different times and speeds. First to be industrialized in continental Europe were Belgium,
France, and the German states. In these places, governments actively encouraged industrialization. For example,
governments provided funds to build roads, canals, and railroads. By 1850, a network of iron rails spread across much of Europe.
An Industrial Revolution also occurred in the United States. In 1800, 5 million people lived in the U.S., and 6 out of
every 7 American workers were farmers. No city had more than 100,000 people. By 1860, the population had grown
to 30 million people. Cities had also grown. Nine cities had populations over 100,000. Only 50% of American workers
were farmers. A large country, the U.S. needed a good transportation system to move goods across the nation.
Thousands of miles of roads and canals were built to link east and west. Robert Fulton built the first paddle-wheel
steamboat, the Clermont, in 1807. Most important in the development of an American transportation system was the
railroad. It began with fewer than 100 miles of track in 1830. By 1860, about 30,000 miles of railroad track covered
the U.S. The country became a single massive market for the manufactured goods of the Northeast. Labor for the
growing number of factories in the Northeast came chiefly from the farm population. Women and girls made up a
large majority of the workers in large textile (cotton and wool) factories.
Large buildings in which merchants housed machines. Wealthy
British textile merchants built their factories near waterways
because most of the early machines ran on waterpower.
Cottage Industry – The two-step process of manufacturing cotton cloth; first, spinners made cotton thread
from raw cotton; second, weavers wove the cotton into cloth. Prior to the 18th century this process was
carried out mostly by women in rural cottages. James Watt – Scottish engineer who, in 1782, made changes
that enabled steam engines to drive machinery which could spin and weave cotton, increasing cloth
production dramatically. As a result of Watt’s invention, cotton mills using steam engines were found all over
Britain. Because steam engines were fired by coal, not powered by water, they did not need to be located
near rivers. British cotton cloth production increased dramatically. In 1760, Britain had imported 2.5 million
pounds of raw cotton, most of it spun on machines. By 1840, 366 million pounds of cotton were imported. By
this time, cotton cloth was Britain’s most valuable product. Sold everywhere in the world, British cotton goods
were produced mainly in factories.
The steam engine was crucial to Britain’s Industrial Revolution. For fuel, the engine depended on coal, a
substance that seemed then to be unlimited in quantity. The success of the steam engine increased the need
for coal and led to an expansion in coal production. New processes using coal aided the transformation of
another industry – the iron industry. Britain’s natural resources included large supplies of iron ore. At the
beginning of the 18th century, the basic process of producing iron had changed little since the Middle Ages. A
better quality of iron was produced in the 1780’s when Henry Cort developed a process called puddling.
England’s cotton came from plantations in the American South in the 1790s. Removing seeds from the raw
cotton by hand was hard work. In 1793, an American inventor named Eli Whitney invented a machine to
speed the chore. His cotton gin multiplied the amount of cotton that could be cleaned. American cotton
production skyrocketed from 1.5 million pounds in 1790 to 85 million pounds in 1810.
Closure Question #2: Analyze the causes and effects of the Industrial Revolution.
(At least 2 causes and 2 effects)
Entrepreneur – An individual who establishes or invests in businesses using
capital in order to make profits. During the Industrial Revolution
entrepreneurs came to dominate the economy as governments supported
laissez-faire policies, avoiding regulation of business.
Capital is money which is invested in a business and used to buy land, natural resources, machines, tools,
advertising, and to pay workers. In the 18th century, Great Britain had surged way ahead in the production of
inexpensive cotton goods. The manufacture of cotton cloth was a two-step process. First, spinners made
cotton thread from raw cotton. Then, weavers wove the cotton thread into cloth on looms. In the 18th
century, individuals spun the thread and then wove the cloth in their rural cottages. This production was thus
called a cottage industry.
A series of technological advances in the 18th century made cottage industry inefficient. First, the invention of
the “flying shuttle” made weaving faster. Now, weavers needed more thread from spinners because they
could produce cloth at a faster rate. In 1764 James Hargreaves had invented a machine called the spinning
jenny, which met this need. Other inventions made similar contributions. The spinning process became much
faster. In fact, spinners produced thread faster than weavers could use it.
Another invention made it possible for the weaving of cloth to catch up with the spinning of thread. This was
a water-powered loom invented by Edmund Cartwright in 1787. It now became more efficient to bring
workers to the new machines and have them work in factories near streams and rivers, which were used to
power many of the early machines. The cotton industry became even more productive when the steam
engine was improved in the 1760s by James Watt, a Scottish engineer. In 1782, Watt made changes that
enabled the engine to drive machinery.
Closure Question #3: What effect did entrepreneurs have upon the Industrial Revolution?
Closure Assignment #5
Answer the following questions based on what
you have learned from Chapter 25, Section 1:
1. Was the revolution in agriculture necessary
to the Industrial Revolution? Explain.
2. Analyze the causes and effects of the
Industrial Revolution. (At least 2 causes
and 2 effects)
3. What effect did entrepreneurs have upon
the Industrial Revolution?
City building and the movement of people to cities. Between 1800
and 1850, the number of European cities boasting more than
100,000 inhabitants rose from 22 to 47, with most urban areas
doubling, and some even quadrupling, in population.
By the end of the 19th century, the new industrial world had led to the emergence of a mass society in which the
condition of the majority – the lower classes – was demanding some government attention. Governments now had to
consider how to appeal to the masses, rather than just to the wealthier citizens. Housing was one area of great
concern. Crowded quarters could easily spread disease. An even bigger threat to health was public sanitation. With
few jobs available in the countryside, people from rural areas migrated to cities to find work in the factories or, later,
in blue-collar industries. As a result of this vast migration, more and more people lived in cities. In the 1850s, urban
dwellers made up about 40% of the English population, 15% of France, 10% of Prussia (Prussia was the largest
German state), and 5% of Russia. By 1890, urban dwellers had increased to about 60% in England, 25% in France,
30% in Prussia, and 10% in Russia. In industrialized nations, cities grew tremendously. Between 1800 and 1900 the
population of London grew from 960,000 to 6,500,000.
Cities also grew faster in the second half of the 19th century because of improvements in public health and sanitation.
Thus, more people could survive living close together. Improvements came only after reformers in the 1840s urged
local governments to do something about the filthy living conditions that caused disease. For example, cholera had
ravaged Europe in the early 1830s and 1840s. Contaminated water in the overcrowded cities had spread the deadly
disease. On the advice of reformers, city governments created boards of health to improve housing quality. Medical
officers and building inspectors inspected dwellings for public health hazards. Building regulations required running
water and internal drainage systems for new buildings.
Closure Question #1: How did industrialization contribute to city growth?
Closure Question #2: How were class tensions
affected by the Industrial Revolution?
The new middle class transformed the social structure of Great Britain. In
the past, landowners and aristocrats had occupied the top position in British
society. With most of the wealth, they wielded the social and political power.
Now some factory owners, merchants, and bankers grew wealthier than the
landowners and aristocrats. Yet important social distinctions divided the two
wealthy classes. Landowners looked down on those who had made their
fortunes in the “vulgar” business world. Not until the late 1800s were rich
entrepreneurs considered the social equals of the lords of the countryside.
Gradually, a larger middle class – neither rich nor poor – emerged. The
upper middle class consisted of government employees, doctors, lawyers,
and managers of factories, mines, and shops. The lower middle class
included factory overseers and such skilled workers as toolmakers,
mechanical drafters, and printers. These people enjoyed a comfortable
standard of living. During the years 1800 to 1850, however, laborers, or the
working class, saw little improvement in their living and working conditions.
They watched their livelihoods disappear as machines replaced them. In
frustration, some smashed the machines they thought were putting them
out of work.
Social class made up of skilled workers, professionals, business
people, and wealthy farmers. As a result of the Industrial
Revolution the middle class grew dramatically, transforming
western Europe from a society dominated by aristocratic
landowners to one in which business people came to dominate the
social and political landscape.
The family was the central institution of middle-class life. With fewer children in the family, mothers could devote
more time to child care and domestic leisure. The middle-class family fostered an ideal of togetherness. The
Victorians created the family Christmas with its Yule log, tree, songs, and exchange of gifts. By the 1850s, the 4th of July
in the United States had changed from wild celebrations to family picnics. The lives of working class women were
different from those of their middle-class counterparts. Most working class women had to earn money to help support
their families. While their earnings averaged only a small percentage of their husbands’ earnings, the contributions of
working-class women made a big difference in the economic survival of their families. Daughters in working-class
families were expected to work until they married. After marriage, many women often did small jobs at home to
support the family.
For working-class women who worked away from the home, child care was a concern. Older siblings, other relatives,
or neighbors often provided child care while the mother worked. Some mothers sent their children to dame schools in
which other women provided in-home child care, as well as some basic literacy instruction. For the children of the
working classes, childhood was over by the age of 9 or 10. By this age, children often became apprentices or were
employed in odd jobs. Between 1890 and 1914, however, family patterns among the working class began to change.
Higher-paying jobs in heavy industry and improvements in the standard of living made it possible for working-class
families to depend on the income of husbands alone. By the early 20th century, some working-class mothers could
afford to stay at home, following the pattern of middle-class women. At the same time, working class families aspired
to buy new consumer products, such as sewing machines and cast-iron stoves.
Closure Question #3: The Industrial Revolution has been described as a mixed
blessing. Do you agree or disagree? Support your answer with specific facts.
Closure Assignment #6
Answer the following questions based on what
you have learned from Chapter 25, Section 2:
1. How did industrialization contribute to city growth?
2. How were class tensions affected by the Industrial Revolution?
3. The Industrial Revolution has been described as a mixed blessing. Do you agree or disagree? Support your answer with specific facts.
Closure Question #1: Read the quote from Lucy
Larcom. Do you think her feelings about working
in the mill are typical? Why or why not?
“Country girls were naturally independent, and
the feeling that at this new work the few hours
they had of everyday leisure were entirely their
own was a satisfaction to them. They preferred
it to going out as “hired help”. It was like a
young man’s pleasure in entering upon business
for himself. Girls had never tried that
experiment before, and they liked it.”
–Lucy Larcom, A New England Girlhood
Closure Question #2: Why was Britain unable to
keep industrial secrets away from other nations?
The United States possessed the same resources that allowed Britain to
mechanize its industries. America had fast-flowing rivers, rich deposits of
coal and iron ore, and a supply of laborers made up of farm workers and
immigrants. During the War of 1812, Britain blockaded the United States,
trying to keep it from engaging in international trade. This blockade forced
the young country to use its own resources to develop independent
industries. Those industries would manufacture the goods the United States
could no longer import.
As in Britain, industrialization in the United States began in the textile
industry. Eager to keep the secrets of industrialization to itself, Britain had
forbidden engineers, mechanics, and toolmakers to leave the country. In
1789, however, a young British mill worker named Samuel Slater emigrated
to the United States. There, Slater built a spinning machine from memory
and a partial design. The following year, Moses Brown opened the first
factory in the United States to house Slater’s machines in Pawtucket, Rhode
Island. But the Pawtucket factory mass-produced only one part of finished cloth – the thread.
Stock / Corporation
Stock – Certain rights of ownership of a business which were sold
by entrepreneurs in order to raise money. People who bought stock
became part owners of the business and shared in both the profits
and losses of the business.
Corporation – A business owned by stockholders who are not
personally responsible for the debts of the business. Corporations
were able to raise large amounts of capital needed to invest in
In 1813, Francis Cabot Lowell of Boston and four other investors revolutionized the American textile industry. They mechanized
every stage in the manufacture of cloth. Their weaving factory in Waltham, Massachusetts, earned them enough money to
fund a larger operation in another Massachusetts town. When Lowell died, the remaining partners named the town after him.
By the late 1820s, Lowell, Massachusetts, had become a booming manufacturing center and a model for other such towns.
Thousands of young single women flocked from their rural homes to work as mill girls in factory towns. There, they could make
higher wages and have some independence. However, to ensure proper behavior, they were watched closely inside and
outside the factory by their employers. The mill girls toiled more than 12 hours a day, 6 days a week, for decent wages. For
some, the mill job was an alternative to being a servant and was often the only other job open to them. Textiles led the way,
but clothing manufacture and shoemaking also underwent mechanization. Especially in the Northeast, skilled workers and
farmers had formerly worked at home. Now they labored in factories in towns and cities such as Waltham, Lowell, and Lawrence.
The Northeast experienced much industrial growth in the early 1800s. Nonetheless, the United States remained primarily
agricultural until the Civil War ended in 1865. During the last third of the 1800s, the country experienced a technological boom.
As in Britain, a number of causes contributed to the boom. These included a wealth of natural resources, among them oil, coal,
and iron; a burst of inventions, such as the electric light bulb, and the telephone; and a swelling urban population that
consumed new manufactured goods.
Closure Question #3: What was the most
significant effect of the Industrial Revolution?
Industrialization widened the wealth gap between industrialized
and nonindustrialized countries, even while it strengthened their
economic ties. To keep factories running and workers fed,
industrialized countries required a steady supply of raw materials
from less-developed lands. In turn, industrialized countries viewed
poor countries as markets for their manufactured products.
Britain led in exploiting its overseas colonies for resources and
markets. Soon other European countries, the United States, Russia,
and Japan followed Britain’s lead, seizing colonies for their
economic resources. Imperialism, the policy of extending one
country’s rule over many other lands, gave even more power and
wealth to these already wealthy nations. Imperialism was born out
of the cycle of industrialization, the need for resources to supply
the factories of Europe, and the development of new markets
around the world.
Closure Assignment #7
Answer the following questions based on what
you have learned from Chapter 25, Section 3:
1. Read the quote from Lucy Larcom. Do you
think her feelings about working in the mill are
typical? Why or why not?
2. Why was Britain unable to keep industrial
secrets away from other nations?
3. What was the most significant effect of the
Industrial Revolution?
“To let people do what they want”; Economic belief that
governments should not interrupt the free play of natural economic
forces by imposing regulations but instead should leave the economy alone.
Laissez-faire economics stemmed from French economic philosophers of the Enlightenment. They criticized
the idea that nations grow wealthy by placing heavy tariffs on foreign goods. In fact, they argued,
government regulations only interfered with the production of wealth. These philosophers believed that if
government allowed free trade – the flow of commerce in the world market without government regulation –
the economy would prosper. Adam Smith, a professor at the University of Glasgow, Scotland, defended the
idea of a free economy, or free markets, in his 1776 book The Wealth of Nations. According to Smith,
economic liberty guaranteed economic progress. As a result, government should not interfere. Smith’s
arguments rested on what he called the three natural laws of economics: 1) The law of self-interest – People work for their own good. 2) The law of competition – Competition forces people to make a better
product. 3) The law of supply and demand – Enough goods would be produced at the lowest possible price
to meet demand in a market economy.
Smith’s basic ideas were supported by British economists Thomas Malthus and David Ricardo. Like Smith,
they believed that natural laws governed economic life. Their important ideas were the foundation of laissez-faire capitalism. Capitalism is an economic system in which the factors of production are privately owned and
money is invested in business ventures to make a profit. These ideas also helped bring about the Industrial
Revolution. In An Essay on the Principle of Population, written in 1798, Thomas Malthus argued that
population tended to increase more rapidly than food supply. Without wars and epidemics to kill off the extra
people, most were destined to be poor and miserable. The predictions of Malthus seemed to be coming true
in the 1840s.
Adam Smith – Scottish economist and philosophe and supporter of
Laissez-Faire economics; Smith’s The Wealth of Nations, published
in 1776, argues that governments should not interfere in economic matters.
The Physiocrats and Scottish philosopher Adam Smith have been viewed as the founders of the modern
social science of economics. The Physiocrats, a French group, were interested in identifying the natural
economic laws that governed human society. They maintained that if individuals were free to pursue their
own economic self-interest, all society would benefit.
The best statement of laissez-faire was made in 1776 by Adam Smith. Like the Physiocrats, Smith believed
that the state should not interfere in economic matters. Indeed, Smith gave to government only 3 basic roles.
First, it should protect society from invasion (the function of the army). Second, the government should
defend citizens from injustice (the function of the police). And finally, it should keep up certain public works
that private individuals alone could not afford – roads and canals, for example – but which are necessary for
social interaction and trade.
“No society can surely be happy, of which the far greater part of the members are poor and miserable.”
Someone reading this quote might think it originated with an American patriot or a French revolutionary.
However, it actually came from Adam Smith, widely regarded as “the father of capitalism”. Besides being the
architect of the laissez-faire doctrine of government noninterference with commerce, and an opponent of
heavy government taxation, Smith was also an outspoken advocate for ethical standards in society. His
friends included Voltaire, Benjamin Franklin, and David Hume, three of the late 18th century's most influential thinkers.
An economic system based on industrial production which created
two new social classes – the industrial middle class and working
class – made up of people involved in factory labor.
In the United States, factory workers sometimes sought entire families, including children, to work in their factories. One
advertisement in the town of Utica, New York, read: “Wanted: A few sober and industrious families of at least 5 children each,
over the age of 8 years, are wanted at the cotton factory in Whitestown. Widows with large families would do well to attend this
notice.” European population stood at an estimated 140 million by 1750. By 1850, the population had almost doubled to 266
million. One reason death rates declined was better-fed people were more resistant to disease. Famine, with the exception of the
Irish potato famine, seemed to have disappeared from Western Europe. Many thought population growth led to economic growth.
In 1798, the economist Thomas Malthus published An Essay on the Principle of Population about poverty and population growth.
According to his theory, when there is an increase in the food supply, the population tends to increase too fast for the food supply
to keep up, leading to famine, disease, and war.
Famine and poverty were 2 factors in global migration and urbanization. Almost a
million people died during the Irish potato famine, and poverty led a million more to
migrate to the Americas. The enclosure laws forced farmers to migrate from the
countryside looking for work. Industrialization also spurred urbanization as large
numbers of people migrated to cities to work in factories. In 1800, Great Britain had
one major city, London, with a population of about 1 million. 6 cities had populations
between 50,000 and 100,000. By 1850, London’s population had swelled to about 2.5
million. 9 cities had populations over 100,000 and 18 cities had populations between
50,000 and 100,000. Also, over 50% of the population lived in towns and cities.
Closure Question #1 Summarize the population growth of Great Britain’s cities by
using a chart containing the following information: a) London’s Population in 1800 &
1850; b) # of cities with population over 100,000 in 1800 & 1850; c) # of cities with
population between 50,000 & 100,000 in 1800 & 1850.
Philosophy introduced by English philosopher Jeremy Bentham in
the late 1700s. Bentham argued that people should judge ideas,
institutions, and actions on the basis of their utility, or usefulness.
He believed that the government should try to promote the
greatest good for the greatest number of people, while each individual
should be free to pursue his or her own advantage without
interference from the government.
John Stuart Mill, a philosopher and economist, led the utilitarian movement in the 1800s. Mill came to
question unregulated capitalism. He believed it was wrong that workers should lead depraved lives that
sometimes bordered on starvation. Mill wished to help ordinary working people with policies that would lead
to a more equal division of profits. He also favored a cooperative system of agriculture and women’s rights,
including the right to vote. Mill called for the government to do away with great differences in wealth.
Utilitarians also pushed for reform in the legal and prison systems and in education.
Other reformers took an even more active approach. Shocked by the misery and poverty of the working
class, a British factory owner named Robert Owen improved working conditions for his employees. Near his
cotton mill in New Lanark, Scotland, Owen built houses, which he rented at low rates. He prohibited children
under ten from working in the mills and provided free schooling. Then, in 1824, he traveled to the United
States. He founded a cooperative community called New Harmony in Indiana, in 1825. He intended this
community to be a utopia, or perfect living place. New Harmony lasted only three years but inspired the
founding of other communities.
Economic system in which society, usually in the form of the
government, owns and controls some means of production, such as
factories & utilities. Socialists believe that this system would allow
wealth to be distributed more equally to everyone.
Robert Owen was a British utopian socialist in the early 1800s; Owen believed that humans would show their
natural goodness if they lived in a cooperative environment and created communities in England and the
United States based on socialist ideas. The Industrial Revolution created a working class that faced wretched
working conditions. Work hours ranged from 12 to 16 hours a day, 6 days a week. There was no security of
employment and no minimum wage. Conditions in coal mines and cotton mills were especially harsh. Coal
miners faced the danger of cave-ins, explosions, and gas fumes, which left workers with deformed bodies
and ruined lungs. Cotton mill workers worked 14-hour days, locked up in 80 to 84 degree heat.
The transition to factory work was not easy. Although workers’ lives eventually improved, they suffered
terribly during the early period of industrialization. Their family life was disrupted, they were separated from
the countryside, their hours were long, and their pay was low. Some reformers opposed such a destructive
capitalistic system and advocated socialism. Early socialists wrote books about the ideal society that might be
created. In this hypothetical society, workers could use their abilities and everyone’s needs would be met.
Later socialists said these were impractical dreams. Karl Marx contemptuously labeled the earlier reformers utopian socialists.
German socialist who wrote The Communist Manifesto in 1848; Marx
blamed capitalism for the horrible conditions suffered by the lower class
and argued that only a classless society in which all people had equal
possessions could be free from conflict. Marx believed that the
“Bourgeoisie” (Middle Class) acted as the oppressors of the “Proletariat” (Working Class).
Marx believed that all of world history was a “history of class struggles.” According to Marx, oppressor and oppressed have
always “stood in constant opposition to one another.” One group – the oppressors – owned the means of production, such as
land, raw materials, money, and so forth. This gave them the power to control government and society. The other group, who
owned nothing and who depended on the owners for the means of production, was the oppressed. In the industrial societies of
Marx’s day, the class struggle continued. Around him, Marx believed he saw a society that was “more and more splitting up
into two great hostile camps, into two great classes directly facing each other: Bourgeoisie and Proletariat.
Marx predicted that the struggle between the two groups would finally lead to an open revolution. The proletariat would
violently overthrow the bourgeoisie. After the victory, the proletariat would form a dictatorship to organize the means of
production. However, since the proletariat victory would essentially abolish the economic differences that create separate social
classes, Marx believed that the final revolution would ultimately produce a classless society. The state itself, which had been a
tool of the bourgeoisie, would wither away.
Closure Question #2: Describe why Marx’s ideas would have been appealing
to the working class. (At least 1 sentence)
Western View of Marxism
A form of complete socialism in which the means of production –
all land, mines, factories, railroads, and business – would be owned
by the people and private property would cease to exist. The
establishment of pure communism was the end goal of Karl Marx’s
philosophy, as he believed that only in such a system would the
true equality of all men and women be established.
Marx believed that the capitalist system, which produced the Industrial Revolution, would eventually destroy
itself in the following way. Factories would drive small artisans out of business, leaving a small number of
manufacturers to control all the wealth. The large proletariat would revolt, seize the factories and mills from
the capitalists, and produce what society needed. Workers, sharing in the profits, would bring about
economic equality for all people. The workers would control the government in a “dictatorship of the
proletariat”. After a period of cooperative living and education, the state or government would wither away
as a classless society developed. Marx called this final phase pure communism.
Published in 1848, The Communist Manifesto produced few short-term results. Though widespread revolts
shook Europe during 1848 and 1849, European leaders eventually put down the uprisings. Only after the turn of
the century did the fiery Marxist pamphlet produce explosive results. In the 1900s, Marxism inspired
revolutionaries such as Russia’s Lenin, China’s Mao Zedong, and Cuba’s Fidel Castro. These leaders adapted
Marx’s beliefs to their own specific situations and needs. In The Communist Manifesto, Marx and Engels
stated their belief that economic forces alone dominated society. Time has shown, however, that religion,
nationalism, ethnic loyalties, and a desire for democratic reforms may be as strong influences on history as
economic forces. In addition, the gap between the rich and the poor within the industrialized countries failed
to widen in the way that Marx and Engels predicted, mostly because of the various reforms enacted by governments.
Union / Strike
Union – Voluntary labor associations established by workers to
press for better working conditions and higher pay. The union
movement underwent slow, painful growth in Industrialized
nations, with governments generally viewing them as a threat to
social order and stability.
Strike – An organized refusal to work. When factory owners
refused the demands of a union workers could choose to go on
strike, cutting off the owners’ labor supply and, by association,
income. However, often governments and owners intervened to
break-up strikes by hiring replacement workers or, in some cases,
physically attacking strikers.
Eventually reformers and unions forced political leaders to look into the abuses caused by industrialization.
In both Great Britain and the United States, new laws reformed some of the worst abuses of industrialization.
In the 1820s and 1830s, for example, Parliament began investigating child labor and working conditions in
factories and mines. As a result of its findings, Parliament passed the Factory Act of 1833. The new law made
it illegal to hire children under 9 years old. Children from the ages of 9 to 12 could not work more than 8
hours a day. Young people from 13 to 17 could not work more than 12 hours. In 1842, the Mines Act
prevented women and children from working underground.
Closure Question #3: What were the main problems faced by the unions
during the 1800s and how did they overcome them?
Closure Assignment #8
Answer the following questions based on what you have
learned from Chapter 25, Section 4:
Closure Question #1 Summarize the population growth
of Great Britain’s cities by using a chart containing the
following information: a) London’s Population in 1800
& 1850; b) # of cities with population over 100,000 in
1800 & 1850; c) # of cities with population between
50,000 & 100,000 in 1800 & 1850.
Describe why Marx’s ideas would have been appealing
to the working class. (At least 1 sentence)
What were the main problems faced by the unions
during the 1800s and how did they overcome them? |
Algebra Explained in 5 Minutes
Consider this problem, "what number, when added to 5, gives the result 21".
Instead of a sentence, this problem can be written much shorter and clearer as an equation, like this: 5+x=21,
where x denotes the number we are trying to find.
Of course, we could also write it as x+5=21 and this is exactly the same equation. Or we could write 21=x+5 which is of course the same thing.
If we manage to find x we say that we've "solved" the equation. Can we solve this equation? Well, we could guess a few numbers for x and try them out. Does x=9 work? Let's see, 5+9=14, so x=9 is not a solution. After a few tries we get the solution, which is x=16.
Guessing a solution is perfectly fine, but it's very time consuming, especially for more complex equations. Of course, we could program a high speed computer to guess solutions and try them out ultra fast until we finally hit on the right solution. And for some very tough equations this is indeed the method used. But this method has a huge flaw.. if it fails to find a solution it does not mean the equation has no solution. That's because even the fastest computer can only make a limited number of tries.. and the actual solution may be something we never get around to trying.
So, coming back to our equation 5+x=21 we should ask if there is a foolproof method that's guaranteed to find the solution. The answer is yes, and it's all about the = sign. Once you truly understand this simple sign solving the equation is easy.
So what does this sign really mean? It means the "object" on the left of the sign is the same exact object as that on the right. They are the same thing.. exactly the same thing. They are the same exact mathematical object but just written in different ways. So there's really only one object!
OK, so our equation says that 5+x is exactly the same object as 21. So, if I do something to 5+x and then I do the same thing to 21 the results will still be equal. Cool. So let's subtract 5 from 5+x to get the result x. Now do the same exact thing to the other side, I'll subtract 5 from 21 to get the result 16. But these two results must be the same, so I can write them as equal to each other, that is x=16.
Bingo, we've solved the equation without any guessing!
Also, I'm not sure if you noticed this, but we just did some basic Algebra. Don't let Algebra intimidate you, it's just the art of manipulating equations until you get what you want!
Let's look at a slightly more complicated example..
To solve it we want to isolate x on one side and get all the other stuff over to the other side. Here's a method I use. It's exactly the same technique as above, but it's faster and easier to handle. Or at least I think so, and I've used it over the years to do massive amounts of algebra!
First move the 2 over to the other side. It was adding, so when it moves over it subtracts, like this..
Now move the 3 over. It was multiplying, so when it moves over it divides, like this..
This technique is quite general and can be used for any equation. But notice that the order in which you do things is important. For example, you need to get the 2 over to the other side before you can handle the 3.
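Since the original example equation isn't shown here, let's work the same technique on a made-up equation of the same shape, say 3x+2=11. First move the 2 over; it was adding, so it subtracts: 3x=11-2, that is 3x=9. Now move the 3 over; it was multiplying, so it divides: x=9/3, so x=3. Check it: 3(3)+2=11, so we've solved it without any guessing.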
Content written and posted by Ken Abbott.
Write a Java program to find the first non-repeated character in a string. Given an input string, we have to write Java code that finds the first non-repeated character in the string.
For example –
i) Input string – java
Output – j (j is the first non-repeating character in a string)
ii) Input string – web rewrite
Output – b (b is the first non-repeating character in a string)
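A common way to solve this, sketched below in Java, is to count character frequencies in one pass and then return the first character whose count is 1 (the class and method names here are illustrative, not from the original article):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FirstNonRepeated {

    // Returns the first character that appears exactly once, or null if none exists.
    static Character firstNonRepeated(String s) {
        // LinkedHashMap preserves insertion order, so the first entry with count 1
        // is also the first non-repeated character in the string.
        Map<Character, Integer> counts = new LinkedHashMap<>();
        for (char c : s.toCharArray()) {
            counts.merge(c, 1, Integer::sum);
        }
        for (Map.Entry<Character, Integer> e : counts.entrySet()) {
            if (e.getValue() == 1) {
                return e.getKey();
            }
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(firstNonRepeated("java"));        // j
        System.out.println(firstNonRepeated("web rewrite")); // b
    }
}
```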
Write a program to check whether two strings are anagrams of each other.
What is an Anagram ?
Two strings are said to be anagrams of each other if they contain the same characters and only the order of the characters differs. In other words, both strings must contain exactly the same letters with exactly the same frequency.
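One simple approach, sketched here in Java under the assumption that characters are compared exactly as given (no case folding or whitespace stripping), is to sort both strings and compare the results:

```java
import java.util.Arrays;

public class AnagramCheck {

    // Two strings are anagrams if sorting their characters yields the same sequence.
    static boolean areAnagrams(String a, String b) {
        if (a.length() != b.length()) {
            return false; // different lengths can never be anagrams
        }
        char[] x = a.toCharArray();
        char[] y = b.toCharArray();
        Arrays.sort(x);
        Arrays.sort(y);
        return Arrays.equals(x, y);
    }

    public static void main(String[] args) {
        System.out.println(areAnagrams("listen", "silent")); // true
        System.out.println(areAnagrams("hello", "world"));   // false
    }
}
```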
How to check if a number is a power of 2. To understand this question, let's take some examples.
Input – 16 – 16 is a power of 2 (2^4).
Input – 15 – 15 is not a power of 2.
Input – 32 – 32 is a power of 2 (2^5).
We can use multiple approaches to check whether a number is a power of 2 or not.
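One of the shortest approaches relies on the fact that a positive power of two has exactly one bit set, so n & (n - 1) is zero. A minimal Java sketch of that bit trick:

```java
public class PowerOfTwo {

    // A positive power of two has exactly one bit set,
    // so clearing its lowest set bit (n & (n - 1)) leaves zero.
    static boolean isPowerOfTwo(int n) {
        return n > 0 && (n & (n - 1)) == 0;
    }

    public static void main(String[] args) {
        System.out.println(isPowerOfTwo(16)); // true  (2^4)
        System.out.println(isPowerOfTwo(15)); // false
        System.out.println(isPowerOfTwo(32)); // true  (2^5)
    }
}
```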
Write a script to reverse a string in PHP without using the strrev() method. In PHP, we can reverse a string easily using the strrev() method. But think about how you would reverse a string without using the inbuilt strrev() method.
Write a program to find the maximum subarray sum in an array. Given an array of N elements, find the maximum possible sum of a contiguous subarray. The array can contain both positive and negative values.
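One well-known single-pass solution is Kadane's algorithm; a minimal Java sketch (it assumes the array is non-empty):

```java
public class MaxSubarraySum {

    // Kadane's algorithm: at each index keep the best sum of a subarray
    // ending there, and track the best sum seen overall.
    static int maxSubarraySum(int[] a) {
        int bestEndingHere = a[0];
        int bestOverall = a[0];
        for (int i = 1; i < a.length; i++) {
            bestEndingHere = Math.max(a[i], bestEndingHere + a[i]);
            bestOverall = Math.max(bestOverall, bestEndingHere);
        }
        return bestOverall;
    }

    public static void main(String[] args) {
        int[] a = {-2, 1, -3, 4, -1, 2, 1, -5, 4};
        System.out.println(maxSubarraySum(a)); // 6, from the subarray {4, -1, 2, 1}
    }
}
```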
Write a program to delete a node at the Nth position from a linked list. Given a linked list, we have to write a method to delete the node at the Nth position.
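A minimal Java sketch of one way to do this for a singly linked list, counting positions from 1 (the Node class and method names are illustrative):

```java
public class LinkedListDeleteNth {

    static class Node {
        int data;
        Node next;
        Node(int data) { this.data = data; }
    }

    // Deletes the node at position n (1-based) and returns the new head.
    static Node deleteNth(Node head, int n) {
        if (head == null) return null;
        if (n == 1) return head.next;           // deleting the head node
        Node prev = head;
        for (int i = 1; i < n - 1 && prev.next != null; i++) {
            prev = prev.next;                   // walk to the node before position n
        }
        if (prev.next != null) {
            prev.next = prev.next.next;         // unlink the nth node
        }
        return head;
    }

    public static void main(String[] args) {
        Node head = new Node(10);
        head.next = new Node(20);
        head.next.next = new Node(30);
        head = deleteNth(head, 2);              // list becomes 10 -> 30
        for (Node cur = head; cur != null; cur = cur.next) {
            System.out.println(cur.data);
        }
    }
}
```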
Program to delete a complete linked list
Recursion vs Iteration. What's the difference between recursion and iteration? Recursion and iteration are two different programming approaches. In some cases recursion is best suited, and in other cases an iterative style of programming is better.
In programming, a repeated set of instructions can be handled with either a recursive or an iterative approach in your code. So which approach should you choose, and why? Let's talk about recursion vs iteration.
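As a small illustration of the difference, here is the same factorial computation written both ways in Java; the recursive version mirrors the mathematical definition, while the iterative version uses a loop and constant stack space:

```java
public class FactorialTwoWays {

    // Recursive: follows the definition n! = n * (n - 1)!, with 0! = 1.
    static long factorialRecursive(int n) {
        return (n <= 1) ? 1 : n * factorialRecursive(n - 1);
    }

    // Iterative: accumulates the product in a loop, no extra call stack.
    static long factorialIterative(int n) {
        long result = 1;
        for (int i = 2; i <= n; i++) {
            result *= i;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(factorialRecursive(5)); // 120
        System.out.println(factorialIterative(5)); // 120
    }
}
```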
Implementation of Binary Search using Recursion.
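A minimal Java sketch of recursive binary search (it assumes the input array is sorted in ascending order):

```java
public class RecursiveBinarySearch {

    // Returns the index of key in the sorted array a, or -1 if it is absent.
    static int binarySearch(int[] a, int key, int low, int high) {
        if (low > high) {
            return -1;                    // empty range: key not present
        }
        int mid = low + (high - low) / 2; // avoids overflow of (low + high)
        if (a[mid] == key) {
            return mid;
        } else if (a[mid] < key) {
            return binarySearch(a, key, mid + 1, high); // search the right half
        } else {
            return binarySearch(a, key, low, mid - 1);  // search the left half
        }
    }

    public static void main(String[] args) {
        int[] a = {2, 5, 8, 12, 16, 23, 38};
        System.out.println(binarySearch(a, 23, 0, a.length - 1)); // 5
        System.out.println(binarySearch(a, 7, 0, a.length - 1));  // -1
    }
}
```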
Write a program to implement a queue using an array. In this tutorial, you are going to learn about the queue data structure and its implementation using an array in C, C++, and Java. In my previous posts, I have explained the stack and linked list data structures.
Queue Data Structure
In the queue data structure, an element is inserted at one end, called the rear, and deleted at the other end, called the front. Compare this with the stack data structure, in which insertion and deletion are allowed only at one end. The queue data structure is also called FIFO (First In, First Out).
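A minimal fixed-capacity circular queue in Java; the class and method names are illustrative, and a production version would handle overflow and resizing differently:

```java
public class ArrayQueue {
    private final int[] items;
    private int front = 0; // index of the next element to remove
    private int rear = 0;  // index where the next element will be inserted
    private int size = 0;  // current number of elements

    ArrayQueue(int capacity) {
        items = new int[capacity];
    }

    // Insert at the rear; throws if the queue is full.
    void enqueue(int value) {
        if (size == items.length) throw new IllegalStateException("queue is full");
        items[rear] = value;
        rear = (rear + 1) % items.length; // wrap around (circular buffer)
        size++;
    }

    // Remove from the front; throws if the queue is empty.
    int dequeue() {
        if (size == 0) throw new IllegalStateException("queue is empty");
        int value = items[front];
        front = (front + 1) % items.length;
        size--;
        return value;
    }

    public static void main(String[] args) {
        ArrayQueue q = new ArrayQueue(3);
        q.enqueue(1);
        q.enqueue(2);
        q.enqueue(3);
        System.out.println(q.dequeue()); // 1 (first in, first out)
        System.out.println(q.dequeue()); // 2
    }
}
```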
A stack is a very important data structure in computer science; it works on the principle of last-in, first-out (LIFO). The element inserted last is the first element to be popped. In this tutorial, you will learn about the stack data structure and how to implement a stack in PHP.
Write a Java program to check whether a number is prime. Given an input integer, we have to write efficient code to check whether the number is prime or not.
Before writing the program, let's quickly understand what a prime number is.
A prime number is a number greater than 1 that has no positive divisors other than 1 and itself.
For example – 3, 7, and 13 are prime numbers, as each is divisible only by 1 and itself. Similarly, 19, 29, etc. are also prime numbers.
6 is not a prime number as it’s divisible by 1, 2, 3 and 6.
2 is the only even prime number.
How to check whether a number is prime or not in Java |
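A common efficient check only tests divisors up to the square root of n; a minimal Java sketch:

```java
public class PrimeCheck {

    // A number n > 1 is prime if no integer from 2 up to sqrt(n) divides it.
    static boolean isPrime(int n) {
        if (n <= 1) return false;
        if (n <= 3) return true;          // 2 and 3 are prime
        if (n % 2 == 0) return false;     // even numbers greater than 2 are not prime
        for (int i = 3; (long) i * i <= n; i += 2) {
            if (n % i == 0) return false; // found a divisor, so not prime
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isPrime(13)); // true
        System.out.println(isPrime(6));  // false
        System.out.println(isPrime(2));  // true
    }
}
```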
Introduction: A true proportion is an equation that states that two ratios are equal. Then have students who worked on the Challenge Problem share their work, and get feedback from the class on the graphs. Do you describe the form of the formula for a proportional relationship? Everything You'll Have Covered: Direct variation is a type of proportionality relation between two varying quantities.
When would you set up a proportion and solve for a single value? Do you describe the form of the formula for a proportional relationship?
Since the units for each ratio are the same, you can express the proportion without the units. When using this type of proportion, it is important that the numerators represent the same situation – in the example, 40 ounces for 10 servings – and the denominators represent the same situation, 84 ounces for 21 servings.
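As a quick arithmetic check with the numbers above (assuming both containers are mixed to the same recipe): 40 ounces ÷ 10 servings = 4 ounces per serving, and 84 ounces ÷ 21 servings = 4 ounces per serving. The two ratios are equal, so 40/10 = 84/21 is a true proportion.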
As students present their solutions, make connections between different solutions to the same problem.
Write the key points on a poster so that students can refer back to them throughout the unit. Solve Proportion Problems – A Possible Summary: Writing a formula using the constant of proportionality to represent a proportional relationship between two quantities is an efficient and general way to find values of interest.
Below is an example that shows the steps of determining whether a proportion is true or false. Describe the form of the formula for a proportion problem. Using proportions can help you solve problems such as increasing a recipe to feed a larger crowd of people, creating a design with certain consistent features, or enlarging or reducing an image to scale.
Problem Solving Involving Ratio and Proportion
Notice that the equation has a ratio on each side of the equal sign.
She wants to compare them. The length of an object, measured in feet, varies directly with its length, measured in inches; the constant of proportionality is equal to the constant ratio of the two measurements.
For example, Juanita has two different-sized containers of lemonade mix.
Discuss why using a formula with the constant of proportionality in it might be more efficient than setting up and solving proportions when solving a group of problems with the same underlying proportional relationship. Decide whether two ratios form a proportional relationship.
This is different from the condition imposed by direct variation in that the quantities themselves have a constant ratio.
Show Hint: As your classmates present, ask questions such as the following. Show the three different methods to use for finding a solution to a problem involving proportional relationships. You can set up a proportion to determine the length of the enlarged photo.
If the graph is a straight line, but does not pass through the origin, then the relationship it represents cannot be a direct variation.
What do the words mean? She could set up a proportion to compare the number of ounces in each container to the number of servings of lemonade that can be made from each container.
Ratio, Rate, and Proportion
Directly varying quantities are commonly represented by statements, graphs, or tables.
Identify direct variations given graphs, tables, or statements.
Task 7 – Everything You'll Have Covered: Direct variation is a type of proportionality relation between two varying quantities.
Common Core State Standards Math
How is the unit rate shown by the graph? Do you explain why using a formula with a constant of proportionality might be more efficient than setting up and solving proportions when solving a group of problems with the same underlying proportional relationship?
Juanita could also have set up the proportion to compare the ratios of the container sizes to the number of servings of each container.
Problem Solving Involving Ratio and Proportion
Make Connections (Lesson Guide): Highlight the usefulness of each approach, and invite students to make connections between the approaches as they share their work. Each ratio compares the same units, inches and feet, and the ratios are equivalent because the units are consistent.
Solve problems involving direct variation and proportional change. The circumference of a circle is always π times its diameter; therefore, circumference and diameter vary directly. Formative Assessment – Summary of the Math: Graphs can also be used to represent direct variation, in which case the graph must be a straight line and pass through the origin.
Work Time Reflection: Write a reflection about the ideas discussed in class today. Do you show how the unit rate and the constant of proportionality are connected?
Do you explain what each part of the formula represents? Solve Proportion Problems: Write a summary about the different ways to solve proportion problems. If you want the shorter edge of the enlarged photo to measure 10 inches, how long does the photo have to be for the image to scale correctly?
Proportional relationships show up in many areas of mathematical problem solving, such as commissions.
The constant k is called the constant of variation.
I will be expected to solve problems involving scale drawings. Challenge Problem: Graph the formula that you wrote when you used Karen's method to solve the marble problem.
Presentation: Describe each method you used to solve the marble problem.
Check your summary.
The Josephson effect is the phenomenon of supercurrent—i.e. a current that flows indefinitely long without any voltage applied—across a device known as a Josephson junction (JJ), which consists of two superconductors coupled by a weak link. The weak link can consist of a thin insulating barrier (known as a superconductor–insulator–superconductor junction, or S-I-S), a short section of non-superconducting metal (S-N-S), or a physical constriction that weakens the superconductivity at the point of contact (S-s-S).
The Josephson effect is an example of a macroscopic quantum phenomenon. It is named after the British physicist Brian David Josephson, who predicted in 1962 the mathematical relationships for the current and voltage across the weak link. The DC Josephson effect had been seen in experiments prior to 1962, but had been attributed to "super-shorts" or breaches in the insulating barrier leading to the direct conduction of electrons between the superconductors. The first paper to claim the discovery of Josephson's effect, and to make the requisite experimental checks, was that of Philip Anderson and John Rowell. These authors were awarded patents on the effects, which were never enforced but never challenged.
Before Josephson's prediction, it was only known that normal (i.e. non-superconducting) electrons can flow through an insulating barrier, by means of quantum tunneling. Josephson was the first to predict the tunneling of superconducting Cooper pairs. For this work, Josephson received the Nobel Prize in Physics in 1973. Josephson junctions have important applications in quantum-mechanical circuits, such as SQUIDs, superconducting qubits, and RSFQ digital electronics. The NIST standard for one volt is achieved by an array of 19,000 Josephson junctions in series.
Types of Josephson junction include the π Josephson junction, φ Josephson junction, long Josephson junction, and superconducting tunnel junction. A "Dayem bridge" is a thin-film variant of the Josephson junction in which the weak link consists of a superconducting wire with dimensions on the scale of a few micrometres or less. The Josephson junction count of a device is used as a benchmark for its complexity. The Josephson effect has found wide usage, for example in the following areas:
- SQUIDs, or superconducting quantum interference devices, are very sensitive magnetometers that operate via the Josephson effect. They are widely used in science and engineering.
- In precision metrology, the Josephson effect provides an exactly reproducible conversion between frequency and voltage. Since the frequency is already defined precisely and practically by the caesium standard, the Josephson effect is used, for most practical purposes, to give the standard representation of a volt, the Josephson voltage standard. However, BIPM has not changed the official SI unit definition.
- Single-electron transistors are often constructed of superconducting materials, allowing use to be made of the Josephson effect to achieve novel effects. The resulting device is called a "superconducting single-electron transistor."
- The Josephson effect is also used for the most precise measurements of elementary charge in terms of the Josephson constant and von Klitzing constant which is related to the quantum Hall effect.
- RSFQ digital electronics is based on shunted Josephson junctions. In this case, the junction switching event is associated with the emission of one magnetic flux quantum that carries the digital information: the absence of switching is equivalent to 0, while one switching event carries a 1.
- Josephson junctions are integral in superconducting quantum computing as qubits, such as in a flux qubit or other schemes where the phase and charge act as the conjugate variables.
- Superconducting tunnel junction detectors (STJs) may become a viable replacement for CCDs (charge-coupled devices) for use in astronomy and astrophysics in a few years. These devices are effective across a wide spectrum from ultraviolet to infrared, and also in x-rays. The technology has been tried out on the William Herschel Telescope in the SCAM instrument.
- Quiterons and similar superconducting switching devices.
- The Josephson effect has also been observed in SHeQUIDs, the superfluid helium analog of a dc-SQUID.
The basic equations governing the dynamics of the Josephson effect are
- ∂φ/∂t = 2eV(t)/ħ (superconducting phase evolution equation)
- I(t) = Ic sin(φ(t)) (Josephson or weak-link current-phase relation)
where V(t) and I(t) are the voltage and current across the Josephson junction, φ(t) is the "phase difference" across the junction (i.e., the difference in phase factor, or equivalently, argument, between the Ginzburg–Landau complex order parameter of the two superconductors composing the junction), and Ic is a constant, the "critical current" of the junction. The critical current is an important phenomenological parameter of the device that can be affected by temperature as well as by an applied magnetic field. The physical constant Φ0 = h/(2e) is the magnetic flux quantum, the inverse of which is the Josephson constant KJ = 2e/h.
The three main effects predicted by Josephson follow from these relations:
The DC Josephson effect
The DC Josephson effect is a direct current crossing the insulator in the absence of any external electromagnetic field, owing to tunneling. This DC Josephson current is proportional to the sine of the phase difference across the insulator, I = Ic sin(φ), and may take values between −Ic and +Ic.
The AC Josephson effect
With a fixed voltage VDC across the junction, the phase will vary linearly with time and the current will be an AC current with amplitude Ic and frequency KJ·VDC = 2eVDC/h. The complete expression for the current becomes I(t) = Ic sin(φ0 + (2e/ħ)VDC·t).
This means a Josephson junction can act as a perfect voltage-to-frequency converter.
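To put a number on this conversion, here is a minimal Python sketch of the relation f = 2eV/h; the function name and the 1 µV input are illustrative choices, not from the source.

```python
# Minimal sketch: the AC Josephson relation f = 2eV/h turns a DC voltage
# into an oscillation frequency (~483.6 THz per volt).
from scipy.constants import e, h  # elementary charge, Planck constant

def josephson_frequency(v_dc):
    """Oscillation frequency in Hz for a DC voltage v_dc in volts."""
    return 2 * e * v_dc / h

print(josephson_frequency(1e-6) / 1e6, "MHz")  # 1 uV -> ~483.6 MHz
```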
The inverse AC Josephson effect
If the phase takes the form φ(t) = φ0 + nωt + a sin(ωt), the voltage and current will be
- V(t) = (ħ/2e)(nω + aω cos ωt), and I(t) = Ic Σm Jm(a) sin(φ0 + (n + m)ωt), where Jm denotes the m-th Bessel function of the first kind.
The DC components will then be
- VDC = n(ħ/2e)ω, and IDC = Ic J−n(a) sin(φ0).
Hence, for distinct AC voltages, the junction may carry a DC current and the junction acts like a perfect frequency-to-voltage converter.
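As a numerical companion, this minimal Python sketch evaluates the DC component given above, IDC = Ic J−n(a) sin(φ0); the junction parameters are illustrative, not from a real device.

```python
# Minimal sketch: DC current at the n-th step for the phase ansatz above,
# I_dc = Ic * J_{-n}(a) * sin(phi0). All parameter values are illustrative.
import math
from scipy.special import jv  # Bessel function of the first kind

def dc_current(i_c, n, a, phi0):
    return i_c * jv(-n, a) * math.sin(phi0)

print(dc_current(i_c=1e-6, n=1, a=0.5, phi0=math.pi / 2))  # amps
```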
If the macroscopic wave functions in superconductors 1 and 2 are given by ψ1 = √n1·e^(iφ1) and ψ2 = √n2·e^(iφ2), where n1, n2 are the pair densities and φ1, φ2 the phases,
then the Josephson phase is defined by φ = φ2 − φ1.
The Josephson energy is the potential energy accumulated in a Josephson junction when a supercurrent flows through it. One can think of a Josephson junction as a non-linear inductance which accumulates (magnetic field) energy when a current passes through it. In contrast to real inductance, no magnetic field is created by a supercurrent in a Josephson junction — the accumulated energy is the Josephson energy.
For the simplest case, the current-phase relation (CPR) is given by the first Josephson relation:
Is = Ic sin(φ)
where Is is the supercurrent flowing through the junction, Ic is the critical current, and φ is the Josephson phase. Imagine that initially, at time t = 0, the junction was in the ground state φ = 0 and finally, at time t, the junction has the phase φ. The work done on the junction (so the junction energy is increased by)
E = ∫ Is·V dt = ∫ Ic sin(φ)·(ħ/2e)(dφ/dt) dt = (ħIc/2e)(1 − cos φ) = EJ(1 − cos φ)
Here EJ = ħIc/(2e) = Φ0Ic/(2π) sets the characteristic scale of the Josephson energy, and (1 − cos φ) sets its dependence on the phase φ. The energy U(φ) = EJ(1 − cos φ) accumulated inside the junction depends only on the current state of the junction, but not on history or velocities, i.e. it is a potential energy. Note that U(φ) has a minimum equal to zero for the ground states φ = 2πn, where n is any integer.
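A minimal Python sketch of the Josephson energy relation above; the critical current value is illustrative.

```python
# Minimal sketch: Josephson energy U(phi) = E_J * (1 - cos(phi)),
# with E_J = Phi_0 * Ic / (2*pi). Ic below is an illustrative value.
import math
from scipy.constants import h, e

PHI_0 = h / (2 * e)  # magnetic flux quantum, ~2.068e-15 Wb

def josephson_energy(i_c, phi):
    e_j = PHI_0 * i_c / (2 * math.pi)
    return e_j * (1 - math.cos(phi))

print(josephson_energy(i_c=1e-6, phi=math.pi))  # maximum value, 2 * E_J
```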
Imagine that the Josephson phase across the junction is φ0, and the supercurrent flowing through the junction is
I0 = Ic sin(φ0)
(This is the same equation as above, except now we will look at small variations in the current and the phase around the values I0 and φ0.)
Imagine that we add a little extra current (direct or alternating) δI through the junction, and want to see how it reacts. The phase across the junction changes to become φ0 + δφ. One can write:
I0 + δI = Ic sin(φ0 + δφ)
Assuming that δφ is small, we make a Taylor expansion of the right-hand side to arrive at
δI ≈ Ic cos(φ0)·δφ
The voltage across the junction (we use the 2nd Josephson relation) is
V = (ħ/2e)·d(δφ)/dt = (ħ/(2e·Ic cos φ0))·d(δI)/dt
If we compare this expression with the expression for the voltage across a conventional inductance
V = L·dI/dt
we can define the so-called Josephson inductance
LJ(φ0) = Φ0/(2π·Ic·cos φ0)
One can see that this inductance is not constant, but depends on the phase φ0 across the junction. The typical value is given by LJ0 = Φ0/(2πIc) and is determined only by the critical current Ic. Note that, according to the definition, the Josephson inductance can even become infinite or negative if cos φ0 ≤ 0.
One can also calculate the change in Josephson energy
ΔE = EJ[cos φ0 − cos(φ0 + δφ)]
Making a Taylor expansion for small δφ, we get
ΔE ≈ EJ sin(φ0)·δφ = (Φ0/2π)·I0·δφ
If we now compare this with the expression for the increase of the inductance energy, ΔE = L·I·δI, we again get the same expression for LJ.
Note that although a Josephson junction behaves like an inductance, there is no associated magnetic field; the corresponding energy is hidden inside the junction. The Josephson inductance is also known as a kinetic inductance: the behaviour is derived from the kinetic energy of the charge carriers, not energy in a magnetic field.
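A minimal Python sketch of this phase dependence (with an illustrative critical current), showing how L_J grows without bound as φ0 approaches π/2:

```python
# Minimal sketch: Josephson inductance L_J = Phi_0 / (2*pi*Ic*cos(phi0)).
# Ic is illustrative; note the divergence as phi0 -> pi/2.
import math
from scipy.constants import h, e

PHI_0 = h / (2 * e)

def josephson_inductance(i_c, phi0):
    return PHI_0 / (2 * math.pi * i_c * math.cos(phi0))

for phi0 in (0.0, 0.25 * math.pi, 0.45 * math.pi):
    print(round(phi0, 3), josephson_inductance(1e-6, phi0), "H")
```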
Josephson penetration depth
The Josephson penetration depth characterizes the typical length on which an externally applied magnetic field penetrates into a long Josephson junction. It is usually denoted λJ and is given by the following expression (in SI):
λJ = √(Φ0/(2π·μ0·d′·jc))
where Φ0 is the magnetic flux quantum, jc is the critical current density, and d′ characterizes the effective inductive thickness of the junction:
d′ = dI + λ1·tanh(d1/(2λ1)) + λ2·tanh(d2/(2λ2))
where dI is the thickness of the Josephson barrier (usually insulator), d1 and d2 are the thicknesses of the superconducting electrodes, and λ1 and λ2 are their London penetration depths.
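A minimal Python sketch of this expression; the device numbers are illustrative order-of-magnitude values for a niobium-like junction, not measured data.

```python
# Minimal sketch: Josephson penetration depth
# lambda_J = sqrt(Phi_0 / (2*pi*mu_0*d'*j_c)). The device numbers below
# are illustrative order-of-magnitude values, not measured data.
import math
from scipy.constants import h, e, mu_0

PHI_0 = h / (2 * e)

def lambda_j(j_c, d_i, d1, d2, lam1, lam2):
    """j_c in A/m^2; thicknesses and London depths in metres."""
    d_eff = (d_i + lam1 * math.tanh(d1 / (2 * lam1))
                 + lam2 * math.tanh(d2 / (2 * lam2)))
    return math.sqrt(PHI_0 / (2 * math.pi * mu_0 * d_eff * j_c))

# Nb-like junction: j_c = 1e7 A/m^2, 2 nm barrier, 100 nm electrodes,
# 90 nm London depths -> lambda_J of order 10 micrometres.
print(lambda_j(1e7, 2e-9, 100e-9, 100e-9, 90e-9, 90e-9))
```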
- B. D. Josephson (1962). "Possible new effects in superconductive tunnelling". Phys. Lett. 1 (7): 251–253. doi:10.1016/0031-9163(62)91369-0.
- B. D. Josephson (1974). "The discovery of tunnelling supercurrents". Rev. Mod. Phys. 46 (2): 251–254. Bibcode:1974RvMP...46..251J. doi:10.1103/RevModPhys.46.251.
- Josephson, Brian D. (December 12, 1973). "The Discovery of Tunneling Supercurrents (Nobel Lecture)" (PDF).
- P. W. Anderson; J. M. Rowell (1963). "Probable Observation of the Josephson Tunnel Effect". Phys. Rev. Lett. 10: 230. Bibcode:1963PhRvL..10..230A. doi:10.1103/PhysRevLett.10.230.
- The Nobel prize in physics 1973, accessed 8-18-11
- Steven Strogatz, Sync: The Emerging Science of Spontaneous Order, Hyperion, 2003.
- P. W. Anderson; A. H. Dayem (1964). "Radio-frequency effects in superconducting thin film bridges". Phys. Rev. Lett. 13 (6): 195. doi:10.1103/PhysRevLett.13.195.
- Dawe, Richard (28 October 1998). "SQUIDs: A Technical Report - Part 3: SQUIDs" (website). http://rich.phekda.org. Retrieved 2011-04-21.
- International Bureau of Weights and Measures (BIPM), SI brochure, section 2.1.: SI base units, section 2.1.1: Definitions, accessed 22 June 2015
- Practical realization of units for electrical quantities (SI brochure, Appendix 2). BIPM, [last updated: 20 February 2007], accessed 22 June 2015.
- T. A. Fulton; P. L. Gammel; D. J. Bishop; L. N. Dunkleberger; G. J. Dolan (1989). "Observation of Combined Josephson and Charging Effects in Small Tunnel Junction Circuits". Phys. Rev. Lett. 63 (12): 1307–1310. Bibcode:1989PhRvL..63.1307F. doi:10.1103/PhysRevLett.63.1307. PMID 10040529.
- V. Bouchiat; D. Vion; P. Joyez; D. Esteve; M. H. Devoret (1998). "Quantum coherence with a single Cooper pair". Physica Scripta T. 76: 165. Bibcode:1998PhST...76..165B. doi:10.1238/Physica.Topical.076a00165.
- Physics Today, Superfluid helium interferometers, Y. Sato and R. Packard, October 2012, page 31
- Barone, A.; Paterno, G. (1982). Physics and Applications of the Josephson Effect. New York: John Wiley & Sons. ISBN 0-471-01469-9.
- Michael Tinkham, Introduction to superconductivity, Courier Corporation, 1986
Have you ever left a bottle of water out in the hot sun for a few hours and heard a slight "hissing" noise when you opened it? This is caused by a principle called vapor pressure. In chemistry, vapor pressure is the pressure that is exerted on the walls of a sealed container when a substance in it evaporates (converts to a gas). To find the vapor pressure at a given temperature, use the Clausius-Clapeyron equation: ln(P1/P2) = (ΔHvap/R)((1/T2) - (1/T1)). You could also use Raoult's Law to find the vapor pressure: Psolution=PsolventXsolvent.
Method 1 of 3:
Using the Clausius-Clapeyron Equation
1. Write the Clausius-Clapeyron equation. The formula used for calculating vapor pressure given a change in the vapor pressure over time is known as the Clausius-Clapeyron equation (named for physicists Rudolf Clausius and Benoît Paul Émile Clapeyron.) This is the formula you'll use to solve the most common sorts of vapor pressure problems you'll find in physics and chemistry classes. The formula looks like this: ln(P1/P2) = (ΔHvap/R)((1/T2) - (1/T1)). In this formula, the variables refer to:
- ΔHvap: The enthalpy of vaporization of the liquid. This can usually be found in a table at the back of chemistry textbooks.
- R: The ideal gas constant, 8.314 J/(K × Mol).
- T1: The temperature at which the vapor pressure is known (or the starting temperature.)
- T2: The temperature at which the vapor pressure is to be found (or the final temperature.)
- P1 and P2: The vapor pressures at the temperatures T1 and T2, respectively.
2. Plug in the variables you know. The Clausius-Clapeyron equation looks tricky because it has so many different variables, but it's actually not very difficult when you have the right information. The most basic vapor pressure problems will give you two temperature values and a pressure value or two pressure values and a temperature value — once you have these, solving is a piece of cake.
- For example, let's say that we're told that we have a container full of liquid at 295 K whose vapor pressure is 1 atmosphere (atm). Our question is: What is the vapor pressure at 393 K? We have two temperature values and a pressure, so we can solve for the other pressure value with the Clausius-Clapeyron equation. Plugging in our variables, we get ln(1/P2) = (ΔHvap/R)((1/393) - (1/295)).
- Note that, for Clausius-Clapeyron equations, you must always use Kelvin temperature values. You can use any pressure values as long as they are the same for both P1 and P2.
3. Plug in your constants. The Clausius-Clapeyron equation contains two constants: R and ΔHvap. R is always equal to 8.314 J/(K × Mol). ΔHvap (the enthalpy of vaporization), however, depends on the substance whose vapor pressure you are examining. As noted above, you can usually find the ΔHvap values for a huge variety of substances in the back of chemistry or physics textbooks, or else online (like, for instance, here.)
- In our example, let's say that our liquid is pure liquid water. If we look in a table of ΔHvap values, we can find that the ΔHvap is roughly 40.65 kJ/mol. Since our value for R uses joules rather than kilojoules, we convert this to 40,650 J/mol.
- Plugging our constants in to our equation, we get ln(1/P2) = (40,650/8.314)((1/393) - (1/295)).
4. Solve the equation. Once you have all of your variables in the equation plugged in except for the one you are solving for, proceed to solve the equation according to the rules of ordinary algebra.
- The only difficult part of solving our equation (ln(1/P2) = (40,650/8.314)((1/393) - (1/295))) is dealing with the natural log (ln). To cancel out a natural log, simply use both sides of the equation as the exponent for the mathematical constant e. In other words, ln(x) = 2 → e^(ln(x)) = e^2 → x = e^2.
- Now, let's solve our equation:
- ln(1/P2) = (40,650/8.314)((1/393) - (1/295))
- ln(1/P2) = (4,889.34)(-0.00084)
- ln(1/P2) = -4.107
- (1/P2) = e^(-4.107)
- 1/P2 = 0.0165
- P2 = 0.0165^-1 = 60.76 atm. This makes sense — in a sealed container, increasing the temperature by almost 100 degrees (to almost 20 degrees over the boiling point of water) will create lots of vapor, increasing the pressure greatly.
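This arithmetic is easy to check in code. Below is a minimal Python sketch of the same calculation; kept at full precision it lands near 62 atm, slightly above the 60.76 atm obtained above with rounded intermediate values.

```python
# Minimal sketch: solve ln(P1/P2) = (dHvap/R) * (1/T2 - 1/T1) for P2.
import math

R = 8.314                        # J/(K*mol)
dH_vap = 40650.0                 # J/mol, water
P1, T1, T2 = 1.0, 295.0, 393.0   # atm, K, K

ln_ratio = (dH_vap / R) * (1 / T2 - 1 / T1)  # equals ln(P1/P2)
P2 = P1 / math.exp(ln_ratio)
print(round(P2, 1), "atm")  # ~62.4 atm at full precision
```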
Method 2 of 3:
Finding Vapor Pressure with Dissolved Solutions
1. Write Raoult's Law. In real life, it's rare to work with a single pure liquid — usually, we deal with liquids that are mixtures of several different component substances. Some of the most common of these mixtures are created by dissolving a small amount of a certain chemical called a solute in a large amount of a chemical called a solvent to create a solution. In these cases, it's useful to know an equation called Raoult's Law (named for physicist François-Marie Raoult), which looks like this: Psolution=PsolventXsolvent. In this formula, the variables refer to:
- Psolution: The vapor pressure of the entire solution (all of the component parts combined)
- Psolvent: The vapor pressure of the solvent
- Xsolvent: The mole fraction of the solvent.
- Don't worry if you don't know terms like "mole fraction" — we'll explain these in the next few steps.
2. Identify the solvent and solute in your solution. Before you calculate the vapor pressure of a mixed liquid, you need to identify the substances with which you are working. As a reminder, a solution is formed when a solute is dissolved in a solvent — the chemical that dissolves is always the solute and the chemical that does the dissolving is always the solvent.
- Let's work through a simple example in this section to illustrate the concepts we're discussing. For our example, let's say that we want to find the vapor pressure of simple syrup. Traditionally, simple syrup is one part sugar dissolved in one part water, so we'll say that sugar is our solute and water is our solvent.
- Note that the chemical formula for sucrose (table sugar) is C12H22O11. This will be important soon.
3. Find the temperature of the solution. As we saw in the Clausius-Clapeyron section above, a liquid's temperature will affect its vapor pressure. In general, the higher the temperature, the greater the vapor pressure — as the temperature increases, more of the liquid will evaporate and form vapor, increasing the pressure in the container.
- In our example, let's say that the simple syrup's current temperature is 298 K (about 25 C).
4. Find the solvent's vapor pressure. Chemical reference materials usually have vapor pressure values for many common substances and compounds, but these pressure values are usually only for when the substance is at 25 C/298 K or at its boiling point. If your solution is at one of these temperatures, you can use the reference value, but if not, you'll need to find the vapor pressure at its current temperature.
- The Clausius-Clapeyron equation can help here; use the reference vapor pressure and 298 K (25 C) for P1 and T1 respectively.
- In our example, our mixture is at 25 C, so we can use our easy reference tables. We find that water at 25 C has a vapor pressure of 23.8 mm Hg.
5. Find the mole fraction of your solvent. The last thing we need to do before we can solve is to find the mole fraction of our solvent. Finding mole fractions is easy: just convert your components to moles, then find what percentage of the total number of moles in the substance each component occupies. In other words, each component's mole fraction equals (moles of component)/(total number of moles in the substance.)
- Let's say that our recipe for simple syrup uses 1 liter (L) of water and 1 liter of sucrose (sugar.) In this case, we'll need to find the number of moles in each. To do this, we'll find the mass of each, then use the substance's molar masses to convert to moles.
- Mass (1 L of water): 1,000 grams (g)
- Mass (1 L of raw sugar): Approx. 1,056.7 g
- Moles (water): 1,000 grams × 1 mol/18.015 g = 55.51 moles
- Moles (sucrose): 1,056.7 grams × 1 mol/342.2965 g = 3.08 moles (note that you can find sucrose's molar mass from its chemical formula, C12H22O11.)
- Total moles: 55.51 + 3.08 = 58.59 moles
- Mole fraction of water: 55.51/58.59 = 0.947
6. Solve. Finally, we have everything we need to solve our Raoult's Law equation. This part is surprisingly easy: just plug your values in for the variables in the simplified Raoult's Law equation at the beginning of this section (Psolution = PsolventXsolvent).
- Substituting our values, we get:
- Psolution = (23.8 mm Hg)(0.947)
- Psolution = 22.54 mm Hg. This makes sense — in mole terms, there's only a little sugar dissolved in a lot of water (even though in real-world terms the two ingredients have the same volume), so the vapor pressure will only decrease slightly.
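A minimal Python sketch of the same bookkeeping, using the quantities from this example:

```python
# Minimal sketch: Raoult's Law for the simple-syrup example,
# P_solution = P_solvent * X_solvent.
m_water, M_water = 1000.0, 18.015     # g, g/mol
m_sugar, M_sugar = 1056.7, 342.2965   # g, g/mol

n_water = m_water / M_water           # ~55.51 mol
n_sugar = m_sugar / M_sugar           # ~3.09 mol
x_water = n_water / (n_water + n_sugar)

P_water = 23.8                        # mm Hg at 25 C
print(round(P_water * x_water, 2), "mm Hg")  # ~22.55 mm Hg
```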
Method 3 of 3:
Finding Vapor Pressure in Special Cases
1. Be aware of Standard Temperature and Pressure conditions. Scientists frequently use a set of temperature and pressure values as a sort of convenient "default". These values are called Standard Temperature and Pressure (or STP for short). Vapor pressure problems frequently make reference to STP conditions, so it's handy to have these values memorized. STP values are defined as:
- Temperature: 273.15 K / 0 C / 32 F
- Pressure: 760 mm Hg / 1 atm / 101.325 kilopascals
2. Rearrange the Clausius-Clapeyron equation to find other variables. In our example in Section 1, we saw that the Clausius-Clapeyron equation is very useful for finding the vapor pressures of pure substances. However, not every question will ask you to find P1 or P2 — many will ask you to find a temperature value or even sometimes a ΔHvap value. Luckily, in these cases, getting the right answer is simply a matter of rearranging the equation so that the variable you're solving for is alone on one side of the equals sign.
- For instance, let's say that we have an unknown liquid with a vapor pressure of 25 torr at 273 K and 150 torr at 325 K and we want to find this liquid's enthalpy of vaporization (ΔHvap). We could solve like this:
- ln(P1/P2) = (ΔHvap/R)((1/T2) - (1/T1))
- (ln(P1/P2))/((1/T2) - (1/T1)) = (ΔHvap/R)
- R × (ln(P1/P2))/((1/T2) - (1/T1)) = ΔHvap. Now, we plug in our values:
- 8.314 J/(K × Mol) × (-1.79)/(-0.00059) = ΔHvap
- 8.314 J/(K × Mol) × 3,033.90 = ΔHvap = 25,223.83 J/mol
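The same rearranged solution as a minimal Python sketch; at full precision it gives roughly 25,400 J/mol, close to the hand-rounded value above.

```python
# Minimal sketch: dHvap = R * ln(P1/P2) / (1/T2 - 1/T1).
import math

R = 8.314              # J/(K*mol)
P1, T1 = 25.0, 273.0   # torr, K
P2, T2 = 150.0, 325.0  # torr, K

dH_vap = R * math.log(P1 / P2) / (1 / T2 - 1 / T1)
print(round(dH_vap), "J/mol")  # ~25,418 J/mol at full precision
```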
3. Account for the vapor pressure of the solute when it produces vapor. In our Raoult's Law example above, our solute, sugar, doesn't produce any vapor on its own at normal temperatures (think — when was the last time you saw a bowl of sugar evaporate on your counter top?) However, when your solute does evaporate, this will affect your vapor pressure. We account for this by using a modified version of the Raoult's Law equation: Psolution = Σ(PcomponentXcomponent). The sigma (Σ) symbol means that we just need to add up all of the different components' vapor pressures to find our answer.
- For example, let's say that we have a solution made from two chemicals: benzene and toluene. The total volume of the solution is 120 milliliters (mL); 60 mL of benzene and 60 mL of toluene. The temperature of the solution is 25 C, and the vapor pressures of these chemicals at 25 C are 95.1 mm Hg for benzene and 28.4 mm Hg for toluene. Given these values, find the vapor pressure of the solution. We can do this as follows, using standard density, molar mass, and vapor pressure values for our two chemicals:
- Mass (benzene): 60 mL = .060 L × 876.50 kg/1,000 L = 0.053 kg = 53 g
- Mass (toluene): .060 L × 866.90 kg/1,000 L = 0.052 kg = 52 g
- Moles (benzene): 53 g × 1 mol/78.11 g = 0.679 mol
- Moles (toluene): 52 g × 1 mol/92.14 g = 0.564 mol
- Total moles: 0.679 + 0.564 = 1.243
- Mole fraction (benzene): 0.679/1.243 = 0.546
- Mole fraction (toluene): 0.564/1.243 = 0.454
- Solve: Psolution = PbenzeneXbenzene + PtolueneXtoluene
- Psolution = (95.1 mm Hg)(0.546) + (28.4 mm Hg)(0.454)
- Psolution = 51.92 mm Hg + 12.89 mm Hg = 64.81 mm Hg
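Here is a minimal Python sketch of the same multi-component computation (densities expressed in g/L, numerically equal to kg/m^3):

```python
# Minimal sketch: multi-component Raoult's Law,
# P_solution = sum(P_i * X_i) over the volatile components.
density = {"benzene": 876.50, "toluene": 866.90}   # g/L at 25 C
molar_mass = {"benzene": 78.11, "toluene": 92.14}  # g/mol
vapor_p = {"benzene": 95.1, "toluene": 28.4}       # mm Hg at 25 C
volume = {"benzene": 0.060, "toluene": 0.060}      # L

moles = {c: volume[c] * density[c] / molar_mass[c] for c in density}
total = sum(moles.values())
P_solution = sum(vapor_p[c] * moles[c] / total for c in moles)
print(round(P_solution, 2), "mm Hg")  # ~64.7 mm Hg (text rounds to 64.81)
```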
Question: How is vapor pressure affected by temperature? Community Answer: As the temperature of a liquid or solid increases, its vapor pressure also increases. Conversely, vapor pressure decreases as the temperature decreases.
Question: How can I solve this problem? "The vapor pressure of pure water is 760 mm at 25 degrees Celsius. The vapor pressure of a solution containing 1 m of glucose will be what?" Community Answer: I suggest you study colligative properties. The vapor pressure lowering of the water is PX′, where P stands for the pressure of the pure solvent and X′ is the mole fraction of the solute. 1 L of water has 1000 g of water, so there are 1000/18 mols of water ≈ 55.6 mols. So, there are 56.6 mols of molecules for every 1 L of solution (one comes from glucose and 55.6 from water as calculated). So, the solute mole fraction is 1/56.6 ≈ 1.768 × 10^-2. So the pressure lowering is 760 mm Hg times 1.768 × 10^-2, which is ≈ 13.44 mm Hg. Finally, the vapor pressure of the solution is 760 mm Hg − 13.44 mm Hg = 746.56 mm Hg.
Question: At an ambient temperature, what would be the vapor pressure of water? Community Answer: You can use Antoine's equation to calculate the vapor pressure of any substance at any temperature. At an ambient temperature of 25 degrees Celsius, the vapor pressure of water is 23.8 torr.
Question: How many atoms are in a gram of carbon? Community Answer: The molar mass of carbon is 12.011 grams per mole, and every mole contains 6.022 × 10^23 atoms. Just use this equation to calculate: (1 g) × (1 mol)/(12.011 g) = 1/12.011 moles = 0.083257 moles. Now, take this number and convert moles to atoms: (0.083257 mole) × (6.022 × 10^23 atoms)/(1 mole) = 5.01374 × 10^22 atoms.
- To use the Clausius-Clapeyron equation above, temperature must be measured in Kelvin (denoted as K). If you have the temperature in Centigrade, then you need to convert it with the following formula: TK = TC + 273.15
- The methods above work because vapor pressure depends on the kinetic energy of the liquid's molecules, which rises with the heat supplied. The temperature of the liquid is the only environmental factor upon which the vapor pressure depends.
About This Article
To calculate vapor pressure, use the Clausius-Clapeyron equation, which includes variables for the enthalpy of vaporization, the ideal gas constant, the starting and final temperatures, and the starting and final vapor pressures. Plug all of the known variables and constants into the equation, and isolate the unknown variable, which will be the pressure. Solve the equation for the pressure by following the order of operations, and be sure to label your final answer with the correct pressure units.
- ↑ https://www.chem.purdue.edu/gchelp/liquids/vpress.html
- ↑ http://bado-shanai.net/Map%20of%20Physics/mopClausiusClapeyron.htm
- ↑ http://www.phs.d211.org/science/smithcw/AP%20Chemistry/Posted%20Tables/Enthalpy%20Vaporization%20and%20Fusion.pdf
- ↑ http://chemwiki.ucdavis.edu/Physical_Chemistry/Physical_Properties_of_Matter/Solutions_and_Mixtures/Ideal_Solutions/Changes_In_Vapor_Pressure,_Raoult%27s_Law
- ↑ http://allrecipes.com/recipe/simple-syrup/
- ↑ http://intro.chem.okstate.edu/1515sp01/database/vpwater.html
- ↑ http://www.traditionaloven.com/culinary-arts/sugars/raw-sugar/convert-liter-l-to-gram-g-raw-sugar.html
- ↑ http://whatis.techtarget.com/definition/standard-temperature-and-pressure-STP
Computer use in the classroom has become a popular method of instruction for many technology educators. This may be due to the fact that software programs have advanced beyond the early days of drill and practice instruction. With the introduction of the graphical user interface, increased processing speed, and affordability, computer use in education has finally come of age. Software designers are now able to design multidimensional educational programs that include high quality graphics, stereo sound, and real time interaction (Bilan, 1992). One area of noticeable improvement is computer simulations.
Computer simulations are software programs that either replicate or mimic real world phenomena. If implemented correctly, computer simulations can help students learn about technological events and processes that may otherwise be unattainable due to cost, feasibility, or safety. Studies have shown that computer simulators can:
- Be as effective as real life, hands-on laboratory experiences in teaching students scientific concepts (Choi and Gennaro, 1987).
- Enhance the learning achievement levels of students (Betz, 1996).
- Enhance the problem solving skills of students (Gokhale, 1996).
- Foster peer interaction (Bilan, 1992).
The educational benefits of computer simulations for learning are promising. Some researchers even suspect that computer simulations may enhance creativity (e.g., Betz, 1996; Gokhale, 1996; Harkow, 1996); however, after an extensive review of the literature, no empirical research has been found to support this claim. For this reason, the following study was conducted to compare the effect of a computer simulation activity versus a traditional hands-on activity on students' product creativity.
Product Creativity in Technology Education
Historically, technology educators have chosen the creation of products or projects as a means to teach technological concepts (Knoll, 1997). Olson (1973), in describing the important role projects play in the industrial arts/technology classroom, remarked, "The project represents human creative achievement with materials and ideas and results in an experience of self-fulfillment" (p. 21). Lewis (1999) reiterated this belief by stating, "Technology is in essence a manifestation of human creativity. Thus, an important way in which students can come to understand it would be by engaging in acts of technological creation" (p. 46). The result of technological creation is the creative product.
The creative product embodies the very essence of technology. The American Association for the Advancement of Science (Johnson, 1989) stated, "Technology is best described as a process, but is most commonly known by its products and their effects on society" (p. 1). A product can be described as a physical object, article, patent, theoretical system, an equation, or new technique (Brogden & Sprecher, 1964). A creative product is one that possesses some degree of unusualness (originality) and usefulness (Moss, 1966). When given the opportunity for self-expression, a student's project becomes nothing less than a creative product.
The creative product can be viewed as a physical representation of a person's "true" creative ability, encapsulating both the creative person and process (Besemer & O'Quin, 1993). By examining the literature related to the creative person and process, technology educators may gain a deeper understanding of the creative product itself.
The Creative Person
Inventors such as Edison and Ford have been recognized as being highly creative. Why some people reach a level of creative genius while others do not is still unknown. However, Maslow (1962), after studying several of his subjects, determined that all people are creative, not in the sense of creating great works, but rather, creative in a universal sense that attributes a portion of creative talent to every person. In trying to understand and predict a person's creative ability, two factors have often been considered: intelligence and personality.
A frequently asked question among educators is "What is the relationship between creativity and intelligence?" Research has shown that there is no direct correlation between creativity and intelligence quotient (I.Q.) (Edmunds, 1990; Hayes, 1990; Moss, 1966; Torrance, 1963). Edmunds (1990) conducted a study to determine whether there was a relationship between creativity and I.Q. Two hundred and eighty-one randomly selected students, grades eight to eleven, from three different schools in New Brunswick, Canada participated. The instruments used to collect data were the Torrance Test of Creative Thinking and the Otis-Lennon School Ability Test, used to test intellectual ability. Based on a Pearson product moment analysis, results showed that I.Q. scores did not significantly correlate with creativity scores. The findings were consistent with the literature dealing with creativity and intelligence.
On a practical level, findings similar to the one above may explain why I.Q. measures have proven to be unsuccessful in predicting creative performance. Hayes (1990) pointed out that creative performance may be better predicted by isolating and investigating personality traits.
Researchers have shown that there are certain personality traits associated with creative people (e.g., DeVore, Horton, and Lawson, 1989; Hayes, 1990; Runco, Nemiro, & Walberg, 1998; Stein, 1974). Runco, Nemiro, and Walberg (1998) conducted a survey investigating personality traits associated with the creative person. The survey was mailed to 400 individuals who had submitted papers and/or published articles related to creativity. The researchers asked participants to rate, in order of importance, various traits that they believed affected creative achievement. The survey contained 16 creative achievement clusters consisting of 141 items. One hundred and forty-three surveys were returned, reflecting a response rate of 35.8%. Results demonstrated that intrinsic motivation, problem finding, and questioning skills were considered the most important traits in predicting and identifying creative achievement. Though personality traits play an important part in understanding creative ability, an equally important area of creativity theory lies in the identification of the creative process itself.
The Creative Process
Creativity is a process (Hayes, 1990; Stein, 1974; Taylor, 1959; Torrance, 1963) that has been represented using various models. Wallas (1926) offered one of the earliest explanations of the creative process. His model consisted of four stages that are briefly described below:
- Preparation: This is the first stage in which an individual identifies then investigates a problem from many different angles.
- Incubation: At this stage the individual stops all conscious work related to the problem.
- Illumination: This stage is characterized by a sudden or immediate solution to the problem.
- Verification: This is the last stage at which time the solution is tested.
Wallas' model has served as a foundation upon which other models have been built. Some researchers have added the communication stage to the creative process (e.g. Stein, 1974; Taylor, 1959; Torrance, 1966). The communication stage is the final stage of the creative process. At this stage, the new idea confined to one's mind is transformed into a verbal or non-verbal product. The product is then shared within a social context in order that others may react to and possibly accept or reject it. A more comprehensive description of the creative process is captured within a definition offered by Torrance (1966).
Creativity is a process of becoming sensitive to problems, deficiencies, gaps in knowledge, missing elements, disharmonies, and so on; identifying the difficult; searching for solutions, making guesses or formulating hypotheses about the deficiencies, testing and re-testing these hypotheses and possibly modifying and re-testing them, and finally communicating the results. (p. 8)
Torrance's definition resembles what some have referred to as problem solving. For example, technology educators Savage and Sterry (1990), generalizing from the work of several scholars, identified six steps to the problem-solving process:
- Defining the problem: Analyzing, gathering information, and establishing limitations that will isolate and identify the need or opportunity.
- Developing alternative solutions: Using principles, ideation, and brainstorming to develop alternate ways to meet the opportunity or solve the problem.
- Selecting a solution: Selecting the most plausible solution by identifying, modifying, and/or combining ideas from the group of possible solutions.
- Implementing and evaluating the solution: Modeling, operating, and assessing the effectiveness of the selected solution.
- Redesigning the solution: Incorporating improvements into the design of the solution that address needs identified during the evaluation phase.
- Interpreting the solution: Synthesizing and communicating the characteristics and operating parameters of the solution. (p. 15)
By closely comparing Torrance's (1966) definition of creativity with that of Savage and Sterry's (1990) problem solving process, one can easily see similarities between the descriptions. Guilford (1976), a leading expert in the study of creativity, made a similar comparison between steps of the creative process offered by Wallas with those of the problem solving process proposed by the noted educational philosopher, John Dewey. In doing so, Guilford simply concluded that, "Problem-solving is creative; there is no other kind" (p. 98).
Hinton (1968) combined the creative process and problem solving process into what is now known as creative problem solving. He believed that creativity would be better understood if placed within a problem solving structure. Creative problem solving is a subset of problem solving based on the assumption that not all problems require a creative solution. He surmised that when a problem is solved with a learned response, no creativity has been expressed. However, when a simple problem is solved with an insightful response, a small measure of creativity has been expressed; when a complex problem is solved with a novel solution, genuine creativity has occurred.
Genuine creativity is the result of the creative process manifesting itself in a creative product. Understanding the creative process as well as the creative person may play an important role in realizing the true nature of the creative product. Though researchers have not reached a consensus as to what attributes make up the creative product (Besemer & Treffingger, 1981; Joram, Woodruff, Bryson, & Lindsay, 1992; Stein, 1974), identifying and evaluating the creative product has been a concern of some researchers. Notable is the work of Moss (1966) and Duenk (1966).
Evaluating the Creative Product in Industrial Arts/Technology Education
Moss (1966) and Duenk (1966) have arguably conducted the most extensive research establishing criteria for evaluating creative products within industrial arts/technology education. Moss, in examining the criterion problem, concluded that unusualness (originality) and usefulness were the defining characteristics of the creative product produced by industrial arts students. A description of his model is presented below:
- Unusualness: To be creative a product must possess some degree of unusualness [or originality]. The quality of unusualness may, theoretically, be measured in terms of probability of occurrence; the less the probability of its occurrence, the more unusual the product (Moss, 1966, p. 7).
- Usefulness: While some degree of unusualness is a necessary requirement for creative products, it is not a sufficient condition. To be creative, an industrial arts student's product must also satisfy the minimal principle requirements of the problem situation; to some degree it must "work" or be potentially "workable." Completely ineffective, irrelevant solutions to teacher-imposed or student-initiated problems are not creative (Moss, 1966, p. 7).
- Combining Unusualness and Usefulness: When a product possesses some degree of both unusualness and usefulness, it is creative. But, because these two criterion qualities are considered variables, the degree of creativity among products will also vary. The extent of each product's departure from the typical and its value as a problem solution will, in combination, determine the degree of creativity of each product. Giving the two qualities equal weight, as the unusualness and/or usefulness of a product increases so does its rated creativity; similarly, as the product approaches the conventional and/or uselessness its rated creativity decreases (Moss, 1966, p. 8).
In establishing the construct validity of his theoretical model, Moss (1966) submitted his work for review to 57 industrial arts educators, two measurement specialists, and six educational psychologists. Results of the review found the proposed model was compatible with existing theory and practice of both creativity and industrial arts. No one disagreed with the major premise of using unusualness and usefulness as defining characteristics for evaluating the creative products of industrial arts students.
To date, little additional research has been conducted to establish criteria for evaluating the creative products of industrial arts and/or technology education students. If technology is best known by its creative products, then technology educators are obligated to identify characteristics that make a product more or less creative. Furthermore, educators must find ways to objectively measure these attributes and then teach students in a manner that enhances the creativity of their products. A possible approach to enhancing product creativity is by incorporating computer simulation technology into the classroom. However, no research has been done in this area to measure the true effect of computer simulation on product creativity. For that reason, other studies addressing computer use in general and product creativity will be explored.
Studies Related to Computers and the Creative Product
A study conducted by Joram, Woodruff, Bryson, & Lindsay (1992) found that average students produced their most creative work using word processors as compared to students using pencil and paper. The researchers hypothesized that word-processing would hinder product creativity due to constant evaluation and editing of their work. To test the hypotheses, average and above average eighth grade writers were randomly assigned to one of two groups. The first group was asked to compose using word processors while the second group was asked to compose using pencil and paper. After collecting the compositions, both the word-processed and handwritten texts were typed so that they would be in the same format for the evaluators. Based on the results, the researchers concluded that word-processing enhances the creative abilities of average writers. The researchers attributed this to the prospect that word-processing may allow the average writer to generate a number of ideas, knowing that only a few of them will be usable and the rest can be easily erased. However, the researchers also found that word-processing had a negative effect on the creativity of above average writers. These mixed results suggest that the use of word-processing may not be appropriate for all students relative to creativity.
Similar to word processing, computer graphic programs may also help students improve the creativeness of their products. In a study conducted by Howe (1992), two advanced undergraduate classes in graphic design were assigned to one of two treatments. The first treatment group was instructed to use a computer graphic program to complete a design project whereas the other group was asked to use conventional graphic design equipment to design their product. Upon completion of the assignment, both groups' projects were collected and photocopied so that they would be in the same format before being evaluated. Based on the results, the researcher concluded that students using computer graphics technology surpassed the conventional method in product creativity. The researchers attributed this to the prospect that computer graphics programs may enable graphic designers to generate an abundance of ideas, then capture the most creative ones and incorporate them into their designs. However, due to a lack of random assignment, results of the study should be generalized with caution.
Like word processing and computer graphics, simulation technology is a type of computer application that allows users to freely manipulate and edit virtual objects. Thus it was surmised that computer simulation may enhance creativity. This notion led to the development of the study reported herein.
Purpose of the Study
This study compared the effect of a computer simulation activity versus a traditional hands-on activity on students' product creativity. A creative product was defined as one that possesses some measure of both unusualness (originality) and usefulness. The following hypothesis and sub-hypotheses were examined.
Major Research Hypothesis
There is no difference in product creativity between the computer simulation and traditional hands-on groups.
- There is no difference in product originality between the computer simulation and traditional hands-on groups.
- There is no difference in product usefulness between the computer simulation and traditional hands-on groups.
The subjects selected for this study were seventh-grade technology education students from three different middle schools located in Northern Virginia, a middle-to-upper income suburb outside of Washington, D.C. The school system's middle school technology education programs provide learning situations that allow the students to explore technology through problem solving activities. The three participating schools were chosen because of the teachers' willingness to participate in the study.
Kits of Classic Lego Bricks TM were used with the hands-on group. The demonstration version of Gryphon Bricks TM (Gryphon Software Corporation, 1996) was used with the simulation group. This software allows students to assemble and disassemble computer generated Lego-type bricks in a virtual environment on the screen of the computer. Subjects in the computer simulation group were each assigned to a Macintosh computer on which the Gryphon Bricks software was installed. Each subject in the hands-on treatment group was given a container of Lego bricks identical to those available virtually in the Gryphon software.
Products were evaluated based on a theoretical model proposed by Moss (1966). Moss used the combination of unusualness (or originality) and usefulness as criteria for determining product creativity. However, Moss' actual instrument was not used in this study due to low inter-rater reliability. Instead, a portion of the Creative Product Semantic Scale or CPSS (Besemer & O'Quin, 1989) was used to determine product creativity. Sub-scales "Original" and "Useful" from the CPSS were chosen to be consistent with Moss' theoretical model.
The CPSS has proven to be a reliable instrument in evaluating a variety of creative products based on objective, analytical measures of creativity (Besemer & O'Quin, 1986, 1987, 1989, 1993). This was accomplished by the use of a bipolar, semantic differential scale. In general, semantic differential scales are good for measuring mental concepts or images (Alreck, 1995). Because creativity is a mental concept, the semantic differential naturally lends itself to measuring the creative product. Furthermore, the CPSS is flexible enough to allow researchers to pick various sub-scales based on the theoretical construct being investigated, like the use of the Original and Useful subscales in this study. In support of this, Besemer and O'Quin (1986) stated, "the sub-scale structure of the total scale lends itself to administration of relevant portions of the instrument rather than the whole" (p. 125).
The CPSS was used in a study conducted by Howe (1992). His reliability analysis, based on Cronbach's alpha coefficient, yielded good to high reliability across all sub-scales of the CPSS. Important to this study were the high reliability results for sub-scales Original (.93) and Useful (.92). These high reliability coefficients are consistent with earlier studies conducted by Besemer and O'Quin (1986, 1987, 1989).
The Pilot Study
A pilot study was conducted in which a seventh-grade technology education class from a Southwest Virginia middle school was selected. The pilot study consisted of 16 subjects who were randomly assigned to either a hands-on treatment group or a simulation treatment group. As a result of the pilot study, the time allocated for the students to assemble their creative products was reduced from 30 minutes to 25 minutes, since most of them had finished within the shorter time. Precedence for limiting the time needed to complete a creative task was found in Torrance's (1966) work, in which 30 minutes was the time limit for a variety of approaches to measuring creativity.
One class from each of the three participating schools was selected for the study. Fifty-eight subjects participated, 21 females and 37 males, with an average age of 12.4 years. Subjects were given identification numbers, then randomly assigned to either the hands-on or the computer simulation treatment group. The random assignment helped ensure the equivalence of groups and controlled for extraneous variables such as students' prior experience with open-ended problem solving activities, use of Lego blocks and/or computer simulation programs, and other extraneous variables that may have confounded the results. The independent variable in this study was the instructional activity and the dependent variable was the subjects' creative product scores as determined by the combination of the original and useful sub-scales from the CPSS (Besemer & O'Quin, 1989).
Subjects in both the hands on and the simulation groups were asked to construct a "creature" that they believed would be found on a Lego planet. The "creature" scenario was chosen because it was an open-ended problem and possessed the greatest potential for imaginative student expression. The only difference in treatment between the two groups was that the hands-on group used real Lego bricks in constructing their products whereas the simulation treatment group used a computer simulator. Treatments were administered simultaneously and overall treatment time was the same for both groups. The hands-on treatment group met in its regular classroom whereas the simulation treatment group met in a computer lab. The classroom teacher at each school proctored the hands-on treatment group and the researcher proctored the simulation treatment group.
The subjects in the hands-on treatment group were given five minutes to sort their bricks by color while subjects in the simulation treatment group watched a five-minute instructional video explaining how to use the simulation software. By having the students sort their bricks for five minutes, the overall treatment time was the same for both groups, thus eliminating a variable that may otherwise have influenced the results. Then, the subjects in both groups were given the following scenario:
Pretend you are a toy designer working for the Lego Company. Your job is to create a "creature" using Lego bricks that will be used in a toy set called Lego Planet. What types of creatures might be found on a Lego planet? Use your creativity and make a creature that is original in appearance yet useful to the toy manufacturer.
One more thing, the creature you construct must be able to fit within a five-inch cubed box, that means you must stay within the limits of your green base plate and make your creature no higher than 13 bricks.
You will have 25 minutes to complete this activity. If you finish early, spend more time thinking about how you can make your creature more creative. You must remain in your seat the whole time. If there are no questions, you may begin.
When the time was up, the subjects were asked to stop working. The hands-on treatment group's products were labeled, collected, and then reproduced in the computer simulation software by the researcher. This was done so that the raters could not distinguish from which treatment group the products were created. Finally, the images of the products from both groups were printed using a color printer.
To evaluate the students' solutions, two raters were recruited: a middle school art teacher and a middle school science teacher. The teachers were chosen because of their willingness to participate in the study, and they had a combined total of 36 years of teaching experience. To help establish inter-rater reliability, a rater training session was conducted during the pilot study. The same teacher-raters used in the pilot study were used in the final study. The training session provided the teacher-raters with instructions on how to use the rating instrument and allowed them to practice rating sample products. During the session, disagreements on product ratings were discussed and rules were developed by the raters to increase consistency. The pilot study confirmed that there was good inter-rater reliability across all the scales, and thus the experimental procedures proceeded as designed. No significant difference in creativity, originality, or usefulness was found between the two treatment groups during the pilot study.
For the actual study, the teacher-raters were each given the printed images of the products from each of the 58 subjects and were instructed to independently rate them using the Original and Useful sub-scales of the CPSS (Besemer & O'Quin, 1989). Three weeks were allowed for the rating process.
Once the ratings from the two raters had been obtained, an inter-rater reliability analysis, based on Cronbach's alpha coefficient, was conducted. Analysis yielded moderate to good inter-rater reliability (.74 to .88) across all the scales. The stated hypotheses were then tested using one-way analysis of variance (ANOVA).
- No difference in product Creativity scores was found between the computer simulation group (M = 41.7, SD = 7.67) and the hands-on group (M = 42.0, SD = 5.58). Therefore the null hypothesis was not rejected, F (5, 52) = 0.54, p = 0.75.
- No difference in product Originality scores was found between the computer simulation group (M = 20.59, SD = 4.44) and the hands-on group (M = 21.10, SD = 3.10). Thus, the null hypothesis was not rejected, F (5, 52) = 1.07, p = 0.39.
- No difference in product Usefulness scores was found between the computer simulation group (M = 21.15, SD = 4.17) and the traditional hands-on group (M = 20.90, SD = 3.20). Once again, the researcher failed to reject the null hypothesis, F (5, 52) = 0.49, p = 0.78.
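For readers who wish to reproduce this style of analysis, the following is a minimal Python sketch of a one-way ANOVA; the group scores are fabricated placeholders for illustration only, since the study's raw ratings are not reported here.

```python
# Minimal sketch: one-way ANOVA comparing two treatment groups' creativity
# ratings. The scores below are placeholders, not the study's data.
from scipy.stats import f_oneway

simulation_scores = [41, 44, 38, 47, 40, 43, 39, 45]
hands_on_scores = [42, 40, 44, 41, 43, 39, 46, 42]

f_stat, p_value = f_oneway(simulation_scores, hands_on_scores)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # fail to reject H0 if p > .05
```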
Though there are only a few empirical studies to support their claims, some researchers believe that computers in general may improve student product creativity by allowing students to generate an abundance of ideas, capture the most creative ones, and incorporate them into their product (Howe, 1992; Joram, Woodruff, Bryson, & Lindsay, 1992). Similarly, some researchers speculate that the use of computer simulations may enhance product creativity as well (Betz, 1996; Gokhale, 1996; Harkow, 1996). However, based on the results of this study, the use of computer simulation to enhance product creativity was not supported. The creativity, usefulness, or originality of the resulting products appears to be the same whether students use a computer simulation of Lego blocks or whether they manipulated the actual blocks.
Because the simulation activity in this study was nearly identical to the hands-on task, one might conclude that product creativity may be more reliant upon the individual's creative cognitive ability rather than the tools or means by which the product was created. This would stand to reason based on Besemer and O'Quin's (1993) belief that the creative product is unique in that it combines both the creative person and process into a tangible object representing the "true" measure of a person's creative ability. With this in mind, when studying a computer simulation's effect on student product creativity, researchers may want to focus more attention on the creative person's traits and the cognitive process used to create the product rather than focusing on the tool or means by which the product was created. This approach to understanding student product creativity may lend itself more to qualitative rather than quantitative research.
If quantitative research is to continue in this area of study, researchers may wish to consider using a different theoretical model and instrument for measuring the creative product. For example, if replicating this experiment, rather than using only the two sub-scales of the Creative Product Semantic Scale (Besemer & O'Quin, 1989), the complete instrument might be used, yielding additional dimensions of creativity. Additional research regarding the various types of simulation programs is needed, along with the different effects they might have on student creativity in designing products. The use of computer simulations in technology education programs appears to be increasing with little research to support their effectiveness or viable use.
Alreck, T.L., & Settle, B.R. (1995). The survey research handbook (2nd Ed.). Chicago: Irwin Inc.
Besemer, S.P., & O'Quin, K. (1993). Assessing creative products: Progress and potentials. In S.G. Isaksen (Ed.), Nurturing and developing creativity: The emergence of a discipline (pp. 331-349). Norwood, New Jersey: Ablex Publishing Corp.
Besemer, S.P., & O'Quin, K. (1989). The development, reliability and validity of the revised creative product semantic scale. Creativity Research Journal, 2, 268-279.
Besemer, S.P., & O'Quin, K. (1987). Creative product analysis: Testing a model by developing a judging instrument. In S.G. Isaksen, Frontiers of creativity research: Beyond the basics. (pp. 341-357). Buffalo, NY: Bearly Ltd.
Besemer, S.P., & O'Quin, K. (1986). Analysis of creative products: Refinement and test of a judging instrument. Journal of Creative Behavior, 20 (2), 115-126.
- Besemer, S.P., & Treffinger, D. (1981). Analysis of creative products: Review and synthesis. Journal of Creative Behavior, 15, 158-178.
Betz, J.A. (1996). Computer games: Increase learning in an interactive multidisciplinary environment. Journal of Technology Systems, 24 (2), 195-205.
Bilan, B. (1992). Computer simulations: An Integrated tool. Paper presented at the SAGE/ 6th Canadian Symposium, The University of Calgary.
Brogden, H., & Sprecher, T. (1964). Criteria of creativity, In Taylor, C.W., Creativity, progress and potential. New York: McGraw Hill.
- Choi, B., & Gennaro, E. (1987). The effectiveness of using computer simulated experiments on junior high students' understanding of the volume displacement concept. Journal of Research in Science Teaching, 24 (6), 539-552.
DeVore, P., Horton, A., & Lawson, A. (1989). Creativity, design and technology. Worcester, Massachusetts: Davis Publications, Inc.
Duenk, L.G. (1966). A study of the concurrent validity of the Minnesota Test of Creative Thinking, Abbr. Form VII, for eighth grade industrial arts student. Minneapolis: Minnesota University. (Report No. BR-5-0113).
- Edmunds, A.L. (1990). Relationships among adolescent creativity, cognitive development, intelligence, and age. Canadian Journal of Special Education, 6 (1), 61-71.
Gryphon Software Corporation (1996). Gryphon Bricks Demo (Version 1.0) [Computer Software]. Glendale, CA: Knowledge Adventure. [On-line] Available: http://www.kidsdomain.com/down/mac/bricksdemo.html
Guilford, J. (1976). Intellectual factors in productive thinking. In R. Mooney & T. Rayik (Eds.), Explorations in creativity. New York: Harper & Row.
Harkow, R.M. (1996). Increasing creative thinking skills in second and third grade gifted students using imagery, computers, and creative problem solving. Unpublished master's thesis, NOVA Southeastern University.
- Hayes, J.R. (1990). Cognitive processes in creativity (Paper No. 18). University of California, Berkeley.
Hinton, B.L. (1968, Spring). A model for the study of creative problem solving. Journal of Creative Behavior, 2(2), 133-142.
Howe, R. (1992). Uncovering the creative dimensions of computer-graphic design products. Creativity Research Journal, 5 (3), 233-243.
Johnson, J.R. (1989). Project 2061: Technology (Association for the Advancement of Science Publication 89-06S). Washington, DC.: American Association for the Advancement of Science.
Joram, E., Woodruff, E., Bryson, M., & Lindsay, P. (1992). The effects of revising with a word processor on writing composition. Research in the Teaching of English, 26 (2), 167-192.
Maslow, A. (1962). Toward a psychology of being. Princeton, NJ: Van Nostrand.
Moss, J. (1966). Measuring creative abilities in junior high school industrial arts. Washington, DC: American Council on Industrial Arts Teacher Education.
Olson, D.W. (1973). Tecnol-o-gee. Raleigh: North Carolina University School of Education, Office of Publications.
- Runco, R.A., Nemiro, J., & Walberg, H.J. (1998). Personal explicit theories of creativity. Journal of Creative Behavior, 32 (1), 1-17.
Savage, E., & Sterry, L. (1990). A conceptual framework for technology education. Reston, VA: International Technology Education Association.
Stein, M. (1974). Stimulating creativity: Vol. 1. Individual procedures. New York: Academic Press.
Taylor, I.A. (1959). The nature of the creative process. In P. Smith (Ed.), Creativity: An examination of the creative process (pp. 51-82). New York: Hastings House Publishers.
Torrance, E. P. (1966). Torrance test on creative thinking: Norms-technical manual (Research Edition). Lexington, Mass: Personal Press.
Torrance, E.P. (1963). Creativity. In F. W. Hubbard (Ed.), What research says to the teacher (Number 28). Washington, DC: Department of Classroom Teachers American Educational Research Association of the National Education Association.
Wallas, G. (1926). The art of thought. New York: Harcourt, Bruce and Company.
Kurt Y. Michael is a Technology Education Teacher at Central Shenandoah Valley Regional Governor's School, Fishersville, Virginia. |
In Java, you can represent numbers as either primitive values or as complex objects. I am going to start by showing you how you represent them as primitives and then show, how to convert them into complex objects when needed. A primitive data type is a simple value. It represents only a single value and not many. That's one of the big differences between primitive values and complex objects. A primitive always points to a single value. Primitive data types are stored in the fastest available memory, whereas complex objects are stored in heap memory.
When you are running an application, you won't see significant differences, but they do exist. Primitives can be used to represent numeric, logical, or single character values. They can't be used to represent strings, dates or other things that have to be represented as complete objects. Most of your numeric values will be declared as primitives. You will only need to use complex objects where you need to do conversions or you need to guarantee precision when doing certain kinds of calculations. When you declare a primitive data type, you spell the data type in all lowercase.
So for example int, i-n-t, or byte would be spelled all lowercase. There are also classes with these names, but they will have an uppercase initial character. Here are the numeric primitive data types. The byte data type takes 8 bits and can represent numbers starting at -128 and going to a maximum of 127. The short integer takes 16 bits and has a much greater range, and the most commonly used integer data type, int, takes 32 bits of memory and has a minimum and maximum of over 2 billion in each direction.
There is also a long integer. It takes twice the memory that an int does and should only be used when you are representing very large numbers. Then there are two primitives that can represent floating-point values. The float takes 32 bits and the double takes 64 bits. You'll see developers represent currency values frequently with doubles, but as I'll explain later, typically you will want to use a special class called BigDecimal for that sort of purpose. When you use primitives, you can set their values directly using literal representations of numbers.
For the first three primitive data types, byte, short, and int, the syntax of the number is pretty conventional. It's just a numeric value without any quotes. You can't include string values such as currency symbols or commas. When you get into the larger numbers, the long, the float, and the double, the syntax is a little different. For long values, a literal numeric should be followed with the letter l, for long. You can use either an upper or lowercase l, but because the lowercase usually looks a lot like a 1, typically developers will use the uppercase L. The float value takes a lowercase f and the double value, a lowercase d. Again, you could use uppercase, but most developers use lowercase for these.
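To make that concrete, here is a minimal sketch of the literal syntax described above (the variable names are just for illustration):

public class NumericLiterals {
    public static void main(String[] args) {
        byte smallValue = 100;          // plain literal; fits in a byte
        int count = 2000000;            // int is the default type for integer literals
        long population = 7800000000L;  // L suffix marks a long literal (value too big for an int)
        float price = 19.99f;           // f suffix marks a float literal
        double distance = 384400.5d;    // d suffix marks a double literal

        System.out.println(population);
        System.out.println(price);
        System.out.println(distance);
    }
}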
If you don't put in those additional alphabetical characters, usually things will still work out okay, but what's happening internally may surprise you. If you don't put that additional alphabetical character into your code, the value will be converted based on a set of rules that Java follows; for example, the 100 would initially be treated as an int and then widened (up cast) to a long. So you are actually doing an extra conversion, creating two values where only one is needed. So it's recommended that if you are explicitly using a literal to represent a long, a float or a double, then use these characters.
In addition to the primitive data types for numbers, there are also a set of helper classes that are part of the Java Class Library. Each of these helper classes includes tools for converting and outputting numeric values. Here is a listing of the data types on the left and their matching helper classes on the right. For the most part the names match, byte is byte, short is short and so on and the only difference is the initial character. The primitive data type is all lowercase. The class name has an initial uppercase character.
The only difference is in the integer where the primitive Data Type is int and the name of the Helper Class is Integer with an uppercase I. The helper classes have all sorts of great tools in them and they are always available to your Java code. Here is an example. The Double class provides methods and other tools for converting and managing double values. Let's say you started off with a literal doubleValue of 156.5d and you assigned that to a primitive variable of doubleValue.
Well, in order to convert it, the first step would be to create an instance of the Double class. This would be the syntax. The data type has Double with an uppercase D, which means the helper class and not the primitive. I am constructing an instance of the Double class and passing in the primitive value. Now I can convert that value using a set of methods called byteValue, intValue, floatValue and toString. These are instance methods or class methods of the Double class.
We'll learn more about what that means in later videos, but for now just take it to mean that you can call these methods whenever you create an instance of the class. Because a double value can have more precision than, say, a byte or an int, calling these methods will result in truncating the value. So the result of myIntValue would be simply 156. The result of myByteValue, though, because 156 exceeds the available range of a byte, would wrap around and you would actually end up with a negative number.
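Here is a minimal sketch of that conversion, following the steps just described (the variable names are illustrative and the printed values assume the 156.5d example above):

public class DoubleHelperDemo {
    public static void main(String[] args) {
        double doubleValue = 156.5d;              // primitive double
        Double helper = new Double(doubleValue);  // instance of the helper class

        System.out.println(helper.intValue());    // 156  -- fractional part truncated
        System.out.println(helper.byteValue());   // -100 -- 156 exceeds the byte range and wraps around
        System.out.println(helper.toString());    // 156.5
    }
}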
I'll show you examples of these in some of the code samples. When you declare a number using a primitive data type, if you don't set the value initially, it defaults to zero. So for example, here I am declaring a variable called myInt and I am setting it using the int data type. I am not including the equals assignment operator or an initial value, and so the result is that myInt is initially assigned as 0. You'll see that this is true of all the primitive numerics.
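A small sketch of that default behaviour — note that the zero default applies to fields (and array elements); a local variable declared inside a method has no default and must be assigned before it is read:

public class Defaults {
    static int myInt;        // class field: no initial value given
    static double myDouble;  // class field: no initial value given

    public static void main(String[] args) {
        System.out.println(myInt);     // 0
        System.out.println(myDouble);  // 0.0
        // A local declaration such as "int x;" gets no default;
        // the compiler requires it to be assigned before it is used.
    }
}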
Now when you get into complex objects instances of classes, the rules are different, but we'll talk about that later. There is one other major issue to know about when you are working with numbers. If your application requires numeric precision, when you are doing calculations or rounding, you shouldn't use primitives for this purpose. The problem is that primitive values are stored in memory in a way that can't guarantee that precision. Instead, you should use a class called BigDecimal especially when you are working with currency values and you need to do certain kinds of math, the BigDecimal class can help you guarantee that precision.
Here is an example of what can go wrong. Let's say that you are starting off with a primitive value, in this case a literal of 1115.37. I am going to create an instance of the BigDecimal class called payment and I am constructing it directly from that literal value and then I am going to output that value as a string. You might think you would see the value 1115.37 as a string, but in fact you are going to see something like this. The exact value is going to vary depending on your system, your processor and other variables, but internally the number is stored in a very surprising way.
So when you are working with currency values, it's strongly recommended that you construct your BigDecimal values based on strings. This guarantees that you are talking about exactly the value you think you are talking about. Let's say for example that you start off with a double value once again of 1115.37. Before you create your instance of BigDecimal, you should first get a string representation of the number. This guarantees that any additional decimal values are truncated and you are only working with the value you want.
So the second line of code uses the Double class's toString method and converts the value to a String called ds. Now I construct the instance of BigDecimal based on the string and then output the value, and now I get what I expect. We'll talk more about the BigDecimal class in a later video in the series. So that's a look at representing numeric values in Java. For most purposes, the primitive data types do a great job, but in many cases you will want to use the numeric helper classes that are part of the class library, and specifically when you are working with currency values and doing calculations and rounding, you should look at the BigDecimal class.
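Putting the two approaches side by side, a minimal sketch (the long digit string in the comment is only an example of the kind of output you may see when constructing directly from the double):

import java.math.BigDecimal;

public class CurrencyDemo {
    public static void main(String[] args) {
        double amount = 1115.37d;

        // Constructed directly from the double: exposes the binary representation
        BigDecimal fromDouble = new BigDecimal(amount);
        System.out.println(fromDouble); // e.g. 1115.3699999999998908606357872486114501953125

        // Constructed from a string: exactly the value we intend
        String ds = Double.toString(amount);
        BigDecimal payment = new BigDecimal(ds);
        System.out.println(payment);    // 1115.37
    }
}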
The term stripline (English: microstrip) refers to a certain class of electrical waveguides. What all striplines have in common is that they consist of one or more thin, conductive strips that are applied to a dielectric. Stripline structures can, for example, consist of line strips arranged in one plane. They are often isolated in or above a metallic surface.
The field of application is high-frequency technology and there, above all, the area of microwaves - striplines can be used to create cost-effective and reproducible defined impedances in circuits for the transmission, coupling and filtering of high signal frequencies.
The feed and the radiator elements of antennas can also be designed as strip conductors.
Synonyms and differentiation from other terms
- The English word microstrip describes the arrangement on the surface of an insulating plate (circuit board or ceramic).
- The English stripline is used for the symmetrical stripline . An offset stripline is such a stripline with different distances to the ground planes .
- According to this terminology, the coplanar line ( coplanar waveguide ) is a microstrip , “framed” in a groundplane , ie a ground plane in the same plane.
The term microstrip line is by far the most common and often used for all types of construction. It is therefore advisable to separate the terms based on English usage.
The description of these designs might suggest that printed circuits (circuit boards) are in general striplines. The structure of the strip conductors is indeed basically the same, but they are not necessarily dimensioned and operated as waveguides. However, parameters that are decisive for striplines (impedance, loss factor, wave propagation speed, dispersion, radiation) must be taken into account for circuit traces carrying fast switching processes (<1 ns). The signal frequency only plays a subordinate role here.
Properties as a waveguide
Striplines are dimensioned in such a way that, as a rule, only quasi-TEM waves can propagate. With a few simplifications, these can be viewed almost like TEM waves : both the electrical and the magnetic fields run almost exclusively perpendicular to the direction of propagation, as is also the case in coaxial lines or two-wire lines . A condition for this is that the transverse dimensions of the lines are small compared to the wavelength. Striplines are only used for short distances within assemblies.
The advantage of striplines is that they can be manufactured inexpensively, reproducibly and economically. This is particularly important for complex circuits in which there are also other components made up of striplines.
Another advantage is the low field propagation outside the planar structure, which is why there is only a small amount of waves emitted into space. Therefore, high-frequency circuits manufactured using stripline technology can often be operated without a housing that is closed on all sides or without individual, separate chambers.
The wave impedance of a stripline is determined by its width and by the thickness and dielectric constant of the insulator substrate. Since the last two quantities are usually constant, the calculation and simulation of stripline circuits is made easier. A calculation tool can be found in the web links below.
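The formulas from the original article are not reproduced here, but as a rough illustration of how the impedance follows from the geometry, the sketch below implements the widely used Hammerstad closed-form approximation for a microstrip line (an approximation chosen for illustration; it is not the Wheeler formula cited later, and the function name and example values are ours):

#include <cmath>
#include <iostream>

// Approximate characteristic impedance (in ohms) of a microstrip line using the
// Hammerstad closed-form approximation (quasi-TEM assumption, thin metallization).
// w = strip width, h = substrate thickness (same unit as w), er = relative permittivity.
double microstripImpedance(double w, double h, double er) {
    const double pi = 3.14159265358979323846;
    double u = w / h;
    // Effective permittivity: part of the field runs in the air above the strip.
    double eeff = (er + 1.0) / 2.0 + (er - 1.0) / 2.0 / std::sqrt(1.0 + 12.0 / u);
    if (u <= 1.0) {
        eeff += (er - 1.0) / 2.0 * 0.04 * (1.0 - u) * (1.0 - u);
        return 60.0 / std::sqrt(eeff) * std::log(8.0 / u + u / 4.0);
    }
    return 120.0 * pi / (std::sqrt(eeff) * (u + 1.393 + 0.667 * std::log(u + 1.444)));
}

int main() {
    // Example: a 1.8 mm wide trace on a 1.0 mm thick FR4-like substrate (er = 4.5)
    std::cout << microstripImpedance(1.8, 1.0, 4.5) << " ohms\n"; // roughly 51 ohms
    return 0;
}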
In addition to processing high-frequency signals, stripline structures are also used in or as antennas. They often form both the feeding and matching components and the radiating elements themselves on a common substrate. Examples are the patch antenna, the spiral antenna, the panel antenna and also dipole antennas. All of these antennas can be manufactured entirely from planar stripline structures. Helical antennas are also often made of strip conductors; here, however, they are wrapped around a cylinder or cone. In all of these cases, the phase positions and impedances of the line-bound waves are adjusted by varying the length and width of the striplines so that their fields are superimposed in such a way that (often directed) radiation takes place as radio waves.
There is a large number of designs which, under certain circumstances, can also be used in combination. This includes:
- Microstrip line
- Symmetrical stripline
- Coplanar line, symmetrical or asymmetrical
- Double ribbon cable
- Unshielded slot lines (component e.g. of the Vivaldi antenna )
- Shielded slot lines, i.e. slot lines built into a waveguide (also called fin lines):
- unilateral fin line
- bilateral fin line
- antipodal fin line
Microstrip lines are striplines that consist of a conductive strip separated from a conductive surface by a dielectric substrate. They are mostly used for the transport and processing of electromagnetic waves in the range between a few hundred megahertz and around 20 gigahertz.
A microstrip line consists of a non-conductive substrate ( printed circuit board ), which is completely metallized on the underside ( ground plane ). A conductor in the form of a strip (conductor track), i.e. with a defined cross-sectional area, is arranged on the top. This strip is usually made by machining the top metallization by etching or milling.
Various dielectrics are used as the substrate. Glass fiber reinforced PTFE ( RT / Duroid ) is used very frequently . For higher demands, aluminum oxide is used alongside other ceramic materials. The FR4 (glass fiber reinforced epoxy resin ), which is common in normal circuit board production, is generally unsuitable for frequencies above a few GHz because its loss angle is too large.
On the one hand, the signal spreads in the space between the strip conductor and the ground plane. On the other hand, the field lines also enter the free space above the stripline, which is usually filled with air. One must therefore speak of an inhomogeneous dielectric.
If the stripline is interrupted, the signal can jump over the gap under certain conditions and then continue to spread.
For microstrip lines on printed circuit boards, an exact solution for the line impedance (characteristic impedance) can be given for certain cases; the most general form was derived by H. Wheeler in 1965.
Here w_eff is the effective width of the line, including a correction factor for the thickness of the metallization.
Since part of the electrical field does not run in the dielectric of the circuit board, there is also a dependency on the circuit board geometry. This is expressed by the effective permittivity number used for determining the physical length of the line. The quantities involved are:
- the free-space wave impedance (approximately 377 Ω),
- the effective permittivity number ,
- the permittivity of the substrate,
- the width of the microstrip line,
- the thickness of the substrate,
- the thickness of the metallization and
- the Euler number (not the elementary charge)
The above equation for the line impedance yields asymptotically exact values under the following conditions:
- w ≫ h , for any ε r
- w ≪ h and ε r = 1
- w ≪ h and ε r ≫ 1
For all other cases the equal sign in the above equation has to be replaced by ≈ and the error of the approximation is usually less than 1% and guaranteed less than 2%.
In addition, there are a number of other, mostly simpler approximation equations with restricted areas of validity for the line impedance of microstrip lines in the literature.
In contrast to the microstrip line, the conductor strip in the symmetrical stripline (English: stripline) is covered above and below by dielectrics of equal thickness, onto which two parallel conductive layers (ground) are applied. Since the electric field lines cannot enter the free space above and below due to the complete covering with conductive material, a homogeneous dielectric is present, which reduces the dispersion. The wave propagation speed is lower than with the other designs.
Symmetrical striplines are more difficult to manufacture because of the higher number of layers, among other things because only a limited choice of good dielectrics is available for multilayer circuit boards.
If the distance to the two ground planes is different, one speaks of an offset stripline .
Strip conductors that are located in the same plane as a metallized surface connected to ground, separated from it only by a gap G, are referred to as coplanar lines. All conductive layers lie on one side of a continuous dielectric of thickness H. There is air or a ground plane under the substrate. At most there is a thin protective coating over the circuit, so electric field lines enter the air, which is why the dielectric medium is inhomogeneous.
The upper, interrupted by lines, and the lower, closed ground plane are connected by vias. In this way, circuits can be produced in which only minor interactions occur between the conductor structures and with the environment.
Components in stripline circuits
Simple components such as capacitors and inductors can be created directly using specially dimensioned strips. Long, thin conductors have an inductive effect, whereas broad, short ones have a capacitive effect. In addition to these classic components, other components typical of high-frequency technology can also be implemented directly with strips.
- Matched termination / dummy load (reflection-free termination)
- Impedance matching, inductive or capacitive coupling
- Reflector, series resonant circuit, parallel resonant circuit
More complex functional units can be produced from these basic elements:
- Directional coupler
- Power divider
- Filter (band pass, band stop, high pass, low pass)
- Transformer for coupling signals in and out, electrical isolation, impedance matching or balanced-to-unbalanced conversion
Discrete components such as those used on normal circuit boards can also be soldered onto a microstrip circuit, taking into account their dimensions and mutual interference. SMD components are particularly suitable. In some cases, SMD packages are even designed specifically for this purpose. This is particularly the case with active elements such as transistors or diodes.
- Werner Bächtold: Linear elements of high frequency technology. 2nd Edition. vdf Hochschulverlag AG at the ETH Zurich, Zurich 1998, ISBN 3-7281-2611-X .
- H. Meinke, FW Gundlach: Pocket book of high frequency technology. Volume 3: Systems. 5th edition. Springer Verlag, Berlin / Heidelberg 1992, ISBN 3-540-54716-9 .
- Holger Heuermann: high frequency technology. Linear components of highly integrated high frequency circuits. 1st edition. Springer Fachmedien, Wiesbaden 2005, ISBN 3-528-03980-9 .
- Hermann Weidenfeller: Basics of communication technology . Springer Fachmedien, Wiesbaden 2002, ISBN 3-519-06265-8 .
- Otto Zinke , Heinrich Brunswig: Textbook of high frequency technology . Volume 1: High Frequency Filters - Lines - Antennas. 4th edition. Springer Verlag, Berlin / Heidelberg 1990, ISBN 3-540-51421-X .
- ↑ H. Johnson, M. Graham: Stripline vs. Microstrip Delay. (on-line)
- ↑ Microstrip Calculator. Calculation tools, etc. a. for the impedance of striplines
- ↑ Fin line types (Memento from November 30, 2012 in the Internet Archive)
- ^ HA Wheeler: Transmission-line properties of parallel strips separated by a dielectric sheet. In: IEEE Tran. Microwave Theory Tech. Issue MTT-13, March 1965, pp. 172-185.
- High Frequency Technology - Definition of Terms and Fundamentals (accessed on November 16, 2017)
- Basics of microwave conduction (accessed November 16, 2017)
- Characteristics and dimensions of striplines (accessed November 16, 2017)
- Determination of the frequency-dependent characteristic impedance of microstrip lines (accessed on November 16, 2017)
- Microwaves (accessed November 16, 2017)
- Saturn PCB Design Toolkit |
This article gets you started with HTML tables, covering the very basics such as rows and cells, headings, making cells span multiple columns and rows, and how to group together all the cells in a column for styling purposes.
|Prerequisites:||The basics of HTML (see Introduction to HTML).|
|Objective:||To gain basic familiarity with HTML tables.|
What is a table?
A table is a structured set of data made up of rows and columns (tabular data). A table allows you to quickly and easily look up values that indicate some kind of connection between different types of data, for example a person and their age, a day of the week, or the timetable for a local swimming pool.
Tables are very commonly used in human society, and have been for a long time, as evidenced by this US Census document from 1800:
It is therefore no wonder that the creators of HTML provided a means by which to structure and present tabular data on the web.
How does a table work?
The point of a table is that it is rigid. Information is easily interpreted by making visual associations between row and column headers. Look at the table below for example and find a Jovian gas giant with 62 moons. You can find the answer by associating the relevant row and column headers.
|Name||Mass (1024kg)||Diameter (km)||Density (kg/m3)||Gravity (m/s2)||Length of day (hours)||Distance from Sun (106km)||Mean temperature (°C)||Number of moons||Notes|
|Terrestrial planets||Mercury||0.330||4,879||5427||3.7||4222.6||57.9||167||0||Closest to the Sun|
|Mars||0.642||6,792||3933||3.7||24.7||227.9||-65||2||The red planet|
|Jovian planets||Gas giants||Jupiter||1898||142,984||1326||23.1||9.9||778.6||-110||67||The largest planet|
|Dwarf planets||Pluto||0.0146||2,370||2095||0.7||153.3||5906.4||-225||5||Declassified as a planet in 2006, but this remains controversial.|
When done correctly, even blind people can interpret tabular data in an HTML table — a successful HTML table should enhance the experience of sighted and visually impaired users alike.
You can also have a look at the live example on GitHub! One thing you'll notice is that the table does look a bit more readable there — this is because the table you see above on this page has minimal styling, whereas the GitHub version has more significant CSS applied.
Be under no illusion; for tables to be effective on the web, you need to provide some styling information with CSS, as well as good solid structure with HTML. In this module we are focusing on the HTML part; to find out about the CSS part you should visit our Styling tables article after you've finished here.
We won't focus on CSS in this module, but we have provided a minimal CSS stylesheet for you to use that will make your tables more readable than the default you get without any styling. You can find the stylesheet here, and you can also find an HTML template that applies the stylesheet — these together will give you a good starting point for experimenting with HTML tables.
When should you NOT use HTML tables?
HTML tables should be used for tabular data — this is what they are designed for. Unfortunately, a lot of people used to use HTML tables to lay out web pages, e.g. one row to contain the header, one row to contain the content columns, one row to contain the footer, etc. You can find more details and an example at Page Layouts in our Accessibility Learning Module. This was commonly used because CSS support across browsers used to be terrible; table layouts are much less common nowadays, but you might still see them in some corners of the web.
In short, using tables for layout rather than CSS layout techniques is a bad idea. The main reasons are as follows:
- Layout tables reduce accessibility for visually impaired users: Screenreaders, used by blind people, interpret the tags that exist in an HTML page and read out the contents to the user. Because tables are not the right tool for layout, and the markup is more complex than with CSS layout techniques, the screenreaders' output will be confusing to their users.
- Tables produce tag soup: As mentioned above, table layouts generally involve more complex markup structures than proper layout techniques. This can result in the code being harder to write, maintain, and debug.
- Tables are not automatically responsive: When you use proper layout containers (such as <div>), their width defaults to 100% of their parent element. Tables on the other hand are sized according to their content by default, so extra measures are needed to get table layout styling to effectively work across a variety of devices.
Active learning: Creating your first table
We've talked table theory enough, so, let's dive into a practical example and build up a simple table.
- First of all, make a local copy of blank-template.html and minimal-table.css in a new directory on your local machine.
- The content of every table is enclosed by these two tags: <table></table>. Add these inside the body of your HTML.
- The smallest container inside a table is a table cell, which is created by a <td> element ('td' stands for 'table data'). Add the following inside your table tags:
<td>Hi, I'm your first cell.</td>
- If we want a row of four cells, we need to copy these tags three times. Update the contents of your table to look like so:
<td>Hi, I'm your first cell.</td> <td>I'm your second cell.</td> <td>I'm your third cell.</td> <td>I'm your fourth cell.</td>
As you will see, the cells are not placed underneath each other; rather, they are automatically aligned with each other on the same row. Each <td> element creates a single cell and together they make up the first row. Every cell we add makes the row grow longer.
To stop this row from growing and start placing subsequent cells on a second row, we need to use the <tr> element ('tr' stands for 'table row'). Let's investigate this now.
- Place the four cells you've already created inside <tr> tags, like so:
<tr> <td>Hi, I'm your first cell.</td> <td>I'm your second cell.</td> <td>I'm your third cell.</td> <td>I'm your fourth cell.</td> </tr>
- Now you've made one row, have a go at making one or two more — each row needs to be wrapped in an additional <tr> element, with each cell contained in a <td> element.
This should result in a table that looks something like the following:
|Hi, I'm your first cell.||I'm your second cell.||I'm your third cell.||I'm your fourth cell.|
|Second row, first cell.||Cell 2.||Cell 3.||Cell 4.|
Adding headers with <th> elements
Now let's turn our attention to table headers — special cells that go at the start of a row or column and define the type of data that row or column contains (as an example, see the "Person" and "Age" cells in the first example shown in this article). To illustrate why they are useful, have a look at the following table example. First the source code:
<table>
  <tr>
    <td> </td>
    <td>Knocky</td>
    <td>Flor</td>
    <td>Ella</td>
    <td>Juan</td>
  </tr>
  <tr>
    <td>Breed</td>
    <td>Jack Russell</td>
    <td>Poodle</td>
    <td>Streetdog</td>
    <td>Cocker Spaniel</td>
  </tr>
  <tr>
    <td>Age</td>
    <td>16</td>
    <td>9</td>
    <td>10</td>
    <td>5</td>
  </tr>
  <tr>
    <td>Owner</td>
    <td>Mother-in-law</td>
    <td>Me</td>
    <td>Me</td>
    <td>Sister-in-law</td>
  </tr>
  <tr>
    <td>Eating Habits</td>
    <td>Eats everyone's leftovers</td>
    <td>Nibbles at food</td>
    <td>Hearty eater</td>
    <td>Will eat till he explodes</td>
  </tr>
</table>
Now the actual rendered table:
|Breed||Jack Russell||Poodle||Streetdog||Cocker Spaniel|
|Eating Habits||Eats everyone's leftovers||Nibbles at food||Hearty eater||Will eat till he explodes|
The problem here is that, while you can kind of make out what's going on, it is not as easy to cross reference data as it could be. If the column and row headings stood out in some way, it would be much better.
Active learning: table headers
Let's have a go at improving this table.
- First, make a local copy of our dogs-table.html and minimal-table.css files in a new directory on your local machine. The HTML contains the same Dogs example as you saw above.
- To recognize the table headers as headers, both visually and semantically, you can use the <th> element ('th' stands for 'table header'). This works in exactly the same way as a <td>, except that it denotes a header, not a normal cell. Go into your HTML, and change all the <td> elements surrounding the table headers into <th> elements.
- Save your HTML and load it in a browser, and you should see that the headers now look like headers.
Why are headers useful?
We have already partially answered this question — it is easier to find the data you are looking for when the headers clearly stand out, and the design just generally looks better.
Note: Table headings come with some default styling — they are bold and centered even if you don't add your own styling to the table, to help them stand out.
Table headers also have an added benefit — along with the scope attribute (which we'll learn about in the next article), they allow you to make tables more accessible by associating each header with all the data in the same row or column. Screenreaders are then able to read out a whole row or column of data at once, which is pretty useful.
Allowing cells to span multiple rows and columns
Sometimes we want cells to span multiple rows or columns. Take the following simple example, which shows the names of common animals. In some cases, we want to show the names of the males and females next to the animal name. Sometimes we don't, and in such cases we just want the animal name to span the whole table.
The initial markup looks like this:
<table>
  <tr>
    <th>Animals</th>
  </tr>
  <tr>
    <th>Hippopotamus</th>
  </tr>
  <tr>
    <th>Horse</th>
    <td>Mare</td>
  </tr>
  <tr>
    <td>Stallion</td>
  </tr>
  <tr>
    <th>Crocodile</th>
  </tr>
  <tr>
    <th>Chicken</th>
    <td>Hen</td>
  </tr>
  <tr>
    <td>Rooster</td>
  </tr>
</table>
But the output doesn't give us quite what we want:
We need a way to get "Animals", "Hippopotamus", and "Crocodile" to span across two columns, and "Horse" and "Chicken" to span downwards over two rows. Fortunately, table headers and cells have the colspan and rowspan attributes, which allow us to do just those things. Both accept a unitless number value, which equals the number of rows or columns you want spanned. For example, colspan="2" makes a cell span two columns.
Let's use colspan and rowspan to improve this table.
- First, make a local copy of our animals-table.html and minimal-table.css files in a new directory on your local machine. The HTML contains the same animals example as you saw above.
- Next, use colspan to make "Animals", "Hippopotamus", and "Crocodile" span across two columns.
- Finally, use rowspan to make "Horse" and "Chicken" span across two rows.
- Save and open your code in a browser to see the improvement. A sketch of what the finished markup might look like is shown below.
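If you get stuck, here is a minimal sketch of what the finished markup might look like once the spans described above are applied (only the colspan and rowspan attributes differ from the starting markup):

<table>
  <tr>
    <th colspan="2">Animals</th>
  </tr>
  <tr>
    <th colspan="2">Hippopotamus</th>
  </tr>
  <tr>
    <th rowspan="2">Horse</th>
    <td>Mare</td>
  </tr>
  <tr>
    <td>Stallion</td>
  </tr>
  <tr>
    <th colspan="2">Crocodile</th>
  </tr>
  <tr>
    <th rowspan="2">Chicken</th>
    <td>Hen</td>
  </tr>
  <tr>
    <td>Rooster</td>
  </tr>
</table>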
Providing common styling to columns
There is one last feature we'll tell you about in this article before we move on. HTML has a method of defining styling information for an entire column of data all in one place — the <col> and <colgroup> elements. These exist because it can be a bit annoying and inefficient having to specify styling on columns — you generally have to specify your styling information on every <td> or <th> in the column, or use a complex selector such as :nth-child.
Take the following simple example:
<table>
  <tr>
    <th>Data 1</th>
    <th style="background-color: yellow">Data 2</th>
  </tr>
  <tr>
    <td>Calcutta</td>
    <td style="background-color: yellow">Orange</td>
  </tr>
  <tr>
    <td>Robots</td>
    <td style="background-color: yellow">Jazz</td>
  </tr>
</table>
Which gives us the following result:
|Data 1||Data 2|
This isn't ideal, as we have to repeat the styling information across all three cells in the column (we'd probably have a class set on all three in a real project and specify the styling in a separate stylesheet). Instead of doing this, we can specify the information once, on a <col> element. <col> elements are specified inside a <colgroup> container just below the opening <table> tag. We could create the same effect as we see above by specifying our table as follows:
<table>
  <colgroup>
    <col>
    <col style="background-color: yellow">
  </colgroup>
  <tr>
    <th>Data 1</th>
    <th>Data 2</th>
  </tr>
  <tr>
    <td>Calcutta</td>
    <td>Orange</td>
  </tr>
  <tr>
    <td>Robots</td>
    <td>Jazz</td>
  </tr>
</table>
Effectively we are defining two "style columns", one specifying styling information for each column. We are not styling the first column, but we still have to include a blank
<col> element — if we didn't, the styling would just be applied to the first column also.
If we wanted to apply the styling information to both columns, we could just include one
<col> element with a span attribute on it, like this:
<colgroup> <col style="background-color: yellow" span="2"> </colgroup>
span takes a unitless number value that specifies the number of columns you want the styling to apply to.
Active learning: colgroup and col
Now it's time to have a go yourself.
Below you can see the timetable of a languages teacher. On Friday she has a new class teaching Dutch all day, but she also teaches German for a few periods on Tuesday and Thursdays. She wants to highlight the columns containing the days she is teaching.
Recreate the table by following the steps below.
- First, make a local copy of our timetable.html file in a new directory on your local machine. The HTML contains the same table you saw above, minus the column styling information.
- Add a <colgroup> element at the top of the table, just underneath the <table> tag, in which you can add your <col> elements (see the remaining steps below).
- The first two columns need to be left unstyled.
- Add a background color to the third column. The value for your
- Set a separate width on the fourth column. The value for your
- Add a background color to the fifth column. The value for your
- Add a different background color plus a border to the sixth column, to signify that this is a special day and she's teaching a new class. The values for your
background-color:#DCC48E; border:4px solid #C1437A;
- The last two days are free days, so just set them to no background color but a set width; the value for the
That just about wraps up the basics of HTML Tables. In the next article we will look at some slightly more advanced table features, and start to think how accessible they are for visually impaired people. |
180 Degree Angle
The 180-degree angle is a straight angle as it forms a straight line. It is exactly half of the full angle (360-degree angle). If we talk about a real-life example of a 180-degree angle, then a perfect example is the angle between the two hands of a clock at 6 o'clock. The angle between the two hands of the clock is 180° because it forms a straight line.
|1.||What is 180 Degree Angle?|
|2.||180 Degree Angle Name|
|3.||How to Draw 180 Degree Angle?|
|4.||180 Degree Angle Examples|
|6.||FAQs on 180 Degree Angle|
What is 180 Degree Angle?
A 180-degree angle is a straight angle and it is exactly half of a revolution. It is also called a half-circle angle. A straight angle is produced by a straight line. The two arms of the angle making 180 degrees are exactly opposite to each other from the common vertex. A 180-degree angle reverses the direction of a point. A 90-degree angle is half of a 180-degree angle and is known as a right angle. The image given below shows how a 180-degree angle looks.
In the above image, rays AO and OB share a common point O and are opposite to each other. AB is a straight line, forming an angle of 180 degrees.
180 Degree Angle Name
An angle that measures 180 degrees is called a straight angle. Whenever we construct a 180-degree angle, it always forms a straight line, that is why it is known as a straight angle. There are different names for angles of different measurements. For example, half of a 180-degree angle, i.e a 90-degree angle is known as a right angle in geometry. Similarly, double of 180 degrees, that is 360-degree angle is known as a complete angle. Angles that are less than 180 degrees are categorized as acute and obtuse angles.
How to Draw 180 Degree Angle?
The 180-degree angle can be drawn with the help of a protractor and compass.
Constructing 180-Degree Angle Using a Protractor
Follow the given steps to construct a 180-degree angle with the help of a protractor.
- Step 1: Draw a line segment OA
- Step 2: Place the protractor at the point O
- Step 3: In the inner circle of the protractor, look for 180° reading and with a pencil mark a dot and name it C.
- Step 4: Join O and C. Now, ∠AOC=180 degree angle.
Constructing 180-Degree Angle With a Compass
Follow the given steps to construct a 180-degree angle with the help of a compass.
- Step 1: Draw a straight line with help of a ruler and name it AB.
- Step 2: Mark a point O anywhere between A and B.
- Step 3: Take O as a center point, draw an arc of any radius with help of a compass, the arc should be from the left of point O to the right of O, or vice-versa, touching the line AB.
- Step 4: The arc cuts the straight line. Mark the cuts as points C and D.
- Step 5: Thus, the angle COD is 180 degrees angle.
- The 180-degree angle is also called a straight angle, half-circle, or semi-circle.
- A 180-degree angle can be constructed with a protractor or a compass.
Related Articles on 180 Degree Angles
Check out these interesting articles to know more about the 180-degree angles and its related topics.
180 Degree Angle Examples
Example 1: Can you help Alex find out the difference between a 180° angle and a 90° angle?
Solution: The 180-degree angle is a straight line, known as a semi-circle. A 90-degree angle is one-fourth of a circle. 180 degrees form a straight line, while 90 degrees form a perpendicular line.
Example 2: If a 180-degree angle is divided into two parts, and one angle measures 70 degrees, then what is the other angle?
Solution: Let the unknown angle be b and the given angle be a = 70°.
a + b = 180°
70° + b = 180°
b = 180° - 70° = 110°
Therefore, the other angle is 110°.
Example 3: There are three angles forming a straight angle together - angle A, angle B, and angle C. If angle A = 30 degrees, angle B = 90 degrees, then what is the measurement of angle C?
Solution: We know that a straight angle measures 180 degrees. That means angle A + angle B + angle C = 180 degrees
30° + 90° + C = 180°
C = 180° - 90° - 30° = 60°
Therefore, the measurement of angle C is 60 degrees.
FAQs on 180 Degree Angle
What are 180-Degree Angles in Real Life?
180° angles are widely seen in real life. In everyday life, 180-degree angles are used for constructing structures, highways, houses, and sports facilities, by engineers and architects. Carpenters make benches, tables, and sofas using straight angles.
What does a 180-Degree Angle Look Like?
A 180-degree angle looks like a straight line because the rays (arms) that form the angle point in exactly opposite directions, and together they make a half revolution about the common vertex.
How Many 180 Degree Angles does it take to Make a Full Turn?
It takes two 180-degree angles to make a full turn. A 360-degree angle is a full rotation (full turn), and since 180 degrees is half of 360 degrees, two 180-degree angles together give a full turn.
What is 180 Degree Angle Called?
The 180-degree angle is known as a straight angle. The sides of the angle are opposite to each other and they make a straight angle on a straight line through the vertex. The appearance of a 180-degree angle is a straight line.
Is a 180 Degree Angle Obtuse?
The measure of an obtuse angle is more than 90° but less than 180°, thus, a 180-degree angle is not an obtuse angle. It is called a straight angle.
How Many 90 Degree Angles are in a 180 Degree Angle?
There are two 90-degree angles in a 180-degree angle, since the sum of two 90-degree angles is equal to 180 degrees.
What are 180 Degrees for a Triangle?
The sum of all the interior angles in a triangle is always equal to 180 degrees. In addition, an interior angle and its adjacent exterior angle lie along a straight line, so together they form a straight angle of 180 degrees. |
What is critical thinking?
In this activity you are going to explore what critical thinking involves and the kind of approach you need to take in your studies.
Read this text about critical thinking carefully. Some important words are missing. Think about the meaning of each part of the text, and for each gap choose a word from the list and type it in a box. Then look at the feedback to check your answers.
| argument | reliable | weak | influencing | evidence | bias | accept | counter | evaluate | supported | develop | actively |
Critical thinking is an important requirement for successful academic study in the UK. It is basically a skill that students already have or might need to , which helps them to think in a particular way. For example, you might be asked to read a book or article from a journal for your course. If you think critically as you read, you will not automatically everything that the writer says at face value. A good academic text is likely to include ideas or opinions; some reasons for these in the form of ; and possibly some further conclusions that the writer wants to draw from this. The writer will have organised all of these elements into an academic .
A reader who is thinking and reading critically will first want to consider whether the ideas and opinions are  with reasons and evidence. An unsupported academic argument is a very  one. The reader will then want to  the reasons and evidence given to decide if they are valid and . Evidence which does not support an idea directly may be questionable and is therefore less effective. The reader will also want to think critically about the ideas or opinions themselves to check that they are logical and reasonable in relation to the topic, and finally, to consider what might be  the writer's ideas or opinions. This is particularly important if there are no  arguments in the writing, offering an opposite view that should be considered. In other words, the critical thinker needs to search for any evidence of  or one-sided argument in the writing.
So the critical thinker should read and respond to a text by asking themselves questions about it before deciding whether or not to accept what the writer is saying. Critical thinking can be used when reading someone else's work or listening to someone else's ideas but it is equally important to apply this skill when writing academic assignments yourself.
What skills are needed for critical thinking?
In this activity you are going to see how much you know about the different skills that are involved in critical thinking. You can also test yourself on what you have understood from the first activity.
Make six descriptions of specific skills that are used in critical thinking by moving an item from the top list to complete an item in the list below. Then check your answers and read the feedback. |
We found 12 resources with the keyterm tiling
Using Tiling to Find Area - Guided LessonLesson Planet
3rd - 5th CCSS: Adaptable
By splitting a rectangle into equal parts, scholars can see area as well as calculate it. First, they determine the area of a rectangle given its side lengths. Then, they use the space provided to segment it into four equal parts,...
Explore Perimeter and Area - Practice 18.1Lesson Planet
3rd - 6th
In this geometry practice worksheet, students find the perimeter and area of 5 rectangles and list the answers in a chart on the page. They use grid paper to draw a square and a rectangle with the given unit measures. They complete 2...
Modeling Multiplication and Division of FractionsLesson Planet
6th - 8th
Create models to demonstrate multiplication and division of fractions. Using fraction tiles to model fractions, pupils explore fractions on a ruler and use pattern blocks to multiply and divide. They also create number lines with fractions.
ExplorA-Pond: 4th Grade Perimeter EstimationLesson Planet
3rd - 4th
Your geometers are used to finding the perimeter of a square or rectangle, so give them something different this time! With this instructional activity, small groups will receive a picture of a shoreline and calculate the perimeter. The...
Frame Yourself: Area and PerimeterLesson Planet
2nd - 4th CCSS: Adaptable
Elementary schoolers are arranged in pairs and view the video Math Works: Measurement: The Difference Between Perimeter and Area. They discuss any prior knowledge they have of the term perimeter and then brainstorm together what the... |
Terms in this set (19)
The distance between a number and zero on the number line. The symbol for absolute value is two vertical lines around the integer as shown in the equation. Ex. |-7| = 7 ; |30| = 30
Cartesian Coordinate Plane
A plane containing two perpendicular axes (x and y) intersecting at a point called origin (0,0).
In an ordered pair, (x, y), that locates a point in a plane, x represents the position of the point with respect to x-axis and is called the x-coordinate, whereas y represents the position of the point with respect to the y-axis and is called y-coordinate.
amount of separation between 2 points.
Any mathematical sentence that contains the symbols > (greater than), < (less than), ≤ (less than or equal to), or ≥ (greater than or equal to).
The set of whole numbers and their opposites
Greatness in size or amount
Two different numbers that have the same absolute value. Example: 4 and -4 are opposite numbers because both have an absolute value of 4.
A pair of numbers, (x, y), that indicate the position of a point on the Cartesian coordinate Plane. An ordered pair gives the exact location of a point on the Cartesian coordinate Plane.
The point of intersection of the vertical and horizontal axes of a Cartesian coordinate plane. The coordinates of the origin are (0, 0). This is where the x- and the y- axis intersect.
A closed figure formed by three or more line segments.
The set of numbers greater than zero
One of the four regions on a Coordinate plane formed by the intersection of the x-axis and the y-axis. The quadrant on the top right corner is called the "First Quadrant" and is represented
by a roman numeral "I". "II quadrant" is the top left quadrant; "III quadrant" is the bottom left quadrant and "IV quadrant" is the bottom right quadrant.
The set of numbers that can be written in the form a/b, where a and b are integers and where b is not equal to zero. In other words, a Rational Number is a real number that can be written as a simple fraction (i.e. as a ratio).
A symbol that indicates whether a number is positive or negative. Example: in -4 , the negative sign shows this number is read as "negative four".
The horizontal number line on the Cartesian coordinate plane.
The vertical number line on the Cartesian coordinate plane
The first number in an ordered pair; the position of a point relative to the vertical axis
The second number in an ordered pair; the position of a point relative to the horizontal axis |
Hey guys, what is up. In this particular post, we will be talking about what operators in c++ are and the types of operators, with examples.
OPERATORS IN C++
Operators in c++ are symbols that trigger an action on variables and values to perform mathematical or logical operations. They come in 6 types:
- Arithmetic operator
- Increment/decrement operator
- Relational operator
- Logical operator
- Assignment operator
- Misc operator
Arithmetic operators in c++
Arithmetic operators are among the most important operators. They take numerical values as input and produce a single numerical value as the result.
Example: A=4, B=2. If we want to perform addition and subtraction using these variables: A+B=6 & A-B=2. Here "+" and "-" are arithmetic operators.
Increment/decrement operators in c++
The increment operator (++) increases the value of an operand by 1.
Types of increment operators:
- Pre-increment: It first increases the value of the operand and then uses the new value in the expression.
- Post-increment: It first uses the current value of the operand in the expression and then increases the value of the operand.
The decrement operator (--) decreases the value of an operand by 1.
Types of decrement operators:
- Pre-decrement: It first decreases the value of the operand and then uses the new value in the expression.
- Post-decrement: It first uses the current value of the operand in the expression and then decreases the value of the operand (see the example below this list).
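Here is a small sketch showing the difference between the pre and post forms (variable names are just for illustration):

#include <iostream>

int main() {
    int a = 5;
    int pre = ++a;   // a is incremented to 6 first, then 6 is assigned to pre
    int b = 5;
    int post = b++;  // 5 is assigned to post first, then b is incremented to 6
    std::cout << a << " " << pre << "\n";   // prints: 6 6
    std::cout << b << " " << post << "\n";  // prints: 6 5
    return 0;
}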
Relational operators in c++ are used to define the relationship between 2 operands. These operators compare values and return true or false as the result.
Let us understand this with an example taking A=8, B=9
|==||Gives result as true if values of 2 operands/variables are equal.||A==B is not true|
|!=||Gives result as true if values of 2 variables/operands are not equal.||A != B is true|
|<||Gives the result as true if the variable on the right-hand-side is greater than that of the left-hand-side||A<B is true|
|>||Gives the result as true if the variable on the left-hand-side is greater than that of the right-hand-side||A>B is false|
|<=||Gives the result as true if the left-hand-side variable is less than or equal to the right-hand-side variable||A<=B is true|
|>=||Gives the result as true if the left-hand-side variable is greater than or equal to the right-hand-side variable||A>=B is false|
Whenever one or more expressions need to be combined, we use logical operators. We have 3 fundamental logical operators: ! (not), && (and), || (or).
Not (!): It is a unary operator. It reverses the value of the variable/operand.
And (&&): It is a binary operator which operates with 2 operands. Gives true if both the variables are non-zero. In simple words, it can be explained as the multiplication of values of both operands.
|A||B||A && B|
|0||0||0|
|0||1||0|
|1||0||0|
|1||1||1|
Or (||): It is also a binary operator which operates with 2 operands. Gives result as true when at least one of the operands is non-zero. In simple words, it is the addition of values of both operands.
|A||B||A || B|
|0||0||0|
|0||1||1|
|1||0||1|
|1||1||1|
- (5<3 && 6!=4) Answer = False
- !(4==5) Answer = True
- (4<6 || 7>3 && 5!=6) Answer = True
These three answers are verified in the short program below.
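As a quick check, the following sketch prints the three answers given above (the extra parentheses around the && part only make the precedence explicit; && already binds more tightly than ||):

#include <iostream>

int main() {
    std::cout << std::boolalpha;
    std::cout << (5 < 3 && 6 != 4) << "\n";            // false
    std::cout << (!(4 == 5)) << "\n";                  // true
    std::cout << (4 < 6 || (7 > 3 && 5 != 6)) << "\n"; // true
    return 0;
}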
Assignment operators are used to assign a value to a variable. The main forms are listed in the table below, followed by a short example.
|=||Assigns the value of the right-hand-side to the left-hand-side.||A=B. In this, value of B will be stored in A.|
|+=||Adds RHS to the LHS and assigns the result to the left-hand-side.||A+=B. In this, A=A+B.|
|-=||Subtracts RHS from LHS and assigns the result to LHS.||A-=B. In this, A=A-B.|
|*=||Multiplies RHS with LHS and assigns the result to LHS.||A*=B. In this, A=A*B.|
|/=||Divides RHS with LHS and assigns the result to LHS.||A/=B. In this, A=A/B.|
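A minimal sketch of the compound assignment operators from the table (the starting values are chosen only for illustration):

#include <iostream>

int main() {
    int A = 8, B = 2;
    A += B;  // A = A + B -> 10
    A -= B;  // A = A - B -> 8
    A *= B;  // A = A * B -> 16
    A /= B;  // A = A / B -> 8
    std::cout << A << "\n";  // prints: 8
    return 0;
}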
The remaining misc operators are listed below, followed by a short example.
|sizeof()||It gives the size (in bytes) of a variable.||If A is a char variable, then sizeof(A) will be 1.|
|Condition?X:Y||If the condition is true, then X will be executed otherwise Y will be executed.||5>3? true:false. In this case, true will be printed.|
|Comma (,)||It evaluates a sequence of expressions from left to right; the value of the whole expression is the value of the last expression in the comma-separated list.|
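And a short sketch of these misc operators (the sizes shown in the comments are typical, but sizeof results can vary by platform):

#include <iostream>

int main() {
    char c = 'x';
    int  n = 42;
    std::cout << sizeof(c) << "\n";                   // 1 byte
    std::cout << sizeof(n) << "\n";                   // typically 4 bytes
    std::cout << (5 > 3 ? "true" : "false") << "\n";  // ternary: prints "true"
    int y = (n++, n + 1);                             // comma: evaluates n++ first, then n + 1; y = 44
    std::cout << y << "\n";
    return 0;
}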
Precedence of operators (from highest to lowest):
|Post increment (++) and post decrement (--)|
|(++) Pre-increment, (--) pre-decrement, sizeof, unary plus (+) and unary minus (-)|
|Multiplication (*), division (/) and modulus (%)|
|Addition (+) and subtraction (-)|
|Less than (<), Less than or equal (<=), Greater than (>) and greater than or equal (>=)|
|Equal (==) and not equal (!=)|
|Logical And (&&)|
|Logical Or (||)|
|Assignment operators (=, +=, -=, *=, /=)|
|Comma operator (,)|
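A quick illustration of this ordering: multiplication groups before addition, && groups before ||, and parentheses override both:

```cpp
#include <iostream>

int main() {
    int a = 2 + 3 * 4;                  // * first: a = 2 + 12 = 14
    int b = (2 + 3) * 4;                // parentheses override precedence: b = 20
    bool c = 4 < 6 || 7 > 3 && 5 != 6;  // grouped as 4 < 6 || (7 > 3 && 5 != 6)
    std::cout << a << ' ' << b << ' ' << std::boolalpha << c << '\n';  // 14 20 true
    return 0;
}
```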
In the upcoming post, you will get information related to big data analytics technology. If you haven't read the post on loops in C++ yet, go and read it first. You can read previous posts by scrolling down, and stay tuned for the upcoming post.
2D And 3D Shapes And Their Properties: Explained For Primary School Teachers, Parents And Kids
Here we provide a summary of the 2D and 3D shapes covered in the maths curriculum at primary school, with a specific focus on the properties of shapes that teachers and parents can help children learn and understand.
For a more in-depth look at shapes, the following articles are recommended:
- What Are 2D Shapes And Which 2D Shapes Do Kids Learn At Primary School?
- What Are 3D Shapes And Which 3D Shapes Do Kids Learn At Primary School?
- Regular And Irregular Shapes
What are the properties of 2D shapes?
2D shapes have two dimensions, such as width and height. We will go into more detail classifying these below.
What are the properties of 3D shapes?
3D shapes have three dimensions, such as width, height and depth. We will go into more detail classifying these below.
When will children learn about the properties of 2D and 3D shapes?
Here’s what the National Curriculum expects to be taught about the properties of shapes, separated by key stage:
KS1 children should be able to:
- Develop their ability to recognise, describe, draw, compare and sort different shapes and use the related vocabulary.
Lower KS2 children should be able to:
- Draw with increasing accuracy and develop mathematical reasoning so they can analyse shapes and their properties, and confidently describe the relationships between them.
Upper KS2 children should be able to:
- Classify shapes with increasingly complex geometric properties and learn the vocabulary they need to describe them.
Read more about sorting shapes: What Is A Carroll Diagram?
Below are some of the shapes children will need to know, including their properties, such as the number of sides.
Properties of 2D shapes
- A semi-circle has 2 sides; 1 curved side and 1 straight side. The full arc is a 180° angle.
Triangles (3-sided shapes)
- An equilateral triangle is a regular triangle and each angle equals 60°.
- A right-angled triangle is any triangle with one right angle.
- A scalene triangle is an irregular triangle. All sides and angles are different.
- An isosceles triangle has two sides and two angles that are the same.
Quadrilaterals (4-sided shapes)
- A square is a regular quadrilateral and each angle equals 90°.
- A kite has two pairs of equal-length sides and the diagonals cross at right-angles.
- A rectangle has two pairs of parallel straight lines and each angle equals 90°.
- A rhombus has two pairs of parallel lines, as well as equal sides and opposite equal angles.
- A trapezium has one pair of parallel lines.
- A parallelogram has two pairs of parallel lines and opposite equal angles.
- A pentagon is any shape with 5 sides. The interior angles add up to 540°.
- A hexagon is any shape with 6 sides. The interior angles add up to 720°.
- A heptagon or septagon is any shape with 7 sides. The interior angles add up to 900°.
- An octagon is any shape with 8 sides. The interior angles add up to 1080°.
- A nonagon is any shape with 9 sides. The interior angles add up to 1260°.
- A decagon is any shape with 10 sides. The interior angles add up to 1440°.
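A handy check for the angle totals above: for any polygon with n sides, the interior angles add up to (n − 2) × 180°, because the polygon can always be split into n − 2 triangles. A hexagon, for example, gives (6 − 2) × 180° = 720°, matching the value listed.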
Properties of 3D shapes
- A sphere has 1 curved surface.
- A hemisphere has 1 face, 1 curved surface and 1 edge.
- A cone has 1 face, 1 curved surface, 1 edge and 1 vertex.
- A tetrahedron, or triangular-based pyramid, has 4 faces, 6 edges and 4 vertices.
- A square-based pyramid has 5 faces, 8 edges and 5 vertices.
- A cylinder has 2 faces, 1 curved surface and 2 edges.
- A triangular prism has 5 faces, 9 edges and 6 vertices.
- A cube has 6 faces, 12 edges and 8 vertices.
- A cuboid has 6 faces, 12 edges and 8 vertices.
- A pentagonal prism has 7 faces, 15 edges and 10 vertices.
- A hexagonal prism has 8 faces, 18 edges and 12 vertices.
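A useful pattern for checking the lists above: for any shape made only of flat faces (the pyramids, prisms, cube and cuboid here), faces + vertices − edges = 2 (Euler's formula). A cube, for example, gives 6 + 8 − 12 = 2, and a triangular prism gives 5 + 6 − 9 = 2.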
Properties of shapes questions
Put your children’s problem solving to the test!
1. Which of these shapes is a pentagon?
(Answer: Bottom left)
2. Which shape has exactly 5 faces?
3. These two shaded triangles are each inside a regular hexagon. In each hexagon, is the triangle equilateral, isosceles or scalene?
(Answer: 1st = isosceles / 2nd = scalene)
4. Here is a drawing of a 3D shape.
Complete the table.
(Answer: Faces = 6 / Vertices = 8 / Edges = 12)
5. Is this rhombus a regular quadrilateral? Explain how you know.
(Answer: No – not all angles are the same)
Properties of shapes worksheets
Use these related worksheets for an interactive approach to shapes in the classroom, including real life examples and everyday objects!
- Year 2 Independent Recap – Properties of 2D Shapes Worksheet
- Year 2 Independent Recap – Properties of 3D Shapes Worksheet
- Year 5 Ready-to-go Lesson Slides – Properties of Shape Worksheet
Permian–Triassic extinction event
The Permian–Triassic (P–Tr or P–T) extinction event, colloquially known as the Great Dying, the End-Permian Extinction or the Great Permian Extinction, occurred about 252 Ma (million years) ago, forming the boundary between the Permian and Triassic geologic periods, as well as the Paleozoic and Mesozoic eras. It is the Earth's most severe known extinction event, with up to 96% of all marine species and 70% of terrestrial vertebrate species becoming extinct. It is the only known mass extinction of insects. Some 57% of all biological families and 83% of all genera became extinct. Because so much biodiversity was lost, the recovery of life on Earth took significantly longer than after any other extinction event, possibly up to 10 million years. Studies in Bear Lake County near Paris, Idaho showed a quick and dynamic rebound in a marine ecosystem, illustrating the remarkable resilience of life.
There is evidence for one to three distinct pulses, or phases, of extinction. Suggested mechanisms for the latter include one or more large meteor impact events, massive volcanism such as that of the Siberian Traps, and the ensuing coal or gas fires and explosions, and a runaway greenhouse effect triggered by sudden release of methane from the sea floor due to methane clathrate dissociation according to the clathrate gun hypothesis or methane-producing microbes known as methanogens. Possible contributing gradual changes include sea-level change, increasing anoxia, increasing aridity, and a shift in ocean circulation driven by climate change.
Dating the extinction
Until 2000, it was thought that rock sequences spanning the Permian–Triassic boundary were too few and contained too many gaps for scientists to reliably determine its details. However, it is now possible to date the extinction with millennial precision. U–Pb zircon dates from five volcanic ash beds from the Global Stratotype Section and Point for the Permian–Triassic boundary at Meishan, China, establish a high-resolution age model for the extinction – allowing exploration of the links between global environmental perturbation, carbon cycle disruption, mass extinction, and recovery at millennial timescales. The extinction occurred between 251.941 ± 0.037 and 251.880 ± 0.031 Ma, a duration of 60 ± 48 ka. A large (approximately 0.9%), abrupt global decrease in the ratio of the stable isotope 13C to that of 12C coincides with this extinction, and is sometimes used to identify the Permian–Triassic boundary in rocks that are unsuitable for radiometric dating. Further evidence for environmental change around the P–Tr boundary suggests an 8 °C (14 °F) rise in temperature, and an increase in CO2 levels by 2000 ppm (for comparison, the concentration immediately before the industrial revolution was 280 ppm and the amount today is about 410 ppm). There is also evidence of increased ultraviolet radiation reaching the earth, causing the mutation of plant spores.
It has been suggested that the Permian–Triassic boundary is associated with a sharp increase in the abundance of marine and terrestrial fungi, caused by the sharp increase in the amount of dead plants and animals fed upon by the fungi. For a while this "fungal spike" was used by some paleontologists to identify the Permian–Triassic boundary in rocks that are unsuitable for radiometric dating or lack suitable index fossils, but even the proposers of the fungal spike hypothesis pointed out that "fungal spikes" may have been a repeating phenomenon created by the post-extinction ecosystem in the earliest Triassic. The very idea of a fungal spike has been criticized on several grounds, including: Reduviasporonites, the most common supposed fungal spore, was actually a fossilized alga; the spike did not appear worldwide; and in many places it did not fall on the Permian–Triassic boundary. The algae, which were misidentified as fungal spores, may even represent a transition to a lake-dominated Triassic world rather than an earliest Triassic zone of death and decay in some terrestrial fossil beds. Newer chemical evidence agrees better with a fungal origin for Reduviasporonites, diluting these critiques.
Uncertainty exists regarding the duration of the overall extinction and about the timing and duration of various groups' extinctions within the greater process. Some evidence suggests that there were multiple extinction pulses or that the extinction was spread out over a few million years, with a sharp peak in the last million years of the Permian. Statistical analyses of some highly fossiliferous strata in Meishan, Zhejiang Province in southeastern China, suggest that the main extinction was clustered around one peak. Recent research shows that different groups became extinct at different times; for example, while difficult to date absolutely, ostracod and brachiopod extinctions were separated by 670 to 1170 thousand years. In a well-preserved sequence in east Greenland, the decline of animals is concentrated in a period 10 to 60 thousand years long, with plants taking an additional several hundred thousand years to show the full impact of the event.
An older theory, still supported in some recent papers, is that there were two major extinction pulses 9.4 million years apart, separated by a period of extinctions well above the background level, and that the final extinction killed off only about 80% of marine species alive at that time while the other losses occurred during the first pulse or the interval between pulses. According to this theory one of these extinction pulses occurred at the end of the Guadalupian epoch of the Permian. For example, all but one of the surviving dinocephalian genera died out at the end of the Guadalupian, as did the Verbeekinidae, a family of large-size fusuline foraminifera. The impact of the end-Guadalupian extinction on marine organisms appears to have varied between locations and between taxonomic groups; brachiopods and corals had severe losses.
| Marine extinctions | Genera extinct | Notes |
| --- | --- | --- |
| Eurypterids | 100% | May have become extinct shortly before the P–Tr boundary |
| Trilobites | 100% | In decline since the Devonian; only 2 genera living before the extinction |
| Brachiopods | 96% | Orthids and productids died out |
| Bryozoans | 79% | Fenestrates, trepostomes, and cryptostomes died out |
| Acanthodians | 100% | In decline since the Devonian, with only one living family |
| Anthozoans | 96% | Tabulate and rugose corals died out |
| Blastoids | 100% | May have become extinct shortly before the P–Tr boundary |
| Crinoids | 98% | Inadunates and camerates died out |
| Foraminiferans | 97% | Fusulinids died out, but were almost extinct before the catastrophe |
Marine invertebrates suffered the greatest losses during the P–Tr extinction. Evidence of this was found in samples from south China sections at the P–Tr boundary. Here, 286 out of 329 marine invertebrate genera disappear within the final 2 sedimentary zones containing conodonts from the Permian. The decrease in diversity was probably caused by a sharp increase in extinctions, rather than a decrease in speciation.
The extinction primarily affected organisms with calcium carbonate skeletons, especially those reliant on stable CO2 levels to produce their skeletons. These organisms were susceptible to the effects of the ocean acidification that resulted from increased atmospheric CO2.
Among benthic organisms the extinction event multiplied background extinction rates, and therefore caused maximum species loss to taxa that had a high background extinction rate (by implication, taxa with a high turnover). The extinction rate of marine organisms was catastrophic.
Surviving marine invertebrate groups include: articulate brachiopods (those with a hinge), which have undergone a slow decline in numbers since the P–Tr extinction; the Ceratitida order of ammonites; and crinoids ("sea lilies"), which very nearly became extinct but later became abundant and diverse.
The groups with the highest survival rates generally had active control of circulation, elaborate gas exchange mechanisms, and light calcification; more heavily calcified organisms with simpler breathing apparatuses suffered the greatest loss of species diversity. In the case of the brachiopods, at least, surviving taxa were generally small, rare members of a formerly diverse community.
The ammonoids, which had been in a long-term decline for the 30 million years since the Roadian (middle Permian), suffered a selective extinction pulse 10 million years before the main event, at the end of the Capitanian stage. In this preliminary extinction, which greatly reduced disparity, or the range of different ecological guilds, environmental factors were apparently responsible. Diversity and disparity fell further until the P–Tr boundary; the extinction here (P–Tr) was non-selective, consistent with a catastrophic initiator. During the Triassic, diversity rose rapidly, but disparity remained low.
The range of morphospace occupied by the ammonoids, that is, their range of possible forms, shapes or structures, became more restricted as the Permian progressed. A few million years into the Triassic, the original range of ammonoid structures was once again reoccupied, but the parameters were now shared differently among clades.
The Permian had great diversity in insect and other invertebrate species, including the largest insects ever to have existed. The end-Permian is the only known mass extinction of insects, with eight or nine insect orders becoming extinct and ten more greatly reduced in diversity. Palaeodictyopteroids (insects with piercing and sucking mouthparts) began to decline during the mid-Permian; these extinctions have been linked to a change in flora. The greatest decline occurred in the Late Permian and was probably not directly caused by weather-related floral transitions.
Most fossil insect groups found after the Permian–Triassic boundary differ significantly from those before. Of Paleozoic insect groups, only the Glosselytrodea, Miomoptera, and Protorthoptera have been discovered in deposits from after the extinction. The caloneurodeans, monurans, paleodictyopteroids, protelytropterans, and protodonates became extinct by the end of the Permian. In well-documented Late Triassic deposits, fossils overwhelmingly consist of modern fossil insect groups.
Plant ecosystem response
The geological record of terrestrial plants is sparse and based mostly on pollen and spore studies. Plants are relatively immune to mass extinction, with the impact of all the major mass extinctions "insignificant" at a family level. Even the reduction observed in species diversity (of 50%) may be mostly due to taphonomic processes. However, a massive rearrangement of ecosystems does occur, with plant abundances and distributions changing profoundly and all the forests virtually disappearing; the Palaeozoic flora scarcely survived this extinction.
At the P–Tr boundary, the dominant floral groups changed, with many groups of land plants entering abrupt decline, such as Cordaites (gymnosperms) and Glossopteris (seed ferns). Dominant gymnosperm genera were replaced post-boundary by lycophytes—extant lycophytes are recolonizers of disturbed areas.
Palynological or pollen studies from East Greenland of sedimentary rock strata laid down during the extinction period indicate dense gymnosperm woodlands before the event. At the same time that marine invertebrate macrofauna declined, these large woodlands died out and were followed by a rise in diversity of smaller herbaceous plants including Lycopodiophyta, both Selaginellales and Isoetales. Later, other groups of gymnosperms again become dominant but again suffered major die offs. These cyclical flora shifts occurred a few times over the course of the extinction period and afterwards. These fluctuations of the dominant flora between woody and herbaceous taxa indicate chronic environmental stress resulting in a loss of most large woodland plant species. The successions and extinctions of plant communities do not coincide with the shift in δ13C values, but occurred many years after. The recovery of gymnosperm forests took 4–5 million years.
No coal deposits are known from the Early Triassic, and those in the Middle Triassic are thin and low-grade. This "coal gap" has been explained in many ways. It has been suggested that new, more aggressive fungi, insects and vertebrates evolved, and killed vast numbers of trees. These decomposers themselves suffered heavy losses of species during the extinction, and are not considered a likely cause of the coal gap. It could simply be that all coal forming plants were rendered extinct by the P–Tr extinction, and that it took 10 million years for a new suite of plants to adapt to the moist, acid conditions of peat bogs. Abiotic factors (factors not caused by organisms), such as decreased rainfall or increased input of clastic sediments, may also be to blame.
On the other hand, the lack of coal may simply reflect the scarcity of all known sediments from the Early Triassic. Coal-producing ecosystems, rather than disappearing, may have moved to areas where we have no sedimentary record for the Early Triassic. For example, in eastern Australia a cold climate had been the norm for a long period, with a peat mire ecosystem adapted to these conditions. Approximately 95% of these peat-producing plants went locally extinct at the P–Tr boundary; interestingly, coal deposits in Australia and Antarctica disappear significantly before the P–Tr boundary.
There is enough evidence to indicate that over two-thirds of terrestrial labyrinthodont amphibians, sauropsid ("reptile") and therapsid ("proto-mammal") families became extinct. Large herbivores suffered the heaviest losses.
All Permian anapsid reptiles died out except the procolophonids (although testudines have morphologically anapsid skulls, they are now thought to have separately evolved from diapsid ancestors). Pelycosaurs died out before the end of the Permian. Too few Permian diapsid fossils have been found to support any conclusion about the effect of the Permian extinction on diapsids (the "reptile" group from which lizards, snakes, crocodilians, and dinosaurs (including birds) evolved).
The groups that survived suffered extremely heavy losses of species, and some terrestrial vertebrate groups very nearly became extinct at the end-Permian. Some of the surviving groups did not persist for long past this period, while others that barely survived went on to produce diverse and long-lasting lineages. Yet it took 30 million years for the terrestrial vertebrate fauna to fully recover both numerically and ecologically.
Possible explanations of these patterns
An analysis of marine fossils from the Permian's final Changhsingian stage found that marine organisms with low tolerance for hypercapnia (high concentration of carbon dioxide) had high extinction rates, while the most tolerant organisms had very slight losses.
The most vulnerable marine organisms were those that produced calcareous hard parts (i.e., from calcium carbonate) and had low metabolic rates and weak respiratory systems—notably calcareous sponges, rugose and tabulate corals, calcite-depositing brachiopods, bryozoans, and echinoderms; about 81% of such genera became extinct. Close relatives without calcareous hard parts suffered only minor losses, for example sea anemones, from which modern corals evolved. Animals with high metabolic rates, well-developed respiratory systems, and non-calcareous hard parts had negligible losses—except for conodonts, in which 33% of genera died out.
This pattern is consistent with what is known about the effects of hypoxia, a shortage but not a total absence of oxygen. However, hypoxia cannot have been the only killing mechanism for marine organisms. Nearly all of the continental shelf waters would have had to become severely hypoxic to account for the magnitude of the extinction, but such a catastrophe would make it difficult to explain the very selective pattern of the extinction. Models of the Late Permian and Early Triassic atmospheres show a significant but protracted decline in atmospheric oxygen levels, with no acceleration near the P–Tr boundary. Minimum atmospheric oxygen levels in the Early Triassic are never less than present day levels—the decline in oxygen levels does not match the temporal pattern of the extinction.
Marine organisms are more sensitive to changes in CO2 (carbon dioxide) levels than are terrestrial organisms for a variety of reasons. CO2 is 28 times more soluble in water than is oxygen. Marine animals normally function with lower concentrations of CO2 in their bodies than land animals, as the removal of CO2 in air-breathing animals is impeded by the need for the gas to pass through the respiratory system's membranes (lungs' alveolus, tracheae, and the like), even when CO2 diffuses more easily than oxygen. In marine organisms, relatively modest but sustained increases in CO2 concentrations hamper the synthesis of proteins, reduce fertilization rates, and produce deformities in calcareous hard parts. In addition, an increase in CO2 concentration is inevitably linked to ocean acidification, consistent with the preferential extinction of heavily calcified taxa and other signals in the rock record that suggest a more acidic ocean. The decrease in ocean pH is calculated to be up to 0.7 units.
It is difficult to analyze extinction and survival rates of land organisms in detail, because few terrestrial fossil beds span the Permian–Triassic boundary. Triassic insects are very different from those of the Permian, but a gap in the insect fossil record spans approximately 15 million years from the late Permian to early Triassic. The best-known record of vertebrate changes across the Permian–Triassic boundary occurs in the Karoo Supergroup of South Africa, but statistical analyses have so far not produced clear conclusions. However, analysis of the fossil river deposits of the floodplains indicate a shift from meandering to braided river patterns, indicating an abrupt drying of the climate. The climate change may have taken as little as 100,000 years, prompting the extinction of the unique Glossopteris flora and its herbivores, followed by the carnivorous guild. End-Permian extinctions did not occur at an instantaneous time horizon; particularly, floral extinction was delayed in time.
Earlier analyses indicated that life on Earth recovered quickly after the Permian extinctions, but this was mostly in the form of disaster taxa, opportunist organisms such as the hardy Lystrosaurus. Research published in 2006 indicates that the specialized animals that formed complex ecosystems, with high biodiversity, complex food webs and a variety of niches, took much longer to recover. It is thought that this long recovery was due to the successive waves of extinction, which inhibited recovery, and prolonged environmental stress to organisms, which continued into the Early Triassic. Research indicates that recovery did not begin until the start of the mid-Triassic, 4 to 6 million years after the extinction; and some writers estimate that the recovery was not complete until 30 Ma after the P–Tr extinction, i.e. in the late Triassic.
A study published in the journal Science found that during the Great Extinction, ocean surface temperatures reached 40 °C (104 °F) in some places. This explains why recovery took so long: it was too hot for life to survive. Anoxic waters may have also delayed the recovery.
During the early Triassic (4 to 6 million years after the P–Tr extinction), the plant biomass was insufficient to form coal deposits, which implies a limited food mass for herbivores. River patterns in the Karoo changed from meandering to braided, indicating that vegetation there was very sparse for a long time.
Each major segment of the early Triassic ecosystem—plant and animal, marine and terrestrial—was dominated by a small number of genera, which appeared virtually worldwide, for example: the herbivorous therapsid Lystrosaurus (which accounted for about 90% of early Triassic land vertebrates) and the bivalves Claraia, Eumorphotis, Unionites and Promylina. A healthy ecosystem has a much larger number of genera, each living in a few preferred types of habitat.
Disaster taxa took advantage of the devastated ecosystems and enjoyed a temporary population boom and increase in their territory. Microconchids are the dominant component of otherwise impoverished Early Triassic encrusting assemblages. For example: Lingula (a brachiopod); stromatolites, which had been confined to marginal environments since the Ordovician; Pleuromeia (a small, weedy plant); Dicroidium (a seed fern).
Changes in marine ecosystems
Prior to the extinction, about two-thirds of marine animals were sessile and attached to the sea floor. During the Mesozoic, only about half of the marine animals were sessile while the rest were free-living. Analysis of marine fossils from the period indicated a decrease in the abundance of sessile epifaunal suspension feeders such as brachiopods and sea lilies and an increase in more complex mobile species such as snails, sea urchins and crabs.
Before the Permian mass extinction event, both complex and simple marine ecosystems were equally common; after the recovery from the mass extinction, the complex communities outnumbered the simple communities by nearly three to one, and the increase in predation pressure led to the Mesozoic Marine Revolution.
Bivalves were fairly rare before the P–Tr extinction but became numerous and diverse in the Triassic, and one group, the rudist clams, became the Mesozoic's main reef-builders. Some researchers think much of this change happened in the 5 million years between the two major extinction pulses.
Crinoids ("sea lilies") suffered a selective extinction, resulting in a decrease in the variety of their forms. Their ensuing adaptive radiation was brisk, and resulted in forms possessing flexible arms becoming widespread; motility, predominantly a response to predation pressure, also became far more prevalent.
Lystrosaurus, a pig-sized herbivorous dicynodont therapsid, constituted as much as 90% of some earliest Triassic land vertebrate fauna. Smaller carnivorous cynodont therapsids also survived, including the ancestors of mammals. In the Karoo region of southern Africa, the therocephalians Tetracynodon, Moschorhinus and Ictidosuchoides survived, but do not appear to have been abundant in the Triassic.
Archosaurs (which included the ancestors of dinosaurs and crocodilians) were initially rarer than therapsids, but they began to displace therapsids in the mid-Triassic. In the mid to late Triassic, the dinosaurs evolved from one group of archosaurs, and went on to dominate terrestrial ecosystems during the Jurassic and Cretaceous. This "Triassic Takeover" may have contributed to the evolution of mammals by forcing the surviving therapsids and their mammaliform successors to live as small, mainly nocturnal insectivores; nocturnal life probably forced at least the mammaliforms to develop fur and higher metabolic rates, while losing part of the differential color-sensitive retinal receptors reptilians and birds preserved.
Some temnospondyl amphibians made a relatively quick recovery, in spite of nearly becoming extinct. Mastodonsaurus and trematosaurians were the main aquatic and semiaquatic predators during most of the Triassic, some preying on tetrapods and others on fish.
Land vertebrates took an unusually long time to recover from the P–Tr extinction; palaeontologist Michael Benton estimated the recovery was not complete until 30 million years after the extinction, i.e. not until the Late Triassic, in which dinosaurs, pterosaurs, crocodiles, archosaurs, amphibians, and mammaliforms were abundant and diverse.
Pinpointing the exact cause or causes of the Permian–Triassic extinction event is difficult, mostly because the catastrophe occurred over 250 million years ago, and since then much of the evidence that would have pointed to the cause has been destroyed or is concealed deep within the Earth under many layers of rock. The sea floor is also completely recycled every 200 million years by the ongoing process of plate tectonics and seafloor spreading, leaving no useful indications beneath the ocean.
Scientists have accumulated a fairly significant amount of evidence for causes, and several mechanisms have been proposed for the extinction event. The proposals include both catastrophic and gradual processes (similar to those theorized for the Cretaceous–Paleogene extinction event).
- The catastrophic group includes one or more large bolide impact events, increased volcanism, and sudden release of methane from the sea floor, either due to dissociation of methane hydrate deposits or metabolism of organic carbon deposits by methanogenic microbes.
- The gradual group includes sea level change, increasing anoxia, and increasing aridity.
Any hypothesis about the cause must explain the selectivity of the event, which affected organisms with calcium carbonate skeletons most severely; the long period (4 to 6 million years) before recovery started, and the minimal extent of biological mineralization (despite inorganic carbonates being deposited) once the recovery began.
Evidence that an impact event may have caused the Cretaceous–Paleogene extinction event (Cretaceous–Tertiary) has led to speculation that similar impacts may have been the cause of other extinction events, including the P–Tr extinction, and thus to a search for evidence of impacts at the times of other extinctions and for large impact craters of the appropriate age.
Reported evidence for an impact event from the P–Tr boundary level includes rare grains of shocked quartz in Australia and Antarctica; fullerenes trapping extraterrestrial noble gases; meteorite fragments in Antarctica; and grains rich in iron, nickel and silicon, which may have been created by an impact. However, the accuracy of most of these claims has been challenged. For example, quartz from Graphite Peak in Antarctica, once considered "shocked", has been re-examined by optical and transmission electron microscopy. The observed features were concluded to be not due to shock, but rather to plastic deformation, consistent with formation in a tectonic environment such as volcanism.
An impact crater on the sea floor would be evidence of a possible cause of the P–Tr extinction, but such a crater would by now have disappeared. As 70% of the Earth's surface is currently sea, an asteroid or comet fragment is now perhaps more than twice as likely to hit ocean as it is to hit land. However, Earth's oldest ocean-floor crust is 200 million years old because it is continually destroyed and renewed by spreading and subduction. Thus, craters produced by very large impacts may be masked by extensive flood basalting from below after the crust is punctured or weakened. Yet, subduction should not be entirely accepted as an explanation of why no firm evidence can be found: as with the K-T event, an ejecta blanket stratum rich in siderophilic elements (such as iridium) would be expected to be seen in formations from the time.
A large impact might have triggered other mechanisms of extinction described below, such as the Siberian Traps eruptions at either an impact site or the antipode of an impact site. The abruptness of an impact also explains why more species did not rapidly evolve to survive, as would be expected if the Permian-Triassic event had been slower and less global than a meteorite impact.
Possible impact sites
Several possible impact craters have been proposed as the site of an impact causing the P–Tr extinction, including the 250 km (160 mi) Bedout structure off the northwest coast of Australia and the hypothesized 480 km (300 mi) Wilkes Land crater of East Antarctica. In each case, the idea that an impact was responsible has not been proven and has been widely criticized. In the case of Wilkes Land, the age of this sub-ice geophysical feature is very uncertain – it may be later than the Permian–Triassic extinction.
The 40 km (25 mi) Araguainha crater in Brazil has been most recently dated to 254.7 ± 2.5 million years ago, overlapping with estimates for the Permo-Triassic boundary. Much of the local rock was oil shale. The estimated energy released by the Araguainha impact is insufficient to be a direct cause of the global mass extinction, but the colossal local earth tremors would have released huge amounts of oil and gas from the shattered rock. The resulting sudden global warming might have precipitated the Permian–Triassic extinction event.
In May 1992, Michael R. Rampino published an abstract for the American Geophysical Union noting the discovery of a circular gravity anomaly near the Falkland Islands. He suggested this structure might correspond to an impact crater with a diameter of 250 km (160 mi). In August 2017, Rampino, Maximilliano Rocca and Jaime Baez Presser followed up with a paper providing further seismic and magnetic evidence that the structure is an impact crater. Estimates for the age of the structure range up to 250 million years old. If, in fact, this is an impact crater, it would be substantially larger than the well-known 180 km (110 mi) Chicxulub impact crater associated with a later extinction event.
The final stages of the Permian had two flood basalt events. A small one, the Emeishan Traps in China, occurred at the same time as the end-Guadalupian extinction pulse, in an area close to the equator at the time. The flood basalt eruptions that produced the Siberian Traps constituted one of the largest known volcanic events on Earth and covered over 2,000,000 square kilometres (770,000 sq mi) with lava. The date of the Siberian Traps eruptions and the extinction event are in good agreement.
The Emeishan and Siberian Traps eruptions may have caused dust clouds and acid aerosols, which would have blocked out sunlight and thus disrupted photosynthesis both on land and in the photic zone of the ocean, causing food chains to collapse. The eruptions may also have caused acid rain when the aerosols washed out of the atmosphere. That may have killed land plants and molluscs and planktonic organisms which had calcium carbonate shells. The eruptions would also have emitted carbon dioxide, causing global warming. When all of the dust clouds and aerosols washed out of the atmosphere, the excess carbon dioxide would have remained and the warming would have proceeded without any mitigating effects.
The Siberian Traps had unusual features that made them even more dangerous. Pure flood basalts produce fluid, low-viscosity lava and do not hurl debris into the atmosphere. It appears, however, that 20% of the output of the Siberian Traps eruptions was pyroclastic (consisted of ash and other debris thrown high into the atmosphere), increasing the short-term cooling effect. The basalt lava erupted or intruded into carbonate rocks and into sediments that were in the process of forming large coal beds, both of which would have emitted large amounts of carbon dioxide, leading to stronger global warming after the dust and aerosols settled.
In January 2011, a team, led by Stephen Grasby of the Geological Survey of Canada—Calgary, reported evidence that volcanism caused massive coal beds to ignite, possibly releasing more than 3 trillion tons of carbon. The team found ash deposits in deep rock layers near what is now Buchanan Lake. According to their article, "coal ash dispersed by the explosive Siberian Trap eruption would be expected to have an associated release of toxic elements in impacted water bodies where fly ash slurries developed.... Mafic megascale eruptions are long-lived events that would allow significant build-up of global ash clouds." In a statement, Grasby said, "In addition to these volcanoes causing fires through coal, the ash it spewed was highly toxic and was released in the land and water, potentially contributing to the worst extinction event in earth history." In 2013, a team led by Q. Y. Yang reported that the total amounts of important volatiles emitted from the Siberian Traps are 8.5 × 10⁷ Tg CO2, 4.4 × 10⁶ Tg CO, 7.0 × 10⁶ Tg H2S and 6.8 × 10⁷ Tg SO2; these data support the popular notion that the end-Permian mass extinction on the Earth was caused by the emission of enormous amounts of volatiles from the Siberian Traps into the atmosphere.
Methane hydrate gasification
Scientists have found worldwide evidence of a swift decrease of about 1% in the 13C/12C isotope ratio in carbonate rocks from the end-Permian. This is the first, largest, and most rapid of a series of negative and positive excursions (decreases and increases in 13C/12C ratio) that continues until the isotope ratio abruptly stabilised in the middle Triassic, followed soon afterwards by the recovery of calcifying life forms (organisms that use calcium carbonate to build hard parts such as shells).
- Gases from volcanic eruptions have a 13C/12C ratio about 0.5 to 0.8% below standard (δ13C about −0.5 to −0.8%), but an assessment made in 1995 concluded that the amount required to produce a reduction of about 1.0% worldwide requires eruptions greater by orders of magnitude than any for which evidence has been found. (However, this analysis addressed only CO2 produced by the magma itself, not from interactions with carbon bearing sediments, as later proposed.)
- A reduction in organic activity would extract 12C more slowly from the environment and leave more of it to be incorporated into sediments, thus reducing the 13C/12C ratio. Biochemical processes preferentially use the lighter isotopes since chemical reactions are ultimately driven by electromagnetic forces between atoms and lighter isotopes respond more quickly to these forces, but a study of a smaller drop of 0.3 to 0.4% in 13C/12C (δ13C −3 to −4 ‰) at the Paleocene-Eocene Thermal Maximum (PETM) concluded that even transferring all the organic carbon (in organisms, soils, and dissolved in the ocean) into sediments would be insufficient: even such a large burial of material rich in 12C would not have produced the 'smaller' drop in the 13C/12C ratio of the rocks around the PETM.
- Buried sedimentary organic matter has a 13C/12C ratio 2.0 to 2.5% below normal (δ13C −2.0 to −2.5%). Theoretically, if the sea level fell sharply, shallow marine sediments would be exposed to oxidization. But 6500–8400 gigatons (1 gigaton = 109 metric tons) of organic carbon would have to be oxidized and returned to the ocean-atmosphere system within less than a few hundred thousand years to reduce the 13C/12C ratio by 1.0%, which is not thought to be a realistic possibility. Moreover, sea levels were rising rather than falling at the time of the extinction.
- Rather than a sudden decline in sea level, intermittent periods of ocean-bottom hyperoxia and anoxia (high-oxygen and low- or zero-oxygen conditions) may have caused the 13C/12C ratio fluctuations in the Early Triassic; and global anoxia may have been responsible for the end-Permian blip. The continents of the end-Permian and early Triassic were more clustered in the tropics than they are now, and large tropical rivers would have dumped sediment into smaller, partially enclosed ocean basins at low latitudes. Such conditions favor oxic and anoxic episodes; oxic/anoxic conditions would result in a rapid release/burial, respectively, of large amounts of organic carbon, which has a low 13C/12C ratio because biochemical processes use the lighter isotopes more. That or another organic-based reason may have been responsible for both that and a late Proterozoic/Cambrian pattern of fluctuating 13C/12C ratios.
Prior to consideration of the inclusion of roasting carbonate sediments by volcanism, the only proposed mechanism sufficient to cause a global 1% reduction in the 13C/12C ratio was the release of methane from methane clathrates. Carbon-cycle models confirm that it would have had enough effect to produce the observed reduction. Methane clathrates, also known as methane hydrates, consist of methane molecules trapped in cages of water molecules. The methane, produced by methanogens (microscopic single-celled organisms), has a 13C/12C ratio about 6.0% below normal (δ13C −6.0%). At the right combination of pressure and temperature, it gets trapped in clathrates fairly close to the surface of permafrost and in much larger quantities at continental margins (continental shelves and the deeper seabed close to them). Oceanic methane hydrates are usually found buried in sediments where the seawater is at least 300 m (980 ft) deep. They can be found up to about 2,000 m (6,600 ft) below the sea floor, but usually only about 1,100 m (3,600 ft) below the sea floor.
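As a rough, back-of-the-envelope illustration (not taken from the sources above) of why such isotopically light methane is an efficient lever on the global ratio, a simple two-component mixing estimate can be written, treating the pre-excursion ocean–atmosphere carbon as the baseline:

$$\delta_{\text{mix}} \approx (1 - f)\,\delta_{\text{baseline}} + f\,\delta_{\text{CH}_4}$$

With the baseline taken as zero and clathrate-derived methane about 6.0% below it, a global shift of roughly 1.0% would require f of about 1/6, i.e. on the order of a sixth of the exchangeable carbon coming from methane; the corresponding mass depends entirely on the assumed size of the ocean–atmosphere reservoir, so this is illustrative only.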
The area covered by lava from the Siberian Traps eruptions is about twice as large as was originally thought, and most of the additional area was shallow sea at the time. The seabed probably contained methane hydrate deposits, and the lava caused the deposits to dissociate, releasing vast quantities of methane. A vast release of methane might cause significant global warming since methane is a very powerful greenhouse gas. Strong evidence suggests the global temperatures increased by about 6 °C (10.8 °F) near the equator and therefore by more at higher latitudes: a sharp decrease in oxygen isotope ratios (18O/16O); the extinction of Glossopteris flora (Glossopteris and plants that grew in the same areas), which needed a cold climate, with its replacement by floras typical of lower paleolatitudes.
However, the pattern of isotope shifts expected to result from a massive release of methane does not match the patterns seen throughout the early Triassic. Not only would such a cause require the release of five times as much methane as postulated for the PETM, but it would also have to be reburied at an unrealistically high rate to account for the rapid increases in the 13C/12C ratio (episodes of high positive δ13C) throughout the early Triassic before it was released again several times.
Evidence for widespread ocean anoxia (severe deficiency of oxygen) and euxinia (presence of hydrogen sulfide) is found from the Late Permian to the Early Triassic. Throughout most of the Tethys and Panthalassic Oceans, evidence for anoxia, including fine laminations in sediments, small pyrite framboids, high uranium/thorium ratios, and biomarkers for green sulfur bacteria, appear at the extinction event. However, in some sites, including Meishan, China, and eastern Greenland, evidence for anoxia precedes the extinction. Biomarkers for green sulfur bacteria, such as isorenieratane, the diagenetic product of isorenieratene, are widely used as indicators of photic zone euxinia because green sulfur bacteria require both sunlight and hydrogen sulfide to survive. Their abundance in sediments from the P-T boundary indicates hydrogen sulfide was present even in shallow waters.
This spread of toxic, oxygen-depleted water would have been devastating for marine life, producing widespread die-offs. Models of ocean chemistry show that anoxia and euxinia would have been closely associated with hypercapnia (high levels of carbon dioxide). This suggests that poisoning from hydrogen sulfide, anoxia, and hypercapnia acted together as a killing mechanism. Hypercapnia best explains the selectivity of the extinction, but anoxia and euxinia probably contributed to the high mortality of the event. The persistence of anoxia through the Early Triassic may explain the slow recovery of marine life after the extinction. Models also show that anoxic events can cause catastrophic hydrogen sulfide emissions into the atmosphere (see below).
The sequence of events leading to anoxic oceans may have been triggered by carbon dioxide emissions from the eruption of the Siberian Traps. In that scenario, warming from the enhanced greenhouse effect would reduce the solubility of oxygen in seawater, causing the concentration of oxygen to decline. Increased weathering of the continents due to warming and the acceleration of the water cycle would increase the riverine flux of phosphate to the ocean. The phosphate would have supported greater primary productivity in the surface oceans. The increase in organic matter production would have caused more organic matter to sink into the deep ocean, where its respiration would further decrease oxygen concentrations. Once anoxia became established, it would have been sustained by a positive feedback loop because deep water anoxia tends to increase the recycling efficiency of phosphate, leading to even higher productivity.
Hydrogen sulfide emissions
A severe anoxic event at the end of the Permian would have allowed sulfate-reducing bacteria to thrive, causing the production of large amounts of hydrogen sulfide in the anoxic ocean. Upwelling of this water may have released massive hydrogen sulfide emissions into the atmosphere, which would have poisoned terrestrial plants and animals and severely weakened the ozone layer, exposing much of the life that remained to fatal levels of UV radiation. Indeed, biomarker evidence for anaerobic photosynthesis by Chlorobiaceae (green sulfur bacteria) from the Late-Permian into the Early Triassic indicates that hydrogen sulfide did upwell into shallow waters because these bacteria are restricted to the photic zone and use sulfide as an electron donor.
The hypothesis has the advantage of explaining the mass extinction of plants, which would have added to the methane levels and should otherwise have thrived in an atmosphere with a high level of carbon dioxide. Fossil spores from the end-Permian further support the theory: many show deformities that could have been caused by ultraviolet radiation, which would have been more intense after hydrogen sulfide emissions weakened the ozone layer.
The supercontinent Pangaea
In the mid-Permian (during the Kungurian age of the Permian's Cisuralian epoch), the earth’s major continental plates were joined, forming a supercontinent called Pangaea, which was surrounded by the superocean, Panthalassa.
Oceanic circulation and atmospheric weather patterns during the mid-Permian produced seasonal monsoons near the coasts and an arid climate in the vast continental interior of Pangaea.
The extent of biologically diverse and ecologically productive coastal areas shrank as the supercontinent formed. The elimination of shallow aquatic environments exposed formerly protected organisms of the rich continental shelves to increased environmental volatility.
After the formation of Pangaea (see the diagram "Marine genus biodiversity" at the top of this article), the rate of marine life depletion approached catastrophic levels; however, marine life extinction never reached the rate of the "Big Five" mass extinctions.
Pangaea’s effect on extinctions on land is thought to have been less significant. In fact, the advance of the therapsids and increase in their diversity is attributed to the late Permian, when Pangaea’s global effect was thought to have peaked.
While Pangaea’s formation is known to have initiated a long period of marine life extinction, the significance of its impact on the "Great Dying" and the end of the Permian is uncertain.
A hypothesis published in 2014 posits that a genus of anaerobic methanogenic archaea known as Methanosarcina was responsible for the event. Three lines of evidence suggest that these microbes acquired a new metabolic pathway via gene transfer at about that time, enabling them to efficiently metabolize acetate into methane. That would have led to their exponential reproduction, allowing them to rapidly consume vast deposits of organic carbon that had accumulated in the marine sediment. The result would have been a sharp buildup of methane and carbon dioxide in the Earth's oceans and atmosphere, in a manner that may be consistent with the 13C/12C isotopic record. Massive volcanism facilitated this process by releasing large amounts of nickel, a scarce metal which is a cofactor for enzymes involved in producing methane. On the other hand, in the canonical Meishan sections, the nickel concentration increases somewhat after the δ13C concentrations have begun to fall.
Combination of causes
Possible causes supported by strong evidence appear to describe a sequence of catastrophes, each worse than the last: the Siberian Traps eruptions were bad enough alone, but because they occurred near coal beds and the continental shelf, they also triggered very large releases of carbon dioxide and methane. The resultant global warming may have caused perhaps the most severe anoxic event in the oceans' history: according to this theory, the oceans became so anoxic that anaerobic sulfur-reducing organisms dominated the chemistry of the oceans and caused massive emissions of toxic hydrogen sulfide.
However, there may be some weak links in this chain of events: the changes in the 13C/12C ratio expected to result from a massive release of methane do not match the patterns seen throughout the early Triassic; and the types of oceanic thermohaline circulation that may have existed at the end of the Permian are not likely to have supported deep-sea anoxia.
- Rohde, R.A. & Muller, R.A. (2005). "Cycles in fossil diversity". Nature. 434 (7030): 209–210. Bibcode:2005Natur.434..208R. doi:10.1038/nature03339. PMID 15758998.
- St. Fleur, Nicholas (16 February 2017). "After Earth's Worst Mass Extinction, Life Rebounded Rapidly, Fossils Suggest". The New York Times. Retrieved 17 February 2017.
- ""Great Dying" Lasted 200,000 Years". National Geographic. 23 November 2011. Retrieved 1 April 2014.
- "How a Single Act of Evolution Nearly Wiped Out All Life on Earth". ScienceDaily. 1 April 2014. Retrieved 1 April 2014.
- Shen S.-Z.; et al. (2011). "Calibrating the End-Permian Mass Extinction". Science. 334 (6061): 1367–1372. Bibcode:2011Sci...334.1367S. doi:10.1126/science.1213454. PMID 22096103.
- Benton M J (2005). When life nearly died: the greatest mass extinction of all time. London: Thames & Hudson. ISBN 0-500-28573-X.
- Carl T. Bergstrom; Lee Alan Dugatkin (2012). Evolution. Norton. p. 515. ISBN 978-0-393-92592-0.
- Sahney S; Benton M.J (2008). "Recovery from the most profound mass extinction of all time". Proceedings of the Royal Society B. 275 (1636): 759–765. doi:10.1098/rspb.2007.1370. PMC . PMID 18198148.
- Labandeira CC, Sepkoski JJ (1993). "Insect diversity in the fossil record". Science. 261 (5119): 310–315. Bibcode:1993Sci...261..310L. doi:10.1126/science.11536548. PMID 11536548.
- Sole RV, Newman M (2003). "Extinctions and Biodiversity in the Fossil Record". In Canadell JG, Mooney, HA. Encyclopedia of Global Environmental Change, The Earth System – Biological and Ecological Dimensions of Global Environmental Change (Volume 2). New York: Wiley. pp. 297–391. ISBN 0-470-85361-1.
- "It Took Earth Ten Million Years to Recover from Greatest Mass Extinction". ScienceDaily. 27 May 2012. Retrieved 28 May 2012.
- "Fossils show quick rebound of life after ancient mass extinction". Reuters. 2017-02-15. Retrieved 2017-02-16.
- Jin YG, Wang Y, Wang W, Shang QH, Cao CQ, Erwin DH (2000). "Pattern of Marine Mass Extinction Near the Permian–Triassic Boundary in South China". Science. 289 (5478): 432–436. Bibcode:2000Sci...289..432J. doi:10.1126/science.289.5478.432. PMID 10903200.
- Yin H, Zhang K, Tong J, Yang Z, Wu S. The Global Stratotype Section and Point (GSSP) of the Permian-Triassic Boundary. Episodes. 24. pp. 102–114.
- Yin HF, Sweets WC, Yang ZY, Dickins JM (1992). "Permo-Triassic events in the eastern Tethys–an overview". In Sweet WC. Permo-Triassic events in the eastern Tethys: stratigraphy, classification, and relations with the western Tethys. Cambridge, UK: Cambridge University Press. pp. 1–7. ISBN 0-521-54573-0.
- Darcy E. Ogden & Norman H. Sleep (2011). "Explosive eruption of coal and basalt and the end-Permian mass extinction". Proceedings of the National Academy of Sciences of the United States of America. 109 (1): 59–62. Bibcode:2012PNAS..109...59O. doi:10.1073/pnas.1118675109. PMC . PMID 22184229.
- David L. Chandler – MIT News Office (31 March 2014). "Ancient whodunit may be solved: The microbes did it!". MIT News.
- Payne, J. L.; Lehrmann, D. J.; Wei, J.; Orchard, M. J.; Schrag, D. P.; Knoll, A. H. (2004). "Large Perturbations of the Carbon Cycle During Recovery from the End-Permian Extinction". Science. 305 (5683): 506–9. doi:10.1126/science.1097023. PMID 15273391.
- McElwain, J. C.; Punyasena, S. W. (2007). "Mass extinction events and the plant fossil record". Trends in Ecology & Evolution. 22 (10): 548–557. doi:10.1016/j.tree.2007.09.003. PMID 17919771.
- Retallack, G. J.; Veevers, J. J.; Morante, R. (1996). "Global coal gap between Permian–Triassic extinctions and middle Triassic recovery of peat forming plants". GSA Bulletin. 108 (2): 195–207. doi:10.1130/0016-7606(1996)108<0195:GCGBPT>2.3.CO;2.
- Erwin, D.H (1993). The Great Paleozoic Crisis: Life and Death in the Permian. New York: Columbia University Press. ISBN 0-231-07467-0.
- Burgess, SD (2014). "High-precision timeline for Earth's most severe extinction". Proceedings of the National Academy of Sciences. 111 (9): 3316–3321. Bibcode:2014PNAS..111.3316B. doi:10.1073/pnas.1317692111. PMC . PMID 24516148.
- Magaritz M (1989). "13C minima follow extinction events: a clue to faunal radiation". Geology. 17 (4): 337–340. Bibcode:1989Geo....17..337M. doi:10.1130/0091-7613(1989)017<0337:CMFEEA>2.3.CO;2.
- Krull SJ, Retallack JR (2000). "13C depth profiles from paleosols across the Permian–Triassic boundary: Evidence for methane release". GSA Bulletin. 112 (9): 1459–1472. Bibcode:2000GSAB..112.1459K. doi:10.1130/0016-7606(2000)112<1459:CDPFPA>2.0.CO;2. ISSN 0016-7606.
- Dolenec T, Lojen S, Ramovs A (2001). "The Permian–Triassic boundary in Western Slovenia (Idrijca Valley section): magnetostratigraphy, stable isotopes, and elemental variations". Chemical Geology. 175 (1): 175–190. doi:10.1016/S0009-2541(00)00368-5.
- Musashi M, Isozaki Y, Koike T, Kreulen R (2001). "Stable carbon isotope signature in mid-Panthalassa shallow-water carbonates across the Permo–Triassic boundary: evidence for 13C-depleted ocean". Earth Planet. Sci. Lett. 193: 9–20. Bibcode:2001E&PSL.191....9M. doi:10.1016/S0012-821X(01)00398-3.
- Daily CO2 at Mauna Loa Observatory
- H Visscher; H Brinkhuis; D L Dilcher; W C Elsik; Y Eshet; C V Looy; M R Rampino & A Traverse (1996). "The terminal Paleozoic fungal event: Evidence of terrestrial ecosystem destabilization and collapse". Proceedings of the National Academy of Sciences. 93 (5): 2155–2158. Bibcode:1996PNAS...93.2155V. doi:10.1073/pnas.93.5.2155. PMC . PMID 11607638.
- Foster, C.B.; Stephenson, M.H.; Marshall, C.; Logan, G.A.; Greenwood, P.F. (2002). "A Revision Of Reduviasporonites Wilson 1962: Description, Illustration, Comparison And Biological Affinities". Palynology. 26 (1): 35–58. doi:10.2113/0260035.
- López-Gómez, J. & Taylor, E.L. (2005). "Permian-Triassic Transition in Spain: A multidisciplinary approach". Palaeogeography, Palaeoclimatology, Palaeoecology. 229 (1–2): 1–2. doi:10.1016/j.palaeo.2005.06.028.
- Looy CV, Twitchett RJ, Dilcher DL, Van Konijnenburg-Van Cittert JH, Visscher H (2005). "Life in the end-Permian dead zone". Proceedings of the National Academy of Sciences. 98 (4): 7879–7883. Bibcode:2001PNAS...98.7879L. doi:10.1073/pnas.131218098. PMC . PMID 11427710.
See image 2
- Ward PD; Botha J; Buick R; De Kock MO; Erwin DH; Garrison GH; Kirschvink JL; Smith R (2005). "Abrupt and Gradual Extinction Among Late Permian Land Vertebrates in the Karoo Basin, South Africa" (PDF). Science. 307 (5710): 709–714. Bibcode:2005Sci...307..709W. doi:10.1126/science.1107068. PMID 15661973.
- Retallack, G.J.; Smith, R.M.H.; Ward, P.D. (2003). "Vertebrate extinction across Permian-Triassic boundary in Karoo Basin, South Africa". Bulletin of the Geological Society of America. 115 (9): 1133–1152. Bibcode:2003GSAB..115.1133R. doi:10.1130/B25215.1. Archived from the original on 2008-05-05.
- Sephton, M. A.; Visscher, H.; Looy, C. V.; Verchovsky, A. B.; Watson, J. S. (2009). "Chemical constitution of a Permian-Triassic disaster species". Geology. 37 (10): 875–878. doi:10.1130/G30096A.1.
- Rampino MR, Prokoph A & Adler A (2000). "Tempo of the end-Permian event: High-resolution cyclostratigraphy at the Permian–Triassic boundary". Geology. 28 (7): 643–646. Bibcode:2000Geo....28..643R. doi:10.1130/0091-7613(2000)28<643:TOTEEH>2.0.CO;2. ISSN 0091-7613.
- Wang, S.C.; Everson, P.J. (2007). "Confidence intervals for pulsed mass extinction events". Paleobiology. 33 (2): 324–336. doi:10.1666/06056.1.
- Twitchett RJ, Looy CV, Morante R, Visscher H & Wignall PB (2001). "Rapid and synchronous collapse of marine and terrestrial ecosystems during the end-Permian biotic crisis". Geology. 29 (4): 351–354. Bibcode:2001Geo....29..351T. doi:10.1130/0091-7613(2001)029<0351:RASCOM>2.0.CO;2. ISSN 0091-7613.
- Retallack, G.J.; Metzger, C.A.; Greaver, T.; Jahren, A.H.; Smith, R.M.H.; Sheldon, N.D. (2006). "Middle-Late Permian mass extinction on land". Bulletin of the Geological Society of America. 118 (11–12): 1398–1411. Bibcode:2006GSAB..118.1398R. doi:10.1130/B26011.1.
- Stanley SM & Yang X (1994). "A Double Mass Extinction at the End of the Paleozoic Era". Science. 266 (5189): 1340–1344. Bibcode:1994Sci...266.1340S. doi:10.1126/science.266.5189.1340. PMID 17772839.
- Retallack, G.J., Metzger, C.A., Jahren, A.H., Greaver, T., Smith, R.M.H., and Sheldon, N.D (November–December 2006). "Middle-Late Permian mass extinction on land". GSA Bulletin. 118 (11/12): 1398–1411. Bibcode:2006GSAB..118.1398R. doi:10.1130/B26011.1.
- Ota, A & Isozaki, Y. (March 2006). "Fusuline biotic turnover across the Guadalupian–Lopingian (Middle–Upper Permian) boundary in mid-oceanic carbonate buildups: Biostratigraphy of accreted limestone in Japan". Journal of Asian Earth Sciences. 26 (3–4): 353–368. Bibcode:2006JAESc..26..353O. doi:10.1016/j.jseaes.2005.04.001.
- Shen, S. & Shi, G.R. (2002). "Paleobiogeographical extinction patterns of Permian brachiopods in the Asian-western Pacific region". Paleobiology. 28 (4): 449–463. doi:10.1666/0094-8373(2002)028<0449:PEPOPB>2.0.CO;2. ISSN 0094-8373.
- Wang, X-D & Sugiyama, T. (December 2000). "Diversity and extinction patterns of Permian coral faunas of China". Lethaia. 33 (4): 285–294. doi:10.1080/002411600750053853.
- Racki G (1999). "Silica-secreting biota and mass extinctions: survival processes and patterns". Palaeogeography, Palaeoclimatology, Palaeoecology. 154 (1–2): 107–132. doi:10.1016/S0031-0182(99)00089-9.
- Bambach, R.K.; Knoll, A.H.; Wang, S.C. (December 2004). "Origination, extinction, and mass depletions of marine diversity". Paleobiology. 30 (4): 522–542. doi:10.1666/0094-8373(2004)030<0522:OEAMDO>2.0.CO;2. ISSN 0094-8373.
- Knoll, A.H. (2004). "Biomineralization and evolutionary history. In: P.M. Dove, J.J. DeYoreo and S. Weiner (Eds), Reviews in Mineralogy and Geochemistry," (PDF). Archived from the original (PDF) on 2010-06-20.
- Stanley, S.M. (2008). "Predation defeats competition on the seafloor". Paleobiology. 34 (1): 1–21. doi:10.1666/07026.1. Retrieved 2008-05-13.
- Stanley, S.M. (2007). "An Analysis of the History of Marine Animal Diversity". Paleobiology. 33 (sp6): 1–55. doi:10.1666/06020.1.
- Erwin DH (1993). The great Paleozoic crisis; Life and death in the Permian. Columbia University Press. ISBN 0-231-07467-0.
- McKinney, M.L. (1987). "Taxonomic selectivity and continuous variation in mass and background extinctions of marine taxa". Nature. 325 (6100): 143–145. Bibcode:1987Natur.325..143M. doi:10.1038/325143a0.
- Twitchett RJ, Looy CV, Morante R, Visscher H, Wignall PB (2001). "Rapid and synchronous collapse of marine and terrestrial ecosystems during the end-Permian biotic crisis". Geology. 29 (4): 351–354. Bibcode:2001Geo....29..351T. doi:10.1130/0091-7613(2001)029<0351:RASCOM>2.0.CO;2. ISSN 0091-7613.
- "Permian : The Marine Realm and The End-Permian Extinction". paleobiology.si.edu. Retrieved 2016-01-26.
- "Permian extinction". Encyclopædia Britannica. Retrieved 2016-01-26.
- Knoll, A.H.; Bambach, R.K.; Canfield, D.E.; Grotzinger, J.P. (1996). "Comparative Earth history and Late Permian mass extinction". Science. 273 (5274): 452–457. Bibcode:1996Sci...273..452K. doi:10.1126/science.273.5274.452. PMID 8662528.
- Leighton, L.R.; Schneider, C.L. (2008). "Taxon characteristics that promote survivorship through the Permian–Triassic interval: transition from the Paleozoic to the Mesozoic brachiopod fauna". Paleobiology. 34 (1): 65–79. doi:10.1666/06082.1.
- Villier, L.; Korn, D. (October 2004). "Morphological Disparity of Ammonoids and the Mark of Permian Mass Extinctions". Science. 306 (5694): 264–266. Bibcode:2004Sci...306..264V. doi:10.1126/science.1102127. ISSN 0036-8075. PMID 15472073.
- Saunders, W. B.; Greenfest-Allen, E.; Work, D. M.; Nikolaeva, S. V. (2008). "Morphologic and taxonomic history of Paleozoic ammonoids in time and morphospace". Paleobiology. 34 (1): 128–154. doi:10.1666/07053.1.
- "The Dino Directory – Natural History Museum".
- Cascales-Miñana, B.; Cleal, C. J. (2011). "Plant fossil record and survival analyses". Lethaia. 45: 71. doi:10.1111/j.1502-3931.2011.00262.x.
- Retallack, GJ (1995). "Permian–Triassic life crisis on land". Science. 267 (5194): 77–80. Bibcode:1995Sci...267...77R. doi:10.1126/science.267.5194.77. PMID 17840061.
- Looy, CV Brugman WA Dilcher DL & Visscher H (1999). "The delayed resurgence of equatorial forests after the Permian–Triassic ecologic crisis". Proceedings of the National Academy of Sciences of the United States of America. 96 (24): 13857–13862. Bibcode:1999PNAS...9613857L. doi:10.1073/pnas.96.24.13857. PMC . PMID 10570163.
- Michaelsen P (2002). "Mass extinction of peat-forming plants and the effect on fluvial styles across the Permian–Triassic boundary, northern Bowen Basin, Australia". Palaeogeography, Palaeoclimatology, Palaeoecology. 179 (3–4): 173–188. doi:10.1016/S0031-0182(01)00413-8.
- Maxwell, W. D. (1992). "Permian and Early Triassic extinction of non-marine tetrapods". Palaeontology. 35: 571–583.
- Erwin DH (1990). "The End-Permian Mass Extinction". Annual Review of Ecology and Systematics. 21: 69–91. doi:10.1146/annurev.es.21.110190.000441.
- "Bristol University – News – 2008: Mass extinction".
- Knoll, A.H., Bambach, R.K., Payne, J.L., Pruss, S., and Fischer, W.W. (2007). "Paleophysiology and end-Permian mass extinction" (PDF). Earth and Planetary Science Letters. 256 (3–4): 295–313. Bibcode:2007E&PSL.256..295K. doi:10.1016/j.epsl.2007.02.018. Retrieved 2008-07-04.
- Payne, J.; Turchyn, A.; Paytan, A.; Depaolo, D.; Lehrmann, D.; Yu, M.; Wei, J. (2010). "Calcium isotope constraints on the end-Permian mass extinction". Proceedings of the National Academy of Sciences of the United States of America. 107 (19): 8543–8548. Bibcode:2010PNAS..107.8543P. doi:10.1073/pnas.0914065107. PMC . PMID 20421502.
- Clarkson, M.; Kasemann, S.; Wood, R.; Lenton, T.; Daines, S.; Richoz, S.; Ohnemueller, F.; Meixner, A.; Poulton, S.; Tipper, E. (2015-04-10). "Ocean acidification and the Permo-Triassic mass extinction". Science. 348 (6231): 229–232. doi:10.1126/science.aaa0193. PMID 25859043. Retrieved 2016-10-24.
- Smith, R.M.H. (16 November 1999). "Changing fluvial environments across the Permian-Triassic boundary in the Karoo Basin, South Africa and possible causes of tetrapod extinctions". Palaeogeography, Palaeoclimatology, Palaeoecology. 117 (1–2): 81–104. doi:10.1016/0031-0182(94)00119-S. Retrieved 21 February 2012.
- Chinsamy-Turan (2012). Anusuya, ed. Forerunners of mammals : radiation, histology, biology. Bloomington: Indiana University Press. ISBN 978-0-253-35697-0.
- Visscher, Henk; Looy, Cindy V.; Collinson, Margaret E.; Brinkhuis, Henk; Cittert, Johanna H. A. van Konijnenburg-van; Kürschner, Wolfram M.; Sephton, Mark A. (2004-08-31). "Environmental mutagenesis during the end-Permian ecological crisis". Proceedings of the National Academy of Sciences of the United States of America. 101 (35): 12952–12956. doi:10.1073/pnas.0404472101. ISSN 0027-8424. PMC . PMID 15282373.
- Lehrmann, Daniel J.; Ramezani, Jahandar; Bowring, Samuel A.; Martin, Mark W.; Montgomery, Paul; Enos, Paul; Payne, Jonathan L.; Orchard, Michael J.; Wang Hongmei; Wei Jiayong (December 2006). "Timing of recovery from the end-Permian extinction: Geochronologic and biostratigraphic constraints from south China". Geology. 34 (12): 1053–1056. Bibcode:2006Geo....34.1053L. doi:10.1130/G22827A.1.
- Yadong Sun1,2,*, Michael M. Joachimski3, Paul B. Wignall2, Chunbo Yan1, Yanlong Chen4, Haishui Jiang1, Lina Wang1, Xulong Lai1 (2012). "Lethally Hot Temperatures During the Early Triassic Greenhouse". Science. 338 (6105): 366–370. Bibcode:2012Sci...338..366S. doi:10.1126/science.1224126. PMID 23087244.
- During the greatest mass extinction in Earth’s history the world’s oceans reached 40 °C (104 °F) – lethally hot.
- Lau, Kimberly V.; Maher, Kate; Altiner, Demir; Kelley, Brian M.; Kump, Lee R.; Lehrmann, Daniel J.; Silva-Tamayo, Juan Carlos; Weaver, Karrie L.; Yu, Meiyi; Payne, Jonathan L. (2016). "Marine anoxia and delayed Earth system recovery after the end-Permian extinction". Proceedings of the National Academy of Sciences. 113 (9): 2360–2365. doi:10.1073/pnas.1515080113. PMC . PMID 26884155.
- Ward PD, Montgomery DR, Smith R (2000). "Altered river morphology in South Africa related to the Permian–Triassic extinction". Science. 289 (5485): 1740–1743. Bibcode:2000Sci...289.1740W. doi:10.1126/science.289.5485.1740. PMID 10976065.
- Hallam, A; Wignall, P B (1997). Mass Extinctions and their Aftermath. Oxford University Press. ISBN 978-0-19-854916-1.
- Rodland, DL & Bottjer, DJ (2001). "Biotic Recovery from the End-Permian Mass Extinction: Behavior of the Inarticulate Brachiopod Lingula as a Disaster Taxon". Palaios. 16 (1): 95–101. doi:10.1669/0883-1351(2001)016<0095:BRFTEP>2.0.CO;2. ISSN 0883-1351.
- Zi-qiang W (1996). "Recovery of vegetation from the terminal Permian mass extinction in North China". Review of Palaeobotany and Palynology. 91 (1–4): 121–142. doi:10.1016/0034-6667(95)00069-0.
- Wagner PJ, Kosnik MA, Lidgard S (2006). "Abundance Distributions Imply Elevated Complexity of Post-Paleozoic Marine Ecosystems". Science. 314 (5803): 1289–1292. Bibcode:2006Sci...314.1289W. doi:10.1126/science.1133795. PMID 17124319.
- Clapham, M.E., Bottjer, D.J. and Shen, S. (2006). "Decoupled diversity and ecology during the end-Guadalupian extinction (late Permian)". Geological Society of America Abstracts with Programs. 38 (7): 117. Retrieved 2008-03-28.
- Foote, M. (1999). "Morphological diversity in the evolutionary radiation of Paleozoic and post-Paleozoic crinoids". Paleobiology. 25 (sp1): 1–116. doi:10.1666/0094-8373(1999)25[1:MDITER]2.0.CO;2. ISSN 0094-8373. JSTOR 2666042.
- Baumiller, T. K. (2008). "Crinoid Ecological Morphology". Annual Review of Earth and Planetary Sciences. 36 (1): 221–249. Bibcode:2008AREPS..36..221B. doi:10.1146/annurev.earth.36.031207.124116.
- Botha, J. & Smith, R.M.H. (2007). "Lystrosaurus species composition across the Permo–Triassic boundary in the Karoo Basin of South Africa". Lethaia. 40 (2): 125–137. doi:10.1111/j.1502-3931.2007.00011.x. Retrieved 2008-07-02. Full version online at "Lystrosaurus species composition across the Permo–Triassic boundary in the Karoo Basin of South Africa" (PDF). Retrieved 2008-07-02.
- Benton, M.J. (2004). Vertebrate Paleontology. Blackwell Publishers. xii–452. ISBN 0-632-05614-2.
- Ruben, J.A. & Jones, T.D. (2000). "Selective Factors Associated with the Origin of Fur and Feathers". American Zoologist. 40 (4): 585–596. doi:10.1093/icb/40.4.585.
- Yates AM & Warren AA (2000). "The phylogeny of the 'higher' temnospondyls (Vertebrata: Choanata) and its implications for the monophyly and origins of the Stereospondyli". Zoological Journal of the Linnean Society. 128 (1): 77–121. doi:10.1111/j.1096-3642.2000.tb00650.x. Archived from the original on 2007-10-01. Retrieved 2008-01-18.
- Retallack GJ, Seyedolali A, Krull ES, Holser WT, Ambers CP, Kyte FT (1998). "Search for evidence of impact at the Permian–Triassic boundary in Antarctica and Australia". Geology. 26 (11): 979–982. Bibcode:1998Geo....26..979R. doi:10.1130/0091-7613(1998)026<0979:SFEOIA>2.3.CO;2.
- Becker L, Poreda RJ, Basu AR, Pope KO, Harrison TM, Nicholson C, Iasky R (2004). "Bedout: a possible end-Permian impact crater offshore of northwestern Australia". Science. 304 (5676): 1469–1476. Bibcode:2004Sci...304.1469B. doi:10.1126/science.1093925. PMID 15143216.
- Becker L, Poreda RJ, Hunt AG, Bunch TE, Rampino M (2001). "Impact event at the Permian–Triassic boundary: Evidence from extraterrestrial noble gases in fullerenes". Science. 291 (5508): 1530–1533. Bibcode:2001Sci...291.1530B. doi:10.1126/science.1057243. PMID 11222855.
- Basu AR, Petaev MI, Poreda RJ, Jacobsen SB, Becker L (2003). "Chondritic meteorite fragments associated with the Permian–Triassic boundary in Antarctica". Science. 302 (5649): 1388–1392. Bibcode:2003Sci...302.1388B. doi:10.1126/science.1090852. PMID 14631038.
- Kaiho K, Kajiwara Y, Nakano T, Miura Y, Kawahata H, Tazaki K, Ueshima M, Chen Z, Shi GR (2001). "End-Permian catastrophe by a bolide impact: Evidence of a gigantic release of sulfur from the mantle". Geology. 29 (9): 815–818. Bibcode:2001Geo....29..815K. doi:10.1130/0091-7613(2001)029<0815:EPCBAB>2.0.CO;2. ISSN 0091-7613. Retrieved 2007-10-22.
- Farley KA, Mukhopadhyay S, Isozaki Y, Becker L, Poreda RJ (2001). "An extraterrestrial impact at the Permian–Triassic boundary?". Science. 293 (5539): 2343a–2343. doi:10.1126/science.293.5539.2343a. PMID 11577203.
- Koeberl C, Gilmour I, Reimold WU, Philippe Claeys P, Ivanov B (2002). "End-Permian catastrophe by bolide impact: Evidence of a gigantic release of sulfur from the mantle: Comment and Reply". Geology. 30 (9): 855–856. Bibcode:2002Geo....30..855K. doi:10.1130/0091-7613(2002)030<0855:EPCBBI>2.0.CO;2. ISSN 0091-7613.
- Isbell JL, Askin RA, Retallack GR (1999). "Search for evidence of impact at the Permian–Triassic boundary in Antarctica and Australia; discussion and reply". Geology. 27 (9): 859–860. Bibcode:1999Geo....27..859I. doi:10.1130/0091-7613(1999)027<0859:SFEOIA>2.3.CO;2.
- Koeberl K, Farley KA, Peucker-Ehrenbrink B, Sephton MA (2004). "Geochemistry of the end-Permian extinction event in Austria and Italy: No evidence for an extraterrestrial component". Geology. 32 (12): 1053–1056. Bibcode:2004Geo....32.1053K. doi:10.1130/G20907.1.
- Langenhorst F, Kyte FT & Retallack GJ (2005). "Reexamination of quartz grains from the Permian–Triassic boundary section at Graphite Peak, Antarctica" (PDF). Lunar and Planetary Science Conference XXXVI. Retrieved 2007-07-13.
- Jones AP, Price GD, Price NJ, DeCarli PS, Clegg RA (2002). "Impact induced melting and the development of large igneous provinces". Earth and Planetary Science Letters. 202 (3): 551–561. Bibcode:2002E&PSL.202..551J. CiteSeerX . doi:10.1016/S0012-821X(02)00824-5.
- White RV (2002). "Earth's biggest 'whodunnit': unravelling the clues in the case of the end-Permian mass extinction" (PDF). Philosophical Transactions of the Royal Society of London. 360 (1801): 2963–2985. Bibcode:2002RSPTA.360.2963W. doi:10.1098/rsta.2002.1097. PMID 12626276. Retrieved 2008-01-12.
- AHager, Bradford H (2001). "Giant Impact Craters Lead To Flood Basalts: A Viable Model". CCNet 33/2001: Abstract 50470.
- Hagstrum, Jonathan T (2001). "Large Oceanic Impacts As The Cause Of Antipodal Hotspots And Global Mass Extinctions". CCNet 33/2001: Abstract 50288.
- von Frese RR; Potts L; Gaya-Pique L; Golynsky AV; Hernandez O; Kim J; Kim H; Hwang J (2006). "Permian–Triassic mascon in Antarctica". Eos Trans. AGU, Jt. Assem. Suppl. 87 (36): Abstract T41A–08. Retrieved 2007-10-22.
- Von Frese, R.R.B.; L. V. Potts; S. B. Wells; T. E. Leftwich; H. R. Kim; J. W. Kim; A. V. Golynsky; O. Hernandez; L. R. Gaya-Piqué (2009). "GRACE gravity evidence for an impact basin in Wilkes Land, Antarctica". Geochem. Geophys. Geosyst. 10 (2): Q02014. Bibcode:2009GGG....1002014V. doi:10.1029/2008GC002149.
- Tohver E.; Lana C.; Cawood P.A.; Fletcher I.R.; Jourdan F.; Sherlock S.; Rasmussen B.; Trindade R.I.F.; Yokoyama E.; Filho C.R. Souza; Marangoni Y. (2012). "Geochronological constraints on the age of a Permo–Triassic impact event: U–Pb and 40Ar/39Ar results for the 40 km Araguainha structure of central Brazil". Geochimica et Cosmochimica Acta. 86: 214–227. Bibcode:2012GeCoA..86..214T. doi:10.1016/j.gca.2012.03.005.
- Biggest extinction in history caused by climate-changing meteor. University of Western Australia University News Wednesday, 31 July 2013. http://www.news.uwa.edu.au/201307315921/international/biggest-extinction-history-caused-climate-changing-meteor
- Rocca, M.; Rampino, M; Baez Presser, J (2017). "Geophysical evidence for a large impact structure on the Falkland (Malvinas) Plateau". Terra Nova. 29 (4): 233–237. doi:10.1111/ter.12269.
- Zhou, M-F., Malpas, J, Song, X-Y, Robinson, PT, Sun, M, Kennedy, AK, Lesher, CM & Keays, RR (2002). "A temporal link between the Emeishan large igneous province (SW China) and the end-Guadalupian mass extinction". Earth and Planetary Science Letters. 196 (3–4): 113–122. Bibcode:2002E&PSL.196..113Z. doi:10.1016/S0012-821X(01)00608-2.
- Wignall, Paul B.; et al. (2009). "Volcanism, Mass Extinction, and Carbon Isotope Fluctuations in the Middle Permian of China". Science. 324 (5931): 1179–1182. Bibcode:2009Sci...324.1179W. doi:10.1126/science.1171956. PMID 19478179.
- Andy Saunders; Marc Reichow (2009). "The Siberian Traps – Area and Volume". Retrieved 2009-10-18.
- Andy Saunders & Marc Reichow (January 2009). "The Siberian Traps and the End-Permian mass extinction: a critical review" (PDF). Chinese Science Bulletin. Springer. 54 (1): 20–37. doi:10.1007/s11434-008-0543-7.
- Reichow, MarcK.; Pringle, M.S.; Al'Mukhamedov, A.I.; Allen, M.B.; Andreichev, V.L.; Buslov, M.M.; Davies, C.E.; Fedoseev, G.S.; Fitton, J.G.; Inger, S.; Medvedev, A.Ya.; Mitchell, C.; Puchkov, V.N.; Safonova, I.Yu.; Scott, R.A.; Saunders, A.D. (2009). "The timing and extent of the eruption of the Siberian Traps large igneous province: Implications for the end-Permian environmental crisis" (PDF). Earth and Planetary Science Letters. 277: 9–20. Bibcode:2009E&PSL.277....9R. doi:10.1016/j.epsl.2008.09.030.
- Kamo, SL (2003). "Rapid eruption of Siberian flood-volcanic rocks and evidence for coincidence with the Permian–Triassic boundary and mass extinction at 251 Ma". Earth and Planetary Science Letters. 214: 75–91. Bibcode:2003E&PSL.214...75K. doi:10.1016/S0012-821X(03)00347-9.
- Dan Verango (January 24, 2011). "Ancient mass extinction tied to torched coal". USA Today.
- Stephen E. Grasby, Hamed Sanei & Benoit Beauchamp (January 23, 2011). "Catastrophic dispersion of coal fly ash into oceans during the latest Permian extinction". Nature Geoscience. 4 (2): 104–107. Bibcode:2011NatGe...4..104G. doi:10.1038/ngeo1069.
- "Researchers find smoking gun of world's biggest extinction; Massive volcanic eruption, burning coal and accelerated greenhouse gas choked out life". University of Calgary. January 23, 2011. Retrieved 2011-01-26.
- Yang, QY (2013). "The chemical compositions and abundances of volatiles in the Siberian large igneous province: Constraints on magmatic CO2 and SO2 emissions into the atmosphere". Chemical Geology. 339: 84–91. doi:10.1016/j.chemgeo.2012.08.031.
- Burgess, Seth D.; Bowring, Samuel; Shen, Shu-zhong (2014-03-04). "High-precision timeline for Earth's most severe extinction". Proceedings of the National Academy of Sciences. 111 (9): 3316–3321. doi:10.1073/pnas.1317692111. ISSN 0027-8424. PMC . PMID 24516148.
- "Earth's worst extinction "inescapably" tied to Siberian Traps, CO2, and climate change". Skeptical Science. Retrieved 2016-03-11.
- Black, Benjamin A.; Weiss, Benjamin P.; Elkins-Tanton, Linda T.; Veselovskiy, Roman V.; Latyshev, Anton (2015-04-30). "Siberian Traps volcaniclastic rocks and the role of magma-water interactions". Geological Society of America Bulletin. 127 (9–10): B31108.1. doi:10.1130/B31108.1. ISSN 0016-7606.
- Burgess, Seth D.; Bowring, Samuel A. (2015-08-01). "High-precision geochronology confirms voluminous magmatism before, during, and after Earth's most severe extinction". Science Advances. 1 (7): e1500470. doi:10.1126/sciadv.1500470. ISSN 2375-2548. PMC . PMID 26601239.
- Fischman, Josh. "Giant Eruptions and Giant Extinctions [Video]". Scientific American. Retrieved 2016-03-11.
- Palfy J, Demeny A, Haas J, Htenyi M, Orchard MJ, Veto I (2001). "Carbon isotope anomaly at the Triassic– Jurassic boundary from a marine section in Hungary". Geology. 29 (11): 1047–1050. Bibcode:2001Geo....29.1047P. doi:10.1130/0091-7613(2001)029<1047:CIAAOG>2.0.CO;2. ISSN 0091-7613.
- Berner, R.A. (2002). "Examination of hypotheses for the Permo-Triassic boundary extinction by carbon cycle modeling". Proceedings of the National Academy of Sciences. 99 (7): 4172–4177. Bibcode:2002PNAS...99.4172B. doi:10.1073/pnas.032095199. PMC . PMID 11917102.
- Dickens GR; O'Neil JR; Rea DK; Owen RM (1995). "Dissociation of oceanic methane hydrate as a cause of the carbon isotope excursion at the end of the Paleocene". Paleoceanography. 10 (6): 965–71. Bibcode:1995PalOc..10..965D. doi:10.1029/95PA02087.
- White, R. V. (2002). "Earth's biggest 'whodunnit': Unravelling the clues in the case of the end-Permian mass extinction". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 360 (1801): 2963–85. Bibcode:2002RSPTA.360.2963W. doi:10.1098/rsta.2002.1097. PMID 12626276.
- Schrag, D.P., Berner, R.A., Hoffman, P.F., and Halverson, G.P. (2002). "On the initiation of a snowball Earth". Geochemistry Geophysics Geosystems. 3 (6): 1036. Bibcode:2002GGG....3fQ...1S. doi:10.1029/2001GC000219. Preliminary abstract at Schrag, D.P. (June 2001). "On the initiation of a snowball Earth". Geological Society of America.
- Benton, M.J.; Twitchett, R.J. (2003). "How to kill (almost) all life: the end-Permian extinction event". Trends in Ecology & Evolution. 18 (7): 358–365. doi:10.1016/S0169-5347(03)00093-4.
- Dickens GR (2001). "The potential volume of oceanic methane hydrates with variable external conditions". Organic Geochemistry. 32 (10): 1179–1193. doi:10.1016/S0146-6380(01)00086-9.
- Reichow MK; Saunders AD; White RV; Pringle MS; Al'Muhkhamedov AI; Medvedev AI; Kirda NP (2002). "40Ar/39Ar Dates from the West Siberian Basin: Siberian Flood Basalt Province Doubled". Science. 296 (5574): 1846–1849. Bibcode:2002Sci...296.1846R. doi:10.1126/science.1071671. PMID 12052954.
- Holser WT; Schoenlaub H-P; Attrep Jr M; Boeckelmann K; Klein P; Magaritz M; Orth CJ; Fenninger A; Jenny C; Kralik M; Mauritsch H; Pak E; Schramm J-F; Stattegger K; Schmoeller R (1989). "A unique geochemical record at the Permian/Triassic boundary". Nature. 337 (6202): 39–44. Bibcode:1989Natur.337...39H. doi:10.1038/337039a0.
- Dobruskina IA (1987). "Phytogeography of Eurasia during the early Triassic". Palaeogeography, Palaeoclimatology, Palaeoecology. 58 (1–2): 75–86. doi:10.1016/0031-0182(87)90007-1.
- Wignall, P.B.; Twitchett, R.J. (2002). "Extent, duration, and nature of the Permian-Triassic superanoxic event". Geological Society of America Special Papers. 356: 395–413. doi:10.1130/0-8137-2356-6.395. ISBN 0-8137-2356-6.
- Cao, Changqun; Gordon D. Love; Lindsay E. Hays; Wei Wang; Shuzhong Shen; Roger E. Summons (2009). "Biogeochemical evidence for euxinic oceans and ecological disturbance presaging the end-Permian mass extinction event". Earth and Planetary Science Letters. 281 (3–4): 188–201. Bibcode:2009E&PSL.281..188C. doi:10.1016/j.epsl.2009.02.012.
- Hays, Lindsay; Kliti Grice; Clinton B. Foster; Roger E. Summons (2012). "Biomarker and isotopic trends in a Permian–Triassic sedimentary section at Kap Stosch, Greenland" (PDF). Organic Geochemistry. 43: 67–82. doi:10.1016/j.orggeochem.2011.10.010.
- Meyers, Katja; L.R. Kump; A. Ridgwell (September 2008). "Biogeochemical controls on photic-zone euxinia during the end-Permian mass extinction". Geology. 36 (9): 747–750. doi:10.1130/g24618a.1.
- Kump, Lee; Alexander Pavlov; Michael A. Arthur (2005). "Massive release of hydrogen sulfide to the surface ocean and atmosphere during intervals of oceanic anoxia". Geology. 33 (5): 397–400. Bibcode:2005Geo....33..397K. doi:10.1130/G21295.1.
- Rothman, D. H.; Fournier, G. P.; French, K. L.; Alm, E. J.; Boyle, E. A.; Cao, C.; Summons, R. E. (2014-03-31). "Methanogenic burst in the end-Permian carbon cycle". Proceedings of the National Academy of Sciences. 111 (15): 5462–7. Bibcode:2014PNAS..111.5462R. doi:10.1073/pnas.1318106111. PMC . PMID 24706773. — Lay summary: Chandler, David L.; Massachusetts Institute of Technology (March 31, 2014). "Ancient whodunit may be solved: Methane-producing microbes did it!". Science Daily.
- Shen, Shu-Zhong; Bowring, Samuel A. (2014). "The end-Permian mass extinction: A still unexplained catastrophe". National Science Review. 1 (4): 492–495. doi:10.1093/nsr/nwu047.
- Zhang R, Follows, MJ, Grotzinger, JP, & Marshall J (2001). "Could the Late Permian deep ocean have been anoxic?". Paleoceanography. 16 (3): 317–329. Bibcode:2001PalOc..16..317Z. doi:10.1029/2000PA000522.
- Over, Jess (editor), Understanding Late Devonian and Permian–Triassic Biotic and Climatic Events, (Volume 20 in series Developments in Palaeontology and Stratigraphy (2006). The state of the inquiry into the extinction events.
- Sweet, Walter C. (editor), Permo–Triassic Events in the Eastern Tethys : Stratigraphy Classification and Relations with the Western Tethys (in series World and Regional Geology)
- "Siberian Traps". Retrieved 2011-04-30.
- "Big Bang In Antarctica: Killer Crater Found Under Ice". Retrieved 2011-04-30.
- "Global Warming Led To Atmospheric Hydrogen Sulfide And Permian Extinction". Retrieved 2011-04-30.
- Morrison D. "Did an Impact Trigger the Permian-Triassic Extinction?". NASA. Archived from the original on 2011-06-10. Retrieved 2011-04-30.
- "Permian Extinction Event". Retrieved 2011-04-30.
- Ogden, DE; Sleep, NH (2012). "Explosive eruption of coal and basalt and the end-Permian mass extinction". Proc. Natl. Acad. Sci. U.S.A. 109 (1): 59–62. doi:10.1073/pnas.1118675109. PMC . PMID 22184229. Retrieved 2011-12-25.
- "BBC Radio 4 In Our Time discussion of the Permian-Triassic boundary". Retrieved 2012-02-01. Podcast available. |
Since there are no answer keys posted for the exams, here is a space to create our own pool of key points for each question. Hopefully this will help in studying for the final.
1. a) Draw diagrams showing i) stabilizing selection, ii) directional selection, iii) disruptive selection
b) Provide example of each
a) (Image from Ricklefs)
b) Stabilizing: giraffe’s neck. Long necks are better for getting a mate, short necks are better for drinking water. Medium-sized necks have been selected for.
Directional: horse size. Environmental change – forests turned into open savannahs, and food changed from leafy plants to grasses. Molars grew larger, and horses grew taller.
Disruptive: finch beak size. Different beak sizes useful for different seed sizes. When environmental changes occur that affect different seed abundances, beak sizes that were previously ideal may no longer be viable.
2. Population growth with dN/dt = rN, r > 0.
a) Draw a graph
b) Name this growth pattern
Population growth with dN/dt = rN(1-N/K)
a) Why apply this equation rather than dN/dt = rN?
b) What is K?
c) Draw graph and label K
d) Name this growth pattern
dN/dt = rN
a) (Image from lecture slides)
b) Exponential growth
dN/dt = rN(1-N/K)
a) because a limiting factor or maximum population size exists; growth slows as N approaches K instead of increasing without bound (see the sketch below)
b) K = carrying capacity
c) (Image from lecture slides)
d) logistic growth
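For a quick numerical check of the two growth patterns, here is a minimal Python sketch; the values of r, K, N0 and the step size are arbitrary illustrative choices, not from the course.

```python
# Minimal sketch comparing dN/dt = rN (exponential) with dN/dt = rN(1 - N/K) (logistic).
# All parameter values below are arbitrary illustrative choices.

def simulate(r, n0, t_end, dt=0.01, k=None):
    """Euler integration; k=None gives exponential growth, otherwise logistic growth."""
    n, t = n0, 0.0
    while t < t_end:
        rate = r * n if k is None else r * n * (1 - n / k)
        n += rate * dt
        t += dt
    return n

r, n0 = 0.5, 10
print("exponential N(10):", round(simulate(r, n0, 10), 1))         # keeps climbing without bound
print("logistic    N(10):", round(simulate(r, n0, 10, k=500), 1))  # levels off near K = 500
```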
3. Starting cohort: 50 individuals. After 1 year, 30 remain and have 2 offspring each. After another year (year 2), 10 are alive and have 4 offspring each. None survive in 3rd year.
a) fill in table
| Age (x) | Survival (lx) | Age-specific survival (sx) | Fecundity (mx or bx) | lxbx | xlxbx |
|---|---|---|---|---|---|
| 0 | 1.00 | 0.60 | 0 | 0.00 | 0.00 |
| 1 | 0.60 | 0.33 | 2 | 1.20 | 1.20 |
| 2 | 0.20 | 0.00 | 4 | 0.80 | 1.60 |
| 3 | 0.00 | - | - | - | - |
| Ro (sum of lxbx) | | | | 2.00 | |
| Expected number of births weighted by age (sum of xlxbx) | | | | | 2.80 |
b) Calculate Ro (net reproductive rate of a single individual in her lifetime)
Ro is the sum over ages of survivorship times birth (maternity) rate: Ro = sum of lx * bx.
In this case, Ro = 2.00
c) Calculate generation time, T
Generation time T is the sum of births weighted by age divided by Ro: T = (sum of x * lx * bx) / (sum of lx * bx).
In this case, T = 2.80 / 2.00 = 1.40
d) Calculate lambda
Lambda is Ro raised to the power 1/T: lambda = Ro^(1/T).
In this case, lambda = 2.00^(1/1.40) = 1.641 (the sketch below reproduces these calculations).
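The same numbers can be reproduced with a short Python sketch; it is only an illustration of the calculations above, using the cohort data given in the question.

```python
# Minimal sketch of the life-table calculations for question 3.
# Cohort data from the question: 50 individuals at age 0, 30 alive at age 1
# (2 offspring each), 10 alive at age 2 (4 offspring each), none alive at age 3.

cohort = 50
alive = {0: 50, 1: 30, 2: 10, 3: 0}   # number still alive at age x
bx    = {0: 0,  1: 2,  2: 4,  3: 0}   # offspring per surviving individual (bx)

lx   = {x: n / cohort for x, n in alive.items()}   # survivorship to age x
lxbx = {x: lx[x] * bx[x] for x in alive}            # realized fecundity at age x

R0  = sum(lxbx.values())                            # net reproductive rate
T   = sum(x * v for x, v in lxbx.items()) / R0      # generation time
lam = R0 ** (1 / T)                                 # lambda = R0^(1/T)

print(f"R0 = {R0:.2f}, T = {T:.2f}, lambda = {lam:.3f}")
# -> R0 = 2.00, T = 1.40, lambda = 1.641
```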
4. a) Draw a resource utilization spectrum for 2 competing species (A & B) that partially overlap in resource use. What can we conclude from this?
b) Draw resource utilization spectra for species A and B when they occur alone in the absence of the other, so that this information will help interpret the role of competition in part a.
From this we can conclude either
i) competition is insignificant: A and B specialize in resource use with different preferences
ii) competition is significant and has led to niche partitioning: perhaps one species outcompetes the other and displacement has occurred
-Species A has same resource utilization when alone and when with B.
-Species B has much wider spectrum of use when alone.
-When occurring together, A is the superior competitor and outcompetes B in their shared resource preferences
-A and B coexist because B has wider tolerance and lives in conditions that A cannot live in (example spectra are sketched below).
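As a rough visual aid, this sketch plots two partially overlapping utilization curves. It assumes Gaussian-shaped niches and that numpy and matplotlib are available; the niche centers and widths are arbitrary illustrative values.

```python
# Minimal sketch of two partially overlapping resource utilization spectra.
# Gaussian niche shapes and all numeric values are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt

resource = np.linspace(0, 10, 500)  # resource axis, e.g. seed size

def utilization(center, width):
    """Gaussian-shaped utilization curve along the resource axis."""
    return np.exp(-0.5 * ((resource - center) / width) ** 2)

plt.plot(resource, utilization(4.0, 1.0), label="Species A (narrower niche)")
plt.plot(resource, utilization(6.5, 1.5), label="Species B (broader niche)")
plt.xlabel("Resource (e.g. seed size)")
plt.ylabel("Utilization")
plt.legend()
plt.show()
```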
5. In lab experiments, predators typically extinguish prey and then starve to death, therefore making predator-prey systems unstable. Name and describe 3 mechanisms that allow predator and prey to coexist and persist in natural ecosystems.
1. Environmental variation - environmental conditions and habitat vary over space, so it is unlikely that predators will locate all potential prey.
2. Prey-switching - if a predator's preferred resource population becomes low, they may begin consuming a different prey population. This decreased pressure can allow the preferred prey population to grow again in numbers.
3. Prey defenses - prey species have evolved many mechanisms to avoid being consumed. These include simple escape or hiding, warning coloration (aposematic coloration), mimicry (palatable species have come to resemble unpalatable species that predators know to avoid), chemical defense (releasing chemicals that deter predators or maintaining chemical compounds that make the prey unpalatable), noise-making, and others.
6. Explain “trophic cascade” with a 3-trophic level example. What changes might occur if a top predator is added?
Trophic cascade: top-down or bottom-up control of trophic levels (the idea that a change in the number of individuals at one trophic level can indirectly change numbers at a trophic level beyond the one it directly affects).
3-trophic level example: fish eat zooplankton, which eat phytoplankton. If fish population increases, predation on zooplankton increases, zooplankton population decreases, predation on phytoplankton decreases, phytoplankton population increases.
4-trophic level example: add a bear population. Bears decrease the fish population, which reduces predation on zooplankton and increases the zooplankton population, which in turn increases predation on phytoplankton and decreases the phytoplankton population.
7. Describe the vertical profile of temperature and oxygen in a dimictic, eutrophic lake over 4 seasons. Include graphs of vertical profile of temperature and oxygen in the summer. Explain dimictic and eutrophic in terms of generating the oxygen profile.
8. Explain the consequences of straightening a meandering river with Lane’s law.
Qs * D50 ∝ Qw * S
Qs = sediment flow
D50 = 50th percentile sediment diameter
Qw = water flow
S = stream slope
-straightening shortens the channel, so the stream slope gets steeper while the water flow stays the same
-this throws the stream out of equilibrium
-the stream gains power and degrades its bed
-the relation rebalances only when the left side increases, i.e. the sediment load rises (see the sketch below)
-the change disrupts aquatic ecosystems and habitats
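Here is a minimal numeric sketch of that rebalancing; the discharge, grain-size and slope values are made-up illustrative numbers, not field data.

```python
# Minimal sketch of Lane's balance: Qs * D50 is proportional to Qw * S.
# All values are made-up illustrative numbers.

def lane_ratio(qs, d50, qw, s):
    """Sediment side over water side; a value near 1 means the reach is roughly in balance."""
    return (qs * d50) / (qw * s)

# Meandering reach, roughly in equilibrium.
print("before straightening:", round(lane_ratio(qs=100.0, d50=0.02, qw=10.0, s=0.20), 2))  # 1.0

# Straightening shortens the channel: slope S rises while water flow Qw is unchanged.
print("after straightening: ", round(lane_ratio(qs=100.0, d50=0.02, qw=10.0, s=0.30), 2))  # 0.67
# A ratio below 1 means excess stream power: the channel degrades its bed and banks,
# raising the sediment load Qs until the ratio climbs back toward 1.
```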
9. Compare hydrologic pathways using the terms infiltration, evaporation, transpiration, and runoff in Watershed A (recent deforestation) and B (suburban development with houses and roads) relative to the reference watershed (mostly forested, high water absorption in soil, stream).
Watershed A relative to reference:
-less transpiration (less vegetation)
-more runoff - more peak runoff causing flashier flows
-less evaporation (less water available on site due to runoff)
-less infiltration (less vegetation requiring water in soil, faster saturation)
Watershed B relative to reference:
-less infiltration, more runoff (impervious surfaces) - more peak runoff causing flashier flows
-some transpiration, but much less (trees and plants in yards)
-less evaporation (less water available on site due to runoff) |
| Total population | c. 25.4 million |
|---|---|
| Under the mandate of the United Nations High Commissioner for Refugees (UNHCR) | 19.9 million |
| Under UNRWA's mandate | 5.4 million |
| Regions with significant populations | |
| Sub-Saharan Africa | 6.236 million |
| Europe and North Asia | 6.088 million |
| Asia and the Pacific | 4.153 million |
| Middle East and North Africa | 2.653 million |
A refugee, generally speaking, is a displaced person who has been forced to cross national boundaries and who cannot return home safely (see Definitions for more details). Such a person may be called an asylum seeker until granted refugee status by the contracting state or the United Nations High Commissioner for Refugees (UNHCR) if they formally make a claim for asylum. The lead international agency coordinating refugee protection is the United Nations Office of the UNHCR. The United Nations has a second Office for refugees, the United Nations Relief and Works Agency or UNRWA, which is solely responsible for supporting the large majority of Palestinian refugees.
Etymology and usage
Similar terms in other languages have described an event marking migration of a specific population from a place of origin, such as the biblical account of Israelites fleeing from Assyrian conquest (circa 740 BCE), or the asylum found by the prophet Muhammad and his emigrant companions with helpers in Yathrib (later Medina) after they fled from persecution in Mecca. In English, the term refugee derives from the root word refuge, from Old French refuge, meaning "hiding place". It refers to "shelter or protection from danger or distress", from Latin fugere, "to flee", and refugium, "a taking [of] refuge, place to flee back to". In Western history, the term was first applied to French Protestant Huguenots looking for a safe place against Catholic persecution after the first Edict of Fontainebleau in 1540. The word appeared in the English language when French Huguenots fled to Britain in large numbers after the 1685 Edict of Fontainebleau (the revocation of the 1598 Edict of Nantes) in France and the 1687 Declaration of Indulgence in England and Scotland. The word meant "one seeking asylum", until around 1914, when it evolved to mean "one fleeing home", applied in this instance to civilians in Flanders heading west to escape fighting in World War I.
The first modern definition of international refugee status came about under the League of Nations in 1921 from the Commission for Refugees. Following World War II, and in response to the large numbers of people fleeing Eastern Europe, the UN 1951 Refugee Convention defined "refugee" (in Article 1.A.2) as any person who:
"owing to well-founded fear of being persecuted for reasons of race, religion, nationality, membership of a particular social group or political opinion, is outside the country of his nationality and is unable or, owing to such fear, is unwilling to avail himself of the protection of that country; or who, not having a nationality and being outside the country of his former habitual residence as a result of such events, is unable or, owing to such fear, is unwilling to return to it."
In 1967, this definition was essentially confirmed by the UN Protocol Relating to the Status of Refugees.
The Convention Governing the Specific Aspects of Refugee Problems in Africa expanded the 1951 definition, which the Organization of African Unity adopted in 1969:
"Every person who, owing to external aggression, occupation, foreign domination or events seriously disturbing public order in either part or the whole of his country of origin or nationality, is compelled to leave his place of habitual residence in order to seek refuge in another place outside his country of origin or nationality."
The 1984 regional, non-binding Latin-American Cartagena Declaration on Refugees includes:
"persons who have fled their country because their lives, safety or freedom have been threatened by generalized violence, foreign aggression, internal conflicts, massive violation of human rights or other circumstances which have seriously disturbed public order."
As of 2011, the UNHCR itself, in addition to the 1951 definition, recognizes persons as refugees:
"who are outside their country of nationality or habitual residence and unable to return there owing to serious and indiscriminate threats to life, physical integrity or freedom resulting from generalized violence or events seriously disturbing public order."
The European Union's minimum-standards definition of refugee, set out in Art. 2 (c) of Directive No. 2004/83/EC, essentially reproduces the narrow definition of refugee offered by the UN 1951 Convention; nevertheless, by virtue of articles 2 (e) and 15 of the same Directive, persons who have fled war-caused generalized violence are, under certain conditions, eligible for a complementary form of protection, called subsidiary protection. The same form of protection is foreseen for displaced people who, without being refugees, would nevertheless be exposed, if returned to their countries of origin, to the death penalty, torture or other inhuman or degrading treatment.
The idea that a person who sought sanctuary in a holy place could not be harmed without inviting divine retribution was familiar to the ancient Greeks and ancient Egyptians. However, the right to seek asylum in a church or other holy place was first codified in law by King Æthelberht of Kent in about AD 600. Similar laws were implemented throughout Europe in the Middle Ages. The related concept of political exile also has a long history: Ovid was sent to Tomis; Voltaire was sent to England. By the 1648 Peace of Westphalia, nations recognized each other's sovereignty. However, it was not until the advent of romantic nationalism in late 18th-century Europe that nationalism gained sufficient prevalence for the phrase country of nationality to become practically meaningful, and for border crossing to require that people provide identification.
The term "refugee" sometime applies to people who might fit the definition outlined by the 1951 Convention, were it applied retroactively. There are many candidates. For example, after the Edict of Fontainebleau in 1685 outlawed Protestantism in France, hundreds of thousands of Huguenots fled to England, the Netherlands, Switzerland, South Africa, Germany and Prussia. The repeated waves of pogroms that swept Eastern Europe in the 19th and early 20th centuries prompted mass Jewish emigration (more than 2 million Russian Jews emigrated in the period 1881–1920). Beginning in the 19th century, Muslim people emigrated to Turkey from Europe. The Balkan Wars of 1912–1913 caused 800,000 people to leave their homes. Various groups of people were officially designated refugees beginning in World War I.
League of Nations
The first international co-ordination of refugee affairs came with the creation by the League of Nations in 1921 of the High Commission for Refugees and the appointment of Fridtjof Nansen as its head. Nansen and the Commission were charged with assisting the approximately 1,500,000 people who fled the Russian Revolution of 1917 and the subsequent civil war (1917–1921), most of them aristocrats fleeing the Communist government. It is estimated that about 800,000 Russian refugees became stateless when Lenin revoked citizenship for all Russian expatriates in 1921.
In 1923, the mandate of the Commission was expanded to include the more than one million Armenians who left Turkish Asia Minor between 1915 and 1923 due to a series of events now known as the Armenian Genocide. Over the next several years, the mandate was expanded further to cover Assyrians and Turkish refugees. In all of these cases, a refugee was defined as a person in a group for which the League of Nations had approved a mandate, as opposed to a person to whom a general definition applied.
The 1923 population exchange between Greece and Turkey involved about two million people (around 1.5 million Anatolian Greeks and 500,000 Muslims in Greece) most of whom were forcibly repatriated and denaturalized from homelands of centuries or millennia (and guaranteed the nationality of the destination country) by a treaty promoted and overseen by the international community as part of the Treaty of Lausanne (1923).[A]
The U.S. Congress passed the Emergency Quota Act in 1921, followed by the Immigration Act of 1924. The Immigration Act of 1924 was aimed at further restricting the Southern and Eastern Europeans, especially Jews, Italians and Slavs, who had begun to enter the country in large numbers beginning in the 1890s. Most European refugees (principally Jews and Slavs) fleeing the Nazis and the Soviet Union were barred from going to the United States until after World War II.
In 1930, the Nansen International Office for Refugees (Nansen Office) was established as a successor agency to the Commission. Its most notable achievement was the Nansen passport, a refugee travel document, for which it was awarded the 1938 Nobel Peace Prize. The Nansen Office was plagued by problems of financing, an increase in refugee numbers, and a lack of co-operation from some member states, which led to mixed success overall.
However, the Nansen Office managed to lead fourteen nations to ratify the 1933 Refugee Convention, an early, and relatively modest, attempt at a human rights charter, and in general assisted around one million refugees worldwide.
1933 (rise of Nazism) to 1944
The rise of Nazism led to such a large increase in the number of refugees from Germany that in 1933 the League created a high commission for refugees coming from Germany. Besides other measures by the Nazis which created fear and flight, Jews were stripped of German citizenship [B] by the Reich Citizenship Law of 1935. On 4 July 1936 an agreement was signed under League auspices that defined a refugee coming from Germany as "any person who was settled in that country, who does not possess any nationality other than German nationality, and in respect of whom it is established that in law or in fact he or she does not enjoy the protection of the Government of the Reich" (article 1).[C]
The mandate of the High Commission was subsequently expanded to include persons from Austria and Sudetenland, which Germany annexed after 1 October 1938 in accordance with the Munich Agreement. According to the Institute for Refugee Assistance, the actual count of refugees from Czechoslovakia on 1 March 1939 stood at almost 150,000. Between 1933 and 1939, about 200,000 Jews fleeing Nazism were able to find refuge in France, while at least 55,000 Jews were able to find refuge in Palestine before the British authorities closed that destination in 1939.
On 31 December 1938, both the Nansen Office and High Commission were dissolved and replaced by the Office of the High Commissioner for Refugees under the Protection of the League. This coincided with the flight of several hundred thousand Spanish Republicans to France after their defeat by the Nationalists in 1939 in the Spanish Civil War.
The conflict and political instability during World War II led to massive numbers of refugees (see World War II evacuation and expulsion). In 1943, the Allies created the United Nations Relief and Rehabilitation Administration (UNRRA) to provide aid to areas liberated from Axis powers, including parts of Europe and China. By the end of the War, Europe had more than 40 million refugees. UNRRA was involved in returning over seven million refugees, then commonly referred to as displaced persons or DPs, to their country of origin and setting up displaced persons camps for one million refugees who refused to be repatriated. Even two years after the end of the War, some 850,000 people still lived in DP camps across Western Europe (DP Camps in Europe, from Mark Wyman, DPs: Europe's Displaced Persons, 1945–1951). After the establishment of Israel in 1948, Israel accepted more than 650,000 refugees by 1950. By 1953, over 250,000 refugees were still in Europe, most of them old, infirm, crippled, or otherwise disabled.
Post-World War II population transfers
After the Soviet armed forces captured eastern Poland from the Germans in 1944, the Soviets unilaterally declared a new frontier between the Soviet Union and Poland approximately at the Curzon Line, despite the protestations of the Polish government-in-exile in London and of the western Allies at the Teheran Conference and the Yalta Conference of February 1945. After the German surrender on 7 May 1945, the Allies occupied the remainder of Germany, and the Berlin declaration of 5 June 1945 confirmed the four-power division of Allied-occupied Germany according to the Yalta Conference, which stipulated the continued existence of the German Reich as a whole, including its eastern territories as of 31 December 1937. This did not affect Poland's eastern border, and Stalin refused to relinquish these eastern Polish territories.
In the last months of World War II, about five million German civilians from the German provinces of East Prussia, Pomerania and Silesia fled the advance of the Red Army from the east and became refugees in Mecklenburg, Brandenburg and Saxony. From the spring of 1945, the Poles forcibly expelled the remaining German population of these provinces. When the Allies met in Potsdam on 17 July 1945 at the Potsdam Conference, a chaotic refugee situation faced the occupying powers. The Potsdam Agreement, signed on 2 August 1945, defined the Polish western border as that of 1937 (Article VIII, Agreements of the Berlin (Potsdam) Conference), placing one fourth of Germany's territory under provisional Polish administration. Article XII ordered that the remaining German populations in Poland, Czechoslovakia and Hungary be transferred west in an "orderly and humane" manner. (See Flight and expulsion of Germans (1944–50).)
Although not approved by the Allies at Potsdam, hundreds of thousands of ethnic Germans living in Yugoslavia and Romania were deported to slave labour in the Soviet Union, to Allied-occupied Germany, and subsequently to the German Democratic Republic (East Germany), Austria and the Federal Republic of Germany (West Germany). This entailed the largest population transfer in history. In all, 15 million Germans were affected, and more than two million perished during the expulsions of the German population. (See Flight and expulsion of Germans (1944–1950).) Between the end of the war and the erection of the Berlin Wall in 1961, more than 563,700 refugees from East Germany traveled to West Germany for asylum from the Soviet occupation.
During the same period, millions of former Russian citizens were forcibly repatriated into the USSR against their will. On 11 February 1945, at the conclusion of the Yalta Conference, the United States and United Kingdom signed a Repatriation Agreement with the USSR. The interpretation of this Agreement resulted in the forcible repatriation of all Soviets regardless of their wishes. When the war ended in May 1945, British and United States civilian authorities ordered their military forces in Europe to deport to the Soviet Union millions of former residents of the USSR, including many persons who had left Russia and established different citizenship decades before. The forced repatriation operations took place from 1945 to 1947.
At the end of World War II, there were more than 5 million "displaced persons" from the Soviet Union in Western Europe. About 3 million had been forced laborers (Ostarbeiters) in Germany and occupied territories. The Soviet POWs and the Vlasov men were put under the jurisdiction of SMERSH (Death to Spies). Of the 5.7 million Soviet prisoners of war captured by the Germans, 3.5 million had died while in German captivity by the end of the war. The survivors on their return to the USSR were treated as traitors (see Order No. 270). Over 1.5 million surviving Red Army soldiers imprisoned by the Nazis were sent to the Gulag.
Poland and Soviet Ukraine conducted population exchanges following the imposition of a new Poland-Soviet border at the Curzon Line in 1944. About 2,100,000 Poles were expelled west of the new border (see Repatriation of Poles), while about 450,000 Ukrainians were expelled to the east of the new border. The population transfer to Soviet Ukraine occurred from September 1944 to May 1946 (see Repatriation of Ukrainians). A further 200,000 Ukrainians left southeast Poland more or less voluntarily between 1944 and 1945.
According to the report of the U.S. Committee for Refugees (1995), 10 to 15 percent of Azerbaijan's 7.5 million people were refugees or displaced persons. Most of them were the roughly 228,840 people who fled from Armenia in 1988 as a result of Armenia's deportation policy against ethnic Azerbaijanis.
The International Refugee Organization (IRO) was founded on 20 April 1946, and took over the functions of the United Nations Relief and Rehabilitation Administration, which was shut down in 1947. While the handover was originally planned to take place at the beginning of 1947, it did not occur until July 1947. The International Refugee Organization was a temporary organization of the United Nations (UN), which itself had been founded in 1945, with a mandate to largely finish the UNRRA's work of repatriating or resettling European refugees. It was dissolved in 1952 after resettling about one million refugees. The definition of a refugee at this time was an individual with either a Nansen passport or a "Certificate of identity" issued by the International Refugee Organization.
The Constitution of the International Refugee Organization, adopted by the United Nations General Assembly on 15 December 1946, specified the agency's field of operations. Controversially, this defined "persons of German ethnic origin" who had been expelled, or were to be expelled from their countries of birth into the postwar Germany, as individuals who would "not be the concern of the Organization." This excluded from its purview a group that exceeded in number all the other European displaced persons put together. Also, because of disagreements between the Western allies and the Soviet Union, the IRO only worked in areas controlled by Western armies of occupation.
With the occurrence of major instances of diaspora and forced migration, the study of their causes and implications has emerged as a legitimate interdisciplinary area of research, and began to develop in the mid-to-late 20th century, after World War II. Although significant contributions had been made before, the latter half of the 20th century saw the establishment of institutions dedicated to the study of refugees, such as the Association for the Study of the World Refugee Problem, which was closely followed by the founding of the United Nations High Commissioner for Refugees. In particular, the 1981 volume of the International Migration Review defined refugee studies as "a comprehensive, historical, interdisciplinary and comparative perspective which focuses on the consistencies and patterns in the refugee experience." Following its publishing, the field saw a rapid increase in academic interest and scholarly inquiry, which has continued to the present. Most notably in 1988, the Journal of Refugee Studies was established as the field's first major interdisciplinary journal.
The emergence of refugee studies as a distinct field of study has been criticized by scholars due to terminological difficulty. Since no universally accepted definition for the term "refugee" exists, the academic respectability of the policy-based definition, as outlined in the 1951 Refugee Convention, is disputed. Additionally, academics have critiqued the lack of a theoretical basis of refugee studies and dominance of policy-oriented research. In response, scholars have attempted to steer the field toward establishing a theoretical groundwork of refugee studies through "situating studies of particular refugee (and other forced migrant) groups in the theories of cognate areas (and major disciplines), [providing] an opportunity to use the particular circumstances of refugee situations to illuminate these more general theories and thus participate in the development of social science, rather than leading refugee studies into an intellectual cul-de-sac." Thus, the term refugee in the context of refugee studies can be referred to as "legal or descriptive rubric", encompassing socioeconomic backgrounds, personal histories, psychological analyses, and spiritualities.
UN Refugee Agency
Headquartered in Geneva, Switzerland, the Office of the United Nations High Commissioner for Refugees (UNHCR) was established on 14 December 1950. It protects and supports refugees at the request of a government or the United Nations and assists in providing durable solutions, such as return or resettlement. All refugees in the world are under UNHCR mandate except Palestinian refugees, who fled the current state of Israel between 1947 and 1949, as a result of the 1948 Palestine War. These refugees are assisted by the United Nations Relief and Works Agency (UNRWA). However, Palestinian Arabs who fled the West Bank and Gaza after 1949 (for example, during the 1967 Six Day war) are under the jurisdiction of the UNHCR. Moreover, the UNHCR also provides protection and assistance to other categories of displaced persons: asylum seekers, refugees who returned home voluntarily but still need help rebuilding their lives, local civilian communities directly affected by large refugee movements, stateless people and so-called internally displaced people (IDPs), as well as people in refugee-like and IDP-like situations.
The agency is mandated to lead and co-ordinate international action to protect refugees and to resolve refugee problems worldwide. Its primary purpose is to safeguard the rights and well-being of refugees. It strives to ensure that everyone can exercise the right to seek asylum and find safe refuge in another state or territory and to offer "durable solutions" to refugees and refugee hosting countries.
Acute and temporary protection
A refugee camp is a place built by governments or NGOs (such as the Red Cross) to receive refugees, internally displaced persons or sometimes also other migrants. It is usually designed to offer acute and temporary accommodation and services; more permanent facilities and structures are often banned. People may stay in these camps for many years, receiving emergency food, education and medical aid until it is safe enough to return to their country of origin. There, refugees are at risk of disease, of recruitment as child soldiers or by terrorist groups, and of physical and sexual violence. There are estimated to be 700 refugee camp locations worldwide.
Not all refugees who are supported by the UNHCR live in refugee camps. A significant number, in fact more than half, live in urban settings, such as the ~60,000 Iraqi refugees in Damascus (Syria) and the ~30,000 Sudanese refugees in Cairo (Egypt).
The residency status in the host country whilst under temporary UNHCR protection is very uncertain, as refugees are only granted temporary visas that have to be regularly renewed. Rather than only safeguarding the rights and basic well-being of refugees in camps or in urban settings on a temporary basis, the UNHCR's ultimate goal is to find one of the three durable solutions for refugees: integration, repatriation, or resettlement.
Integration and naturalisation
Local integration aims to provide the refugee with the permanent right to stay in the country of asylum, including, in some situations, as a naturalized citizen. It follows the formal granting of refugee status by the country of asylum. It is difficult to quantify the number of refugees who settled and integrated in their first country of asylum, and only the number of naturalisations can give an indication. In 2014 Tanzania granted citizenship to 162,000 refugees from Burundi, having granted it to 32,000 Rwandan refugees in 1982. Mexico naturalised 6,200 Guatemalan refugees in 2001.
Voluntary return of refugees to their country of origin, in safety and dignity, is based on their free will and their informed decision. In recent years, parts of or even whole refugee populations have been able to return to their home countries: for example, 120,000 Congolese refugees returned from the Republic of Congo to the DRC, 30,000 Angolans returned home from the DRC and Botswana, Ivorian refugees returned from Liberia, Afghans from Pakistan, and Iraqis from Syria. In 2013, the governments of Kenya and Somalia also signed a tripartite agreement facilitating the repatriation of refugees from Somalia. The UNHCR and the IOM offer assistance to refugees who want to return voluntarily to their home countries. Many developed countries also have Assisted Voluntary Return (AVR) programmes for asylum seekers who want to go back or have been refused asylum.
Third country resettlement
Third country resettlement involves the assisted transfer of refugees from the country in which they have sought asylum to a safe third country that has agreed to admit them as refugees. This can be for permanent settlement or limited to a certain number of years. It is the third durable solution and it can only be considered once the two other solutions have proved impossible. The UNHCR has traditionally seen resettlement as the least preferable of the "durable solutions" to refugee situations. However, in April 2000 the then UN High Commissioner for Refugees, Sadako Ogata, stated "Resettlement can no longer be seen as the least-preferred durable solution; in many cases it is the only solution for refugees."
Internally displaced person
UNHCR's mandate has gradually been expanded to include protecting and providing humanitarian assistance to internally displaced persons (IDPs) and people in IDP-like situations. These are civilians who have been forced to flee their homes, but who have not reached a neighboring country. IDPs do not fit the legal definition of a refugee under the 1951 Refugee Convention, 1967 Protocol and the 1969 Organization for African Unity Convention, because they have not left their country. As the nature of war has changed in the last few decades, with more and more internal conflicts replacing interstate wars, the number of IDPs has increased significantly.
The term refugee is often used in different contexts: in everyday usage it refers to a forcibly displaced person who has fled their country of origin; in a more specific context it refers to such a person who was, on top of that, granted refugee status in the country the person fled to. Even more exclusive is the Convention refugee status which is given only to persons who fall within the refugee definition of the 1951 Convention and the 1967 Protocol.
To receive refugee status, a person must have applied for asylum, making them—while waiting for a decision—an asylum seeker. However, a displaced person otherwise legally entitled to refugee status may never apply for asylum, or may not be allowed to apply in the country they fled to and thus may not have official asylum seeker status.
Once a displaced person is granted refugee status they enjoy certain rights as agreed in the 1951 Refugee Convention. Not all countries have signed and ratified this convention, and some countries do not have a legal procedure for dealing with asylum seekers.
An asylum seeker is a displaced person or immigrant who has formally sought the protection of the state they fled to, as well as the right to remain in this country, and who is waiting for a decision on this formal application. An asylum seeker may have applied for Convention refugee status or for complementary forms of protection. Asylum is thus a category that includes different forms of protection. Which form of protection is offered depends on the legal definition that best describes the asylum seeker's reasons to flee. Once the decision is made, the asylum seeker either receives Convention refugee status or a complementary form of protection and can stay in the country, or is refused asylum and then often has to leave. Only after the state, territory or the UNHCR—wherever the application was made—recognises the protection needs does the asylum seeker officially receive refugee status. This carries certain rights and obligations, according to the legislation of the receiving country.
Quota refugees do not need to apply for asylum on arrival in the third country, as they have already gone through the UNHCR refugee status determination process while in the first country of asylum, and this determination is usually accepted by third countries.
Refugee status determination
To receive refugee status, a displaced person must go through a Refugee Status Determination (RSD) process, which is conducted by the government of the country of asylum or the UNHCR, and is based on international, regional or national law. RSD can be done on a case by case basis as well as for whole groups of people. Which of the two processes is used often depends on the size of the influx of displaced persons.
There is no specific method mandated for RSD (apart from the commitment to the 1951 Refugee Convention) and it is subject to the overall efficacy of the country's internal administrative and judicial system as well as the characteristics of the refugee flow to which the country responds. This lack of a procedural direction could create a situation where political and strategic interests override humanitarian considerations in the RSD process. There are also no fixed interpretations of the elements in the 1951 Refugee Convention and countries may interpret them differently (see also refugee roulette).
However, in 2013, the UNHCR conducted RSD in more than 50 countries and co-conducted it, in parallel or jointly with governments, in another 20 countries, which made it the second largest RSD body in the world. The UNHCR follows a set of guidelines described in the Handbook and Guidelines on Procedures and Criteria for Determining Refugee Status to determine which individuals are eligible for refugee status.
Refugee rights encompass customary law, peremptory norms, and international legal instruments. If the entity granting refugee status is a state that has signed the 1951 Refugee Convention, then the refugee has the right to employment. Further rights include the following rights and obligations for refugees:
Right of return
Even in a supposedly "post-conflict" environment, it is not a simple process for refugees to return home. The UN Pinheiro Principles are guided by the idea that people not only have the right to return home, but also the right to the same property. They seek to restore the pre-conflict status quo and ensure that no one profits from violence. Yet this is a very complex issue and every situation is different; conflict is a highly transformative force and the pre-war status quo can never be completely re-established, even if that were desirable (it may have caused the conflict in the first place). Therefore, the following circumstances are of particular importance to the right of return. Returning refugees may:
- Never have had property (e.g., in Afghanistan)
- Be unable to access what property they have (Colombia, Guatemala, South Africa and Sudan)
- Find that ownership is unclear, as families have expanded or split and division of the land becomes an issue
- Find that the death of the owner has left dependents without a clear claim to the land
- Find that people settled on the land know it is not theirs but have nowhere else to go (as in Colombia, Rwanda and Timor-Leste)
- Face competing claims from others, including the state and its foreign or local business partners (as in Aceh, Angola, Colombia, Liberia and Sudan).
Refugees who were resettled to a third country will likely lose the indefinite leave to remain in that country if they return to their country of origin or the country of first asylum.
Right to non-refoulement
Non-refoulement is the right not to be returned to a place of persecution and is the foundation for international refugee law, as outlined in the 1951 Convention Relating to the Status of Refugees. The right to non-refoulement is distinct from the right to asylum. To respect the right to asylum, states must not deport genuine refugees. In contrast, the right to non-refoulement allows states to transfer genuine refugees to third party countries with respectable human rights records. The portable procedural model, proposed by political philosopher Andy Lamey, emphasizes the right to non-refoulement by guaranteeing refugees three procedural rights (to a verbal hearing, to legal counsel, and to judicial review of detention decisions) and ensuring those rights in the constitution. This proposal attempts to strike a balance between the interest of national governments and the interests of refugees.
Right to family reunification
Family reunification (which can also be a form of resettlement) is a recognized reason for immigration in many countries. Divided families have the right to be reunited if a family member with permanent right of residency applies for the reunification and can prove that the people on the application were a family unit before arrival and wish to live together again as a family unit. If the application is successful, this enables the rest of the family to immigrate to that country as well.
Right to travel
Those states that signed the Convention Relating to the Status of Refugees are obliged to issue travel documents (i.e. "Convention Travel Documents") to refugees lawfully residing in their territory.[D] Such a document is valid in place of a passport; however, it cannot be used to travel to the country of origin, i.e. the country from which the refugee fled.
Restriction of onward movement
Once refugees or asylum seekers have found a safe place and the protection of a state or territory outside their territory of origin, they are discouraged from leaving again and seeking protection in another country. If they do move onward into a second country of asylum, this movement is also called "irregular movement" by the UNHCR (see also asylum shopping). UNHCR support in the second country may be less than in the first country, and they can even be returned to the first country.
World Refugee Day
World Refugee Day has been held annually on 20 June since 2000, when it was established by a special United Nations General Assembly resolution. 20 June had previously been commemorated as "African Refugee Day" in a number of African countries.
In the United Kingdom World Refugee Day is celebrated as part of Refugee Week. Refugee Week is a nationwide festival designed to promote understanding and to celebrate the cultural contributions of refugees, and features many events such as music, dance and theatre.
Displacement is a long-lasting reality for most refugees. Two-thirds of all refugees around the world have been displaced for over three years, which is known as being in 'protracted displacement'. 50% of refugees – around 10 million people – have been displaced for over ten years.
The Overseas Development Institute has found that aid programmes need to move from short-term models of assistance (such as food or cash handouts) to more sustainable long-term programmes that help refugees become more self-reliant. This can involve tackling difficult legal and economic environments, by improving social services, job opportunities and laws.
Refugees typically report poorer levels of health, compared to other immigrants and the non-immigrant population.
Apart from physical wounds or starvation, a large percentage of refugees develop symptoms of post-traumatic stress disorder (PTSD) or depression. These long-term mental problems can severely impede the functionality of the person in everyday situations; it makes matters even worse for displaced persons who are confronted with a new environment and challenging situations. They are also at high risk for suicide.
Among other symptoms, post-traumatic stress disorder involves anxiety, over-alertness, sleeplessness, chronic fatigue syndrome, motor difficulties, failing short-term memory, amnesia, nightmares and sleep paralysis. Flashbacks are characteristic of the disorder: the patient experiences the traumatic event, or pieces of it, again and again. Depression is also characteristic of patients with PTSD and may occur without accompanying PTSD.
PTSD was diagnosed in 34.1% of Palestinian children, most of whom were refugees, males, and working. The participants were 1,000 children aged 12 to 16 years from governmental, private, and United Nations Relief and Works Agency (UNRWA) schools in East Jerusalem and various governorates in the West Bank.
Another study showed that 28.3% of Bosnian refugee women had symptoms of PTSD three or four years after their arrival in Sweden. These women also had significantly higher risks of symptoms of depression, anxiety, and psychological distress than Swedish-born women. For depression the odds ratio was 9.50 among Bosnian women.
A study by the Department of Pediatrics and Emergency Medicine at the Boston University School of Medicine demonstrated that twenty percent of Sudanese refugee minors living in the United States had a diagnosis of post-traumatic stress disorder. They were also more likely to have worse scores on all the Child Health Questionnaire subscales.
In a study for the United Kingdom, refugees were found to be 4 percentage points more likely to report a mental health problem compared to the non-immigrant population. This contrasts with the results for other immigrant groups, which were less likely to report a mental health problem compared to the non-immigrant population.
Many more studies illustrate the problem. One meta-study was conducted by the psychiatry department of Oxford University at Warneford Hospital in the United Kingdom. Twenty surveys were analyzed, providing results for 6,743 adult refugees from seven countries. In the larger studies, 9% were diagnosed with post-traumatic stress disorder and 5% with major depression, with evidence of much psychiatric co-morbidity. Five surveys of 260 refugee children from three countries yielded a prevalence of 11% for post-traumatic stress disorder. According to this study, refugees resettled in Western countries could be about ten times more likely to have PTSD than age-matched general populations in those countries. Worldwide, tens of thousands of refugees and former refugees resettled in Western countries probably have post-traumatic stress disorder.
Refugees are often more susceptible to illness for several reasons, including a lack of immunity to local strains of malaria and other diseases. Displacement of a people can create favorable conditions for disease transmission. Refugee camps are typically heavily populated with poor sanitary conditions. The removal of vegetation for space, building materials or firewood also deprives mosquitoes of their natural habitats, leading them to interact more closely with humans. In the 1970s, Afghan refugees who were relocated to Pakistan moved from a country with an effective malaria control strategy to a country with a less effective system.
Refugee camps that were built near rivers or irrigation sites had a higher malaria prevalence than refugee camps built on dry land. The location of these camps provided better breeding grounds for mosquitoes, and thus a higher likelihood of malaria transmission. Children aged 1–15 were the most susceptible to malaria infection, which is a significant cause of mortality in children younger than 5. Malaria was the cause of 16% of the deaths of refugee children younger than 5 years of age. Malaria is one of the most commonly reported causes of death in refugees and displaced persons. Since 2014, reports of malaria cases in Germany have doubled compared to previous years, with the majority of cases found in refugees from Eritrea.
The World Health Organization recommends that all people in areas where malaria is endemic use long-lasting insecticidal nets. A cohort study found that within refugee camps in Pakistan, insecticide-treated bed nets were very useful in reducing malaria cases. A single treatment of the nets with the insecticide permethrin remained protective throughout the six-month transmission season.
Access to healthcare services
Access to services depends on many factors, including whether a refugee has received official status, is situated within a refugee camp, or is in the process of third country resettlement. The UNHCR recommends integrating access to primary care and emergency health services with the host country in as equitable a manner as possible. Prioritized services include areas of maternal and child health, immunizations, tuberculosis screening and treatment, and HIV/AIDS-related services. Despite inclusive stated policies for refugee access to health care on the international level, potential barriers to that access include language, cultural preferences, high financial costs, administrative hurdles, and physical distance. Specific barriers and policies related to health service access also emerge based on the host country context. For example, primaquine, an often-recommended malaria treatment, is not currently licensed for use in Germany and must be ordered from outside the country.
In Canada, barriers to healthcare access include the lack of adequately trained physicians, complex medical conditions of some refugees and the bureaucracy of medical coverage. There are also individual barriers to access such as language and transportation barriers, institutional barriers such as bureaucratic burdens and lack of entitlement knowledge, and systems level barriers such as conflicting policies, racism and physician workforce shortage.
In the US, all officially designated Iraqi refugees had health insurance coverage, compared to a little more than half of non-Iraqi immigrants, in a Dearborn, Michigan, study. However, greater barriers existed around transportation, language and successful stress-coping mechanisms for refugees than for other immigrants; in addition, refugees reported more medical conditions. The study also found that refugees had a higher healthcare utilization rate (92.1%) compared to the US overall population (84.8%) and immigrants (58.6%) in the study population.
Within Australia, officially designated refugees who qualify for temporary protection and offshore humanitarian refugees are eligible for health assessments, interventions and access to health insurance schemes and trauma-related counseling services. Despite being eligible to access services, barriers include economic constraints around perceived and actual costs carried by refugees. In addition, refugees must cope with a healthcare workforce unaware of the unique health needs of refugee populations. Perceived legal barriers, such as the fear that disclosing medical conditions could prevent the reunification of family members, and current policies that reduce assistance programs may also limit access to health care services.
Providing access to healthcare for refugees through integration into the current health systems of host countries may also be difficult when operating in a resource-limited setting. In this context, barriers to healthcare access may include political aversion in the host country and the already strained capacity of the existing health system. Political aversion to refugee access into the existing health system may stem from the wider issue of refugee resettlement. One approach to limiting such barriers is to move away from a parallel administrative system, in which UNHCR-supported refugees may receive better healthcare than host nationals but which is unsustainable financially and politically, toward integrated care in which refugees and host nationals receive equal and improved care all around. In the 1980s, Pakistan attempted to address Afghan refugee healthcare access through the creation of Basic Health Units inside the camps. Funding cuts closed many of these programs, forcing refugees to seek healthcare from the local government. In response to a protracted refugee situation in the West Nile district, Ugandan officials with UNHCR created an integrative healthcare model for the mostly Sudanese refugee population and Ugandan citizens. Local nationals now access health care in facilities initially created for refugees.
One potential argument for limiting refugee access to healthcare is cost, as states seek to decrease their health expenditure burdens. However, Germany found that restricting refugee access led to higher actual expenditures than for refugees who had full access to healthcare services. The legal restrictions on access to health care and the administrative barriers in Germany have been criticized since the 1990s for leading to delayed care, for increasing the direct and administrative costs of health care, and for shifting the responsibility for care from the less expensive primary care sector to costly treatments for acute conditions in the secondary and tertiary sectors.
Refugee populations consist of people who are terrified and are away from familiar surroundings. There can be instances of exploitation at the hands of enforcement officials, citizens of the host country, and even United Nations peacekeepers. Instances of human rights violations, child labor, mental and physical trauma/torture, violence-related trauma, and sexual exploitation, especially of children, have been documented. In many refugee camps in three war-torn West African countries, Sierra Leone, Guinea, and Liberia, young girls were found to be exchanging sex for money, a handful of fruit, or even a bar of soap. Most of these girls were between 13 and 18 years of age. In most cases, if the girls had been forced to stay, they would have been forced into marriage. They became pregnant around the age of 15 on average. This happened as recently as 2001. Parents tended to turn a blind eye because sexual exploitation had become a "mechanism of survival" in these camps.
Large groups of displaced persons can be abused as "weapons" to threaten political enemies or neighbouring countries.
Very rarely, refugees have been used and recruited as militants or terrorists, and humanitarian aid directed at refugee relief has very rarely been used to fund the acquisition of arms. Support from a refugee-receiving state has rarely been used to enable refugees to mobilize militarily, allowing conflict to spread across borders.
Historically, refugee populations have often been portrayed as a security threat. In the U.S. and Europe, there has been much focus on the narrative that terrorists maintain networks amongst transnational, refugee, and migrant populations. This fear has been exaggerated into a modern-day Islamist terrorism Trojan Horse, in which terrorists hide among refugees and penetrate host countries. 'Muslim-refugee-as-an-enemy-within' rhetoric is relatively new, but the underlying scapegoating of out-groups for domestic societal problems, fears and ethno-nationalist sentiment is not. In the 1890s, the influx of Eastern European Jewish refugees to London, coupled with the rise of anarchism in the city, led to a confluence of threat perception and fear of the refugee out-group. Populist rhetoric then, too, propelled debate over migration control and protecting national security.
Cross-national empirical verification, or rejection, of populist suspicion and fear of refugees' threat to national security and terror-related activities is relatively scarce. Case studies suggest that the threat of an Islamist refugee Trojan Horse is highly exaggerated. Of the 800,000 refugees vetted through the resettlement program in the United States between 2001 and 2016, only five were subsequently arrested on terrorism charges; and 17 of the 600,000 Iraqis and Syrians who arrived in Germany in 2015 were investigated for terrorism. One study found that European jihadists tend to be 'homegrown': over 90% were residents of a European country and 60% had European citizenship. While the statistics do not support the rhetoric, a Pew Research Center survey of ten European countries (Hungary, Poland, Netherlands, Germany, Italy, Sweden, Greece, UK, France, and Spain) released on 11 July 2016 found that majorities (ranging from 52% to 76%) of respondents in eight countries (Hungary, Poland, Netherlands, Germany, Italy, Sweden, Greece, and UK) think refugees increase the likelihood of terrorism in their country. Since 1975, in the U.S., the risk of dying in a terror attack by a refugee is 1 in 3.6 billion per year; whereas the odds of dying in a motor vehicle crash are 1 in 113, by state-sanctioned execution 1 in 111,439, and by dog attack 1 in 114,622.
In Europe, fear of immigration, Islamification and job and welfare benefits competition has fueled an increase in violence. Immigrants are perceived as a threat to ethno-nationalist identity and increase concerns over criminality and insecurity.
In the Pew survey previously referenced, 50% of respondents believe that refugees are a burden due to job and social benefit competition. When Sweden received over 160,000 asylum seekers in 2015, this was accompanied by 50 attacks against asylum seekers, more than four times the number of attacks that occurred in the previous four years. At the incident level, the 2011 Utøya, Norway, terror attack by Breivik demonstrates the impact of this threat perception on a country's risk from domestic terrorism, in particular ethno-nationalist extremism. Breivik portrayed himself as a protector of Norwegian ethnic identity and national security, fighting against immigrant criminality, competition and welfare abuse and an Islamic takeover.
According to a 2018 study in the Journal of Peace Research, states often resort to anti-refugee violence in response to terrorist attacks or security crises. The study notes that there is evidence to suggest that "the repression of refugees is more consistent with a scapegoating mechanism than the actual ties and involvement of refugees in terrorism."
The category of “refugee” tends to have a universalizing effect on those classified as such. It draws upon the common humanity of a mass of people in order to inspire public empathy, but doing so can have the unintended consequence of silencing refugee stories and erasing the political and historical factors that led to their present state. Humanitarian groups and media outlets often rely on images of refugees that evoke emotional responses and are said to speak for themselves. The refugees in these images, however, are not asked to elaborate on their experiences, and thus their narratives are all but erased. From the perspective of the international community, “refugee” is a performative status equated with injury, ill health, and poverty. When people no longer display these traits, they are no longer seen as ideal refugees, even if they still fit the legal definition. For this reason, there is a need to improve current humanitarian efforts by acknowledging the “narrative authority, historical agency, and political memory” of refugees alongside their shared humanity. Dehistoricizing and depoliticizing refugees can have dire consequences. Rwandan refugees in Tanzanian camps, for example, were pressured to return to their home country before they believed it was truly safe to do so. Despite the fact that refugees, drawing on their political history and experiences, claimed that Tutsi forces still posed a threat to them in Rwanda, their narrative was overshadowed by the U.N. assurances of safety. When the refugees did return home, reports of reprisals against them, land seizures, disappearances, and incarceration abounded, as they had feared.
Integrating refugees into the workforce is one of the most important steps toward the overall integration of this particular migrant group. Many refugees are unemployed, under-employed, under-paid or working in the informal economy, if not receiving public assistance. Refugees encounter many barriers in receiving countries to finding and sustaining employment commensurate with their experience and expertise. A systemic barrier that operates across multiple levels (i.e. the institutional, organizational and individual levels) has been termed the "canvas ceiling".
Refugee children come from many different backgrounds, and their reasons for resettlement are even more diverse. The number of refugee children has continued to increase as conflicts interrupt communities at a global scale. In 2014 alone, there were approximately 32 armed conflicts in 26 countries around the world, and this period saw the highest number of refugees ever recorded. Refugee children experience traumatic events in their lives that can affect their learning capabilities, even after they have resettled in first or second settlement countries. Educators such as teachers, counselors, and school staff, along with the school environment, are key in facilitating the socialization and acculturation of recently arrived refugee and immigrant children in their new schools.
The experiences children go through during times of armed conflict can impede their ability to learn in an educational setting. Schools experience drop-outs of refugee and immigrant students due to an array of factors such as rejection by peers, low self-esteem, antisocial behavior, negative perceptions of their academic ability, and lack of support from school staff and parents. Because refugees come from various regions globally with their own cultural, religious, linguistic, and home practices, the new school culture can conflict with the home culture, causing tension between the student and their family.
Aside from students, teachers and school staff also face their own obstacles in working with refugee students. They have concerns about their ability to meet the mental, physical, emotional, and educational needs of these students. One study of newly arrived Bantu students from Somalia in a Chicago school questioned whether schools were equipped to provide them with a quality education that met the pupils' needs. The students were not aware of how to use pencils, which caused them to break the tips, requiring frequent sharpening. Teachers may even see refugee students as different from other immigrant groups, as was the case with the Bantu pupils. Teachers may sometimes feel that their work is made harder by the pressure to meet state requirements for testing. With refugee children falling behind or struggling to catch up, this can overwhelm teachers and administrators and lead to anger.
Not all students adjust the same way to their new setting. One student may take only three months, while others may take four years. One study found that even in their fourth year of schooling, Lao and Vietnamese refugee students in the US were still in a transitional status. Refugee students continue to encounter difficulties throughout their years in schools that can hinder their ability to learn. Furthermore, to provide proper support, educators must consider the experiences students had before they settled in the US.
In their first settlement countries, refugee students may encounter negative experiences with education that they can carry with them post settlement. For example:
- Frequent disruption in their education as they move from place to place
- Limited access to schooling
- Language barriers
- Few resources to support language development and learning, and more
Statistics show gaps in school attendance by refugee students in places such as Uganda and Kenya: 80% of refugees in Uganda were attending school, whereas only 46% were attending school in Kenya. Furthermore, for secondary levels, the numbers were much lower. Only 1.4% of refugee students were attending school in Malaysia. This trend is evident across several first settlement countries and carries negative impacts for students once they arrive in their permanent settlement homes, such as the US, and have to navigate a new education system. Unfortunately, some refugees do not have the chance to attend school in their first settlement countries because they are considered undocumented immigrants, as is the case for Rohingya refugees in Malaysia. In other cases, such as Burundians in Tanzania, refugees can get more access to education while in displacement than in their home countries.
All students need some form of support to help them overcome obstacles and challenges they may face in their lives, especially refugee children who may experience frequent disruptions. There are a few ways in which schools can help refugee students overcome obstacles to attain success in their new homes.
- Respect the cultural differences between refugees' home cultures and the new home culture
- Individual efforts to welcome refugees to prevent feelings of isolation
- Educator support
- Student-centered pedagogy as opposed to teacher-centered
- Building relationships with the students
- Offering praise and providing affirmations
- Providing extensive support and designing curriculum for students to read, write, and speak in their native languages.
One school in NYC has found a method that works for it in helping refugee students succeed. This school provides support for language and literacies, which encourages students to use English and their native languages to complete projects. Furthermore, it uses a learning-centered pedagogy, which promotes the idea that there are multiple entry points to engage students in learning. Both strategies have helped refugee students succeed during their transition into US schools.
Various websites contain resources that can help school staff better learn to work with refugee students such as Bridging Refugee Youth and Children's Services. With the support of educators and the school community, education can help rebuild the academic, social, and emotional well being of refugee students who have suffered from past and present trauma, marginalization, and social alienation.
It is important to understand the cultural differences between newly arrived refugees and the school culture, such as that of the U.S. Misunderstanding these differences can be problematic because of the frequent disruptions it can create in a classroom setting.
In addition, because of the differences in language and culture, students are often placed in lower classes due to their lack of English proficiency. Students can also be made to repeat classes because of their lack of English proficiency, even if they have mastered the content of the class. When schools have the resources and are able to provide separate classes for refugee students to develop their English skills, it can take the average refugee student only three months to catch up with their peers. This was the case with Somali refugees at some primary schools in Nairobi.
The histories of refugee students are often hidden from educators, resulting in cultural misunderstandings. However, when teachers, school staff, and peers help refugee students develop a positive cultural identity, it can help buffer the negative effects refugees' experiences have on them, such as poor academic performance, isolation, and discrimination.
The term refugee crisis can refer to movements of large groups of displaced persons, who could be internally displaced persons, refugees or other migrants. It can also refer to incidents in the country of origin or departure, to large problems whilst on the move, or even to problems after arrival in a safe country that involve large groups of displaced persons.
In 2018, the United Nations estimated the number of forcibly displaced people to be 68.5 million worldwide. Of those, 25.4 million are refugees, 40 million are internally displaced within a nation state and 3.1 million are classified as asylum seekers. 85% of refugees are hosted in developing countries, with 57% coming from Syria, Afghanistan and South Sudan. Turkey is the top hosting country of refugees, with 3.5 million displaced people within its borders.
In 2006, there were 8.4 million UNHCR-registered refugees worldwide, the lowest number since 1980. At the end of 2015, there were 16.1 million refugees worldwide. When the 5.2 million Palestinian refugees who are under UNRWA's mandate are added, there were 21.3 million refugees worldwide. Overall forced displacement worldwide reached a total of 65.3 million displaced persons at the end of 2015, up from 59.5 million 12 months earlier. One in every 113 people globally is an asylum seeker, an internally displaced person or a refugee. In 2015, the total number of displaced people worldwide, including refugees, asylum seekers and internally displaced persons, was at its highest level on record.
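As a rough check of the one-in-113 figure, one can divide an approximate end-of-2015 world population by the 65.3 million total of forcibly displaced persons cited above; a minimal back-of-envelope sketch, assuming a world population of roughly 7.35 billion, which is not a figure stated in this article:

$$\frac{7.35 \times 10^{9}\ \text{people worldwide}}{65.3 \times 10^{6}\ \text{forcibly displaced persons}} \approx 113$$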
Among them, Syrian refugees were the largest group in 2015, at 4.9 million. In 2014, Syrians had overtaken Afghan refugees (2.7 million), who had been the largest refugee group for three decades. Somalis were the third largest group, with one million. The countries hosting the largest numbers of refugees according to the UNHCR were Turkey (2.5 million), Pakistan (1.6 million), Lebanon (1.1 million) and Iran (1 million). The countries with the largest numbers of internally displaced people were Colombia at 6.9 million, Syria at 6.6 million and Iraq at 4.4 million.
Children made up 51% of refugees in 2015, and most of them were separated from their parents or travelling alone. In 2015, 86% of the refugees under UNHCR's mandate were in low- and middle-income countries that are themselves close to situations of conflict. Refugees have historically tended to flee to nearby countries with ethnic kin populations and a history of accepting other co-ethnic refugees. Religious, sectarian and denominational affiliation has been an important feature of debate in refugee-hosting nations.
| Region (UN major area) | 2018 | 2017 | 2016 | 2014 | 2013 | 2012 | 2011 | 2010 | 2009 | 2008 |
|---|---|---|---|---|---|---|---|---|---|---|
| Latin America and the Caribbean | 215,924 | 252,288 | 322,403 | 352,700 | 382,000 | 380,700 | 377,800 | 373,900 | 367,400 | 350,300 |
- Asylum shopping
- Conservation refugee, people displaced when conservation areas are created
- Diaspora, a mass movement of population, usually forced by war or natural disaster
- Emergencybnb, a website to find accommodation for refugees
- Emergency evacuation
- Forced displacement in popular culture
- Homo sacer, a banned person who may be killed by anybody
- Human migration
- Language analysis for the determination of origin
- List of refugees
- List of people granted asylum
- Mehran Karimi Nasseri, an Iranian refugee who lived in Charles de Gaulle Airport
- Migrant literature
- No person is illegal, network that represents non-resident immigrants
- Open borders
- Political asylum
- Private Sponsorship of Refugees Program, resettling refugees with support and funding from private or joint government-private sponsorship
- Queer migration
- Refugee and Asylum Participatory Action Research
- Refugee health
- Refugee Nation, a plan to create a nation for refugees
- Refugee Radio
- Refugee Studies Centre
- Refugees United
- Refugee Olympic Athletes at the 2016 Summer Olympics
- Right of asylum
- The I Live Here Projects, a nonprofit storytelling organization
- Refugee children and refugee women
- Third country resettlement
- The "Convention Concerning the Exchange of Greek and Turkish Populations" was signed at Lausanne, Switzerland, on 30 January 1923, by the governments of Greece and Turkey.
- Bankier, David "Nuremberg Laws" pages 1076–1077 from The Encyclopedia of the Holocaust Volume 3 edited by Israel Gutman, New York: Macmillan, 1990 page 1076
- Text in League of Nations Treaty Series, vol. 171, p. 77.
- Under Article 28 of the Convention.
- "Populations | Global Focus".
- Convention Protocol relating 1967.
- Truth about asylum.
- "UNRWA | United Nations Relief and Works Agency for Palestine Refugees in the Near East". UNRWA. Retrieved 23 August 2017.
- Militant Islamist Ideology: Understanding the Global Threat
- In The Shadow Of The Sword: The Battle for Global Empire and the End of the Ancient World
- La vraye et entière histoire des troubles et guerres civiles advenues de nostre temps, tant en France qu'en Flandres & pays circonvoisins, depuis l'an mil cinq cens soixante, jusques à présent.
- Base de données du refuge huguenot
- Gwynn, Robin (5 May 1985). "England's 'First Refugees'". History Today. 35 (5). Retrieved 18 January 2019.
- Assembly of Heads of State and Government (Sixth Ordinary Session) 1969.
- Cartagena Declaration.
- Office of the United Nations High Commissioner for Refugees (UNHCR) 2011, p. 19.
- McCarthy 1995.
- Greek Turkish refugees.
- Hassell 1991.
- Humanisten Nansen (in.
- Nansen International Office.
- Old fears over 2006.
- U S Constitution.
- Nobel Peace Prize.
- Reich Citizenship Law.
- Forced displacement.
- Gelber 1993, pp. 323–39.
- Spanish Civil War.
- Refugees: Save Us! 1979.
- Statistisches Bundesamt, Die 1958.
- Forced Resettlement", "Population, 2003.
- Naimark 1995.
- de Zayas 1977.
- de Zayas 2006.
- Elliott 1973, pp. 253–275.
- Repatriation Dark Side.
- Forced Repatriation to.
- Final Compensation Pending.
- Forced Labor.
- Nazi Ostarbeiter (Eastern.
- Soviet Prisoners Forgotten.
- Soviet Prisoners-of-War.
- James D. Morrow, "The Institutional Features of the Prisoners of War Treaties," International Organization 55, no. 4 (2001), 984, http://www.jstor.org/stable/pdf/3078622.
- Patriots ignore greatest 2007, p. 2.
- Forced migration.
- Refugees, United Nations High Commissioner for. "Refworld – UNHCR CDR Background Paper on Refugees and Asylum Seekers from Azerbaijan".
- "ECRI REPORT ON AZERBAIJAN" (PDF). 31 May 2011.
- United Nations Relief 1994.
- International Refugee Organization 1994.
- Stein, Barry N., and Silvano M. Tomasi. "Foreword." The International Migration Review, vol. 15, no. 1/2, 1981, pp. 5–7. JSTOR, JSTOR, http://www.jstor.org/stable/2545317.
- Black, Richard. "Fifty years of refugee studies: From theory to policy." International Migration Review 35.1 (2001): 57-78.
- Malkki, Liisa H. (1995). "Refugees and Exile: From "Refugee Studies" to the National Order of Things". Annual Review of Anthropology. 24 (1): 495–523. doi:10.1146/annurev.an.24.100195.002431.
- United Nations High Commissioner for Refugees.
- Dehghanpisheh 2013.
- "Refugees solutions". UNHCR. Retrieved 26 August 2018.
- Markus 2014.
- Goldberg 2001.
- Schmitt 2014.
- Nairobi to open 2014.
- What is resettlement?.
- Resettlement: new beginning.
- Understanding Resettlement to 2004.
- UNHCR 2015.
- Refugee Status Determination.
- Higgins 2016, pp. 71–93.
- Refugees, United Nations High Commissioner for. "Handbook on Procedures and Criteria for Determining Refugee Status under the 1951 Convention and the 1967 Protocol relating to the Status of Refugees" (PDF).
- Sara Pantuliano (2009) Uncharted Territory: Land, Conflict and Humanitarian Action Overseas Development Institute
- Convention relating to.
- Lamey 2011, pp. 232–266.
- Executive Committee of the High Commissioner's Programme 1989.
- "Refugee Week (UK) About Us". Refugee Week. Retrieved 24 July 2018.
- "Day 10, Year of #Mygration: Pope Francis World Day of Migrants and Refugees, 14 January 2018". Research at The Open University. 12 January 2018. Retrieved 2 April 2018.
- Crawford N. et al. (2015) Protracted displacement: uncertain paths to self-reliance in exile Overseas Development Institute
- Giuntella, O.; Kone, Z.L.; Ruiz, I.; C. Vargas-Silva (2018). "Reason for immigration and immigrants' health". Public Health. 158: 102–109. doi:10.1016/j.puhe.2018.01.037. PMID 29576228.
- Suicide pact 2002.
- Khamis 2005, pp. 81–95.
- Sundquist et al. 2005, pp. 158–64.
- Geltman et al. 2005, pp. 585–91.
- Fazel, Wheeler & Danesh 2005, pp. 1309–14.
- Kazmi & Pandit 2001, pp. 1043–1055.
- Rowland et al. 2002, pp. 2061–2072.
- Karim et al. 2016, pp. 1–12.
- Mertans & Hall 2000, pp. 103–9.
- Roggelin et al. 2016, p. 325.
- Fact sheet Malaria.
- Kolaczinski 2004, pp. 15.
- United Nations High Commissioner for Refugees (UNHCR) (2011). "Ensuring Access to Health Care: Operational Guidance on Refugee Protection and Solutions in Urban Areas". Retrieved 11 February 2017.
- Roggelin, L; Tappe, D; Noack, B; Addo, M; Tannich, E; Rothe, C (2016). "Sharp increase of imported Plasmodium vivax malaria seen in migrants from Eritrea in Hamburg, Germany". Malaria. 15 (1): 325. doi:10.1186/s12936-016-1366-7. PMC 4912711. PMID 27316351.
- McMurray, J; Breward, K; Breward, M; Alder, R; Arya, N (2014). "Integrated Primary Care Improves Access to Healthcare for Newly Arrived Refugees in Canada". Journal of Immigrant and Minority Health. 16 (4): 576–585. doi:10.1007/s10903-013-9954-x. PMID 24293090.
- Elsouhag, D; Arnetz, B; Jamil, H; Lumley, MA; Broadbridge, CL; Arnetz, J (2015). "Factors Associated with Healthcare Utilization Among Arab Immigrants and Iraqi Refugees". Journal of Immigrant and Minority Health. 17 (5): 1305–1312. doi:10.1007/s10903-014-0119-3. PMC 4405449. PMID 25331684.
- Murray, SB; Skull, SA (2005). "Hurdles to health:Immigrant and refugee healthcare in Australia". Australian Health Review. 29 (1): 25–29. doi:10.1071/ah050025. PMID 15683352.
- Gany, F; De Bocanegra, H (1996). "Overcoming barriers to improving the health of immigrant women". J Am Med Womens Assoc. 51 (4): 155–60. PMID 8840732.
- Tuepker, A; Chi, CH (2009). "Evaluating integrated healthcare for refugees and hosts in an African context". Health Economics, Policy and Law. 4 (2): 159–178. doi:10.1017/s1744133109004824. PMID 19187568.
- Lawrie, N; van Damme, W (2003). "The importance of refugee-host relations: Guinea 1990–2003". The Lancet. 362 (9383): 575. doi:10.1016/s0140-6736(03)14124-4. PMID 12938671.
- Kazmi, JH; Pandit, K (2001). "Disease and dislocation: the impact of refugee movements on the geography of malaria in NWFP, Pakistan". Social Science & Medicine. 52 (7): 1043–1055. doi:10.1016/S0277-9536(01)00341-0. PMID 12406471.
- Rowley, EA; Burnham, GM; Drabe, RM (2006). "Evaluating integrated healthcare for refugees and hosts in an African context". Journal of Refugee Studies. 19 (2): 158–186. doi:10.1093/jrs/fej019.
- Bozorgmehr, K; Razum, O (2015). "Effect of restricting access to health care on health expenditures among asylum-seekers and refugees: a quasi-experimental study in Germany, 1994–2013". PLOS ONE. 10 (7): e0131483. Bibcode:2015PLoSO..1031483B. doi:10.1371/journal.pone.0131483. PMC 4511805. PMID 26201017.
- Pross, C (1998). "Third Class Medicine: Health Care for Refugees in Germany". Health and Human Rights. 3 (2): 40–53. doi:10.2307/4065298. JSTOR 4065298.
- Aggrawal 2005, pp. 514–525.
- United Nations High Commissioner for Refugees (UNHCR) 1999.
- Crisp 1999.
- Weiss 1999, pp. 1–22.
- Schmid, Alex (2016). "Links Between Terrorism and Migration: An Exploration" (PDF). Terrorism and Counter-Terrorism Studies. doi:10.19165/2016.1.04.
- Coser, Lewis (1956). The Functions of Social Conflict. The Free Press.
- Collyer, Michael (1 March 2005). "Secret agents: Anarchists, Islamists and responses to politically active refugees in London". Ethnic and Racial Studies. 28 (2): 278–303. doi:10.1080/01419870420000315852. ISSN 0141-9870.
- Milton, Daniel; Spencer, Megan; Findley, Michael (1 November 2013). "Radicalism of the Hopeless: Refugee Flows and Transnational Terrorism". International Interactions. 39 (5): 621–645. doi:10.1080/03050629.2013.834256. ISSN 0305-0629.
- Messari, N.; Klaauw, J. van der (1 December 2010). "Counter-Terrorism Measures and Refugee Protection in North Africa". Refugee Survey Quarterly. 29 (4): 83–103. doi:10.1093/rsq/hdq034. ISSN 1020-4067.
- Wilner, Alex S.; Dubouloz, Claire-Jehanne (1 February 2010). "Homegrown terrorism and transformative learning: an interdisciplinary approach to understanding radicalization". Global Change, Peace & Security. 22 (1): 33–51. doi:10.1080/14781150903487956. ISSN 1478-1158.
- Wike, Richard, Bruce Stokes, and Katie Simmons. "Europeans fear wave of refugees will mean more terrorism, fewer jobs." Pew Research Center 11 (2016).
- Nowrasteh, Alex (13 September 2016). "Terrorism and Immigration: A Risk Analysis". SSRN 2842277.
- "Injury Facts Chart". www.nsc.org. Retrieved 29 March 2017.
- McGowan, Lee (3 July 2014). "Right-Wing Violence in Germany: Assessing the Objectives, Personalities and Terror Trail of the National Socialist Underground and the State's Response to It". German Politics. 23 (3): 196–212. doi:10.1080/09644008.2014.967224. ISSN 0964-4008.
- Wiggen, Mette (1 December 2012). "Rethinking Anti-Immigration Rhetoric after the Oslo and Utøya Terror Attacks". New Political Science. 34 (4): 585–604. doi:10.1080/07393148.2012.729744. ISSN 0739-3148.
- Savun, Burcu; Gineste, Christian (2019). "From protection to persecution: Threat environment and refugee scapegoating". Journal of Peace Research. 56: 88–102. doi:10.1177/0022343318811432.
- Malkki, Liisa H. (1996). "Speechless Emissaries: Refugees, Humanitarianism, and Dehistoricization". Cultural Anthropology. 11 (3): 377–404. doi:10.1525/can.1996.11.3.02a00050.
- Feldman, Allen (1994). "On Cultural Anesthesia: From Desert Storm to Rodney King". American Ethnologist. 21 (2): 408–18.
- Fiddian-Qasmiyeh, Elena; et al. (2014). The Oxford Handbook of Refugee and Forced Migration Studies. Oxford University Press.
- Malkki, Liisa H. (1996). "Speechless Emissaries: Refugees, Humanitarianism, and Dehistoricization". Cultural Anthropology. 11 (3): 398.
- Lee, Eun Su; Szkudlarek, Betina; Nguyen, Duc Cuong; Nardon, Luciara. "Unveiling the Canvas Ceiling: A Multidisciplinary Literature Review of Refugee Employment and Workforce Integration". International Journal of Management Reviews. n/a (n/a). doi:10.1111/ijmr.12222. ISSN 1468-2370.
- Dryden-Peterson, S. (2015). The Educational Experiences of Refugee Children in Countries of First Asylum (Rep.). Washington, DC: Migration Policy Institute.
- Mcbrien, J. L. (2005). "Educational Needs and Barriers for Refugee Students in the United States: A Review of the Literature". Review of Educational Research. 75 (3): 329–364. CiteSeerX 10.1.1.459.5997. doi:10.3102/00346543075003329.
- Birman, D., & Tran, N. (2015). The Academic Engagement of Newly Arriving Somali Bantu Students in a U.S. Elementary School. Washington, DC: Migration Policy Institute.
- Liem Thanh Nguyen, & Henkin, A. (1980). Reconciling Differences: Indochinese Refugee Students in American Schools. The Clearinghouse, 54(3), 105–108. Retrieved from http://www.jstor.org/stable/30185415
- Fransen, S.; Vargas-Silva, C.; M. Siegel (2018). "The impact of refugee experiences on education: evidence from Burundi". IZA Journal of Development and Migration. 8. doi:10.1186/s40176-017-0112-4.
- Mendenhall, M.; Bartlett, L.; Ghaffar-Kucher, A. (2016). ""If You Need Help, They are Always There for us": Education for Refugees in an International High School in NYC". The Urban Review. 49 (1): 1–25. doi:10.1007/s11256-016-0379-4.
- "UNHCR Figures at a Glance".
- Refugees at highest 2016.
- Global Trends: Forced 2016.
- Unhcr 2015.
- Refugees.
- Rüegger & Bohnet 2015.
- Bassel 2012, p. 84.
- "Global forced displacement trends. 2018 (Annexes)" (PDF). United Nations Convention Relating to the Status of Refugees. 2018.
- "Global forced displacement trends. 2017 (Annexes)" (PDF). United Nations Convention Relating to the Status of Refugees. 2017.
- Global forced displacement 2016.
- Global forced displacement 2014.
- Global forced displacement 2013.
- Global forced displacement 2012.
- Global forced displacement 2011.
- Global forced displacement 2010.
- Global forced displacement 2009.
- Global forced displacement 2008.
- Aggrawal A (2005). "Refugee Medicine". In Payne-James JJ, Byard RW, Corey TS, Henderson C (eds.). Encyclopedia of Forensic and Legal Medicine. 3. London: Elsevier Academic Press. pp. 514–525.
- Assembly of Heads of State and Government (Sixth Ordinary Session) (September 1969). "OAU convention governing the specific aspects of refugee problems in Africa".
- Bassel, Leah (2012). Refugee Women: Beyond Gender Versus Culture.
- "Cartagena Declaration on Refugees".
- Convention and Protocol relating to the Status of Refugees (PDF), Geneva, Switzerland: Office of the United Nations High Commissioner for Refugees (UNHCR), Communications and Public Information Service, 1967
- "Convention relating to the Status of Refugees". www.ohchr.org. Retrieved 28 September 2015.
- Crisp, J. (1999). "A State of Insecurity: The Political Economy of Violence in Refugee-Populated Areas of Kenya". New Issues in Refugee Research (Working Paper No. 16).
- de Zayas, Alfred (1977). Nemesis at Potsdam. London and Boston: Routledge.
- de Zayas, Alfred (2006). A Terrible Revenge. Palgrave/Macmillan.
- Dehghanpisheh, Babak (10 April 2013). "Iraqi refugees in Syria feel new strains of war". The Washington Post. Retrieved 18 December 2015.
- "Detainee children 'in suicide pact'". CNN. 28 January 2002. Retrieved 22 May 2010.
- Elliott, Mark (June 1973). "The United States and Forced Repatriation of Soviet Citizens, 1944–47". Political Science Quarterly. 88 (2): 253–275. doi:10.2307/2149110. JSTOR 2149110.
- Executive Committee of the High Commissioner's Programme (13 October 1989). "Problem of Refugees and Asylum-Seekers Who Move in an Irregular Manner from a Country in Which They Had Already Found Protection".
- "Fact sheet about Malaria. (n.d.)". Retrieved 26 October 2016.
- Fazel, M; Wheeler, J; Danesh, J (2005). "Prevalence of serious mental disorder in 7000 refugees resettled in western countries: a systematic review". Lancet. 365 (9467): 1309–14. doi:10.1016/s0140-6736(05)61027-6. PMID 15823380.
- "Final Compensation Pending for Former Nazi Forced Laborers".
- "Forced Labor at Ford Werke AG during the Second World War". Archived from the original on 14 October 2007.
- "Forced Repatriation to the Soviet Union: The Secret Betrayal" (PDF). Retrieved 7 March 2017.
- "Forced Resettlement", "Population, Expulsion and Transfer", "Repatriation"". Encyclopaedia of Public International Law. 1–5. Amsterdam: North Holland Publishers. 1993–2003.[clarification needed]
- "Forced displacement of Czech population under Nazis in 1938 and 1943". Radio Prague.
- "Forced migration in the 20th century". Archived from the original on 21 October 2015.
- "France". Holocaust Encyclopedia. United States Holocaust Memorial Museum.
- Gelber, Yoav (1993). "The Historical Role of the Central European Immigration to Israel". The Leo Baeck Institute Year Book. 38 (1): 323–39. doi:10.1093/leobaeck/38.1.323. eISSN 1758-437X. ISSN 0075-8744.
- Geltman, PL; Grant-Knight, W; Mehta, SD; Lloyd-Travaglini, C; Lustig, S; Landgraf, JM; Wise, PH (2005). "The "lost boys of Sudan": functional and behavioral health of unaccompanied refugee minors re-settled in the United States". Archives of Pediatrics and Adolescent Medicine. 159 (6): 585–91. doi:10.1001/archpedi.159.6.585. PMID 15939860.
- "Global Trends: Forced Displacement 2015". UNHCR. 20 June 2016. Retrieved 20 June 2016.
- "Global forced displacement trends. (Annexes) UNHCR Statistical Yearbook". United Nations Convention Relating to the Status of Refugees. 2008. Retrieved 15 May 2016.
- "Global forced displacement trends. (Annexes) UNHCR Statistical Yearbook". United Nations Convention Relating to the Status of Refugees. 2009. Retrieved 15 May 2016.
- "Global forced displacement trends. (Annexes) UNHCR Statistical Yearbook". United Nations Convention Relating to the Status of Refugees. 2010. Retrieved 15 May 2016.
- "Global forced displacement trends. (Annexes) UNHCR Statistical Yearbook". United Nations Convention Relating to the Status of Refugees. 2011. Retrieved 15 May 2016.
- "Global forced displacement trends. (Annexes) UNHCR Statistical Yearbook". United Nations Convention Relating to the Status of Refugees. 2012. Retrieved 15 May 2016.
- "Global forced displacement trends. (Annexes) UNHCR Statistical Yearbook". United Nations Convention Relating to the Status of Refugees. 2013. Retrieved 15 May 2016.
- "Global forced displacement trends. (Annexes) UNHCR Statistical Yearbook". United Nations Convention Relating to the Status of Refugees. 2014. Retrieved 15 May 2016.
- "Global forced displacement trends. (Annexes) UNHCR Statistical Yearbook" (PDF). United Nations Convention Relating to the Status of Refugees. 2016. Retrieved 8 May 2018.
- Goldberg, Diana (29 November 2001). "From refugee to citizen: a Guatemalan in Mexico". UNHCR. Retrieved 9 July 2016.CS1 maint: ref=harv (link)
- "Greek and Turkish refugees and deportees 1912–1924" (PDF). Universiteit Leiden. Archived from the original (PDF) on 16 July 2007.
- Hassell, James E. (1991). Russian Refugees in France and the United States Between the World Wars. American Philosophical Society. p. 1. ISBN 978-0-87169-817-9.CS1 maint: ref=harv (link)
- Higgins, C. (2016). "New evidence on refugee status determination in Australia, 1978–1983". Refugee Survey Quarterly. 35 (3): 71–93. doi:10.1093/rsq/hdw008.CS1 maint: ref=harv (link)
- "Humanisten Nansen (in Norwegian)". Arkivverket.no. Archived from the original on 26 January 2013.
- "International Refugee Organization". Infoplease 2000–2006 Pearson Education. The Columbia Electronic Encyclopedia. 1994. Retrieved 13 October 2006.
- Karim, AM; Hussain, I; Malik, SK; Lee, JH; Cho, IH; Kim, YB; Lee, SH (2016). "Epidemiology and the Clinical Burden of Malaria in the War-Torn Area, Orakzai Agency in Pakistan". PLOS Neglected Tropical Diseases. 10 (1): 1–12. doi:10.1371/journal.pntd.0004399. PMC 4725727. PMID 26809063.CS1 maint: ref=harv (link)
- Kazmi, JH; Pandit, K (2001). "Disease and dislocation: the impact of refugee movements on the geography of malaria in NWFP, Pakistan". Social Science & Medicine. 52 (7): 1043–1055. doi:10.1016/S0277-9536(01)00341-0. PMID 12406471.CS1 maint: ref=harv (link)
- Khamis, V (2005). "Post-traumatic stress disorder among school age Palestinian children". Child Abuse Negl. 29 (1): 81–95. doi:10.1016/j.chiabu.2004.06.013. PMID 15664427.CS1 maint: ref=harv (link)
- Kolaczinski, J. H. (2004). "Subsidized Sales of Insecticide-Treated Nets in Afghan Refugee Camps Demonstrate the Feasibility of a Transition from Humanitarian Aid Towards Sustainability". Malaria Journal. 3: 15. doi:10.1186/1475-2875-3-15. PMC 434525. PMID 15191614.CS1 maint: ref=harv (link)
- Lamey, Andy (2011). Frontier Justice. Canada: Anchor Canada. ISBN 978-0-385-66255-0.CS1 maint: ref=harv (link)
- Learn. "Raising the voice of the invisible Urban Refugees | Raising the voice of the invisible". Urban Refugees. Retrieved 18 December 2015.CS1 maint: ref=harv (link)
- Mahmoud, Hala W. "Shattered dreams of Sudanese refugees in Cairo" (PDF). Forced Migration Review (FMR). Retrieved 9 July 2016.CS1 maint: ref=harv (link)
- Markus, Francis (17 October 2014). "Tanzania grants citizenship to 162,000 Burundian refugees in historic decision". UNHCR. Retrieved 9 July 2016.CS1 maint: ref=harv (link)
- McCarthy, Justin (1995). Death and Exile: The Ethnic Cleansing of Ottoman Muslims, 1821–1922. Darwin Press. ISBN 978-0-87850-094-9.CS1 maint: ref=harv (link)
- Mertans, P; Hall, L (2000). "Malaria on the move: human population movement and malaria transmission". Emerging Infectious Diseases. 6 (2): 103–9. doi:10.3201/eid0602.000202. PMC 2640853. PMID 10756143.CS1 maint: ref=harv (link)
- Naimark, Norman (1995). The Russians in Germany. Harvard University Press.CS1 maint: ref=harv (link)
- "Nairobi to open mission in Mogadishu". Standard Digital. 19 February 2014. Retrieved 18 June 2016.
- "Nansen International Office for Refugee: The Nobel Peace Prize 1938". The Nobel Foundation.
- "The Nazi Ostarbeiter (Eastern Worker) Program". Collectinghistory.net.
- "The Nobel Peace Prize 1938: Nansen International Office for Refugees". Nobelprize.org.
- Office of the United Nations High Commissioner for Refugees (UNHCR) (July 2011). "UNHCR Resettlement Handbook" (PDF). p. 19.CS1 maint: ref=harv (link)
- "Old fears over new faces". The Seattle Times. 21 September 2006.
- "Patriots ignore greatest brutality". The Sydney Morning Herald. 13 August 2007.
- "Refugee Status Determination". unhcr.org. UNHCR. Retrieved 9 July 2016.
- "Refugees at highest ever level, reaching 65m, says UN". BBC News. 20 June 2016. Retrieved 20 June 2016.
- "Refugees: Save Us! Save Us!". Time. 9 July 1979.
- Refugees, United Nations High Commissioner for. "Global forced displacement hits record high". Retrieved 21 August 2016.CS1 maint: ref=harv (link)
- "Refugee". Online Etymological Dictionary. Retrieved 15 May 2016.
- "Reich Citizenship Law (English translation at the University of the West of England)". Archived from the original on 28 August 2012.
- "Repatriation -- The Dark Side of World War II". Archived from the original on 17 January 2012.
- "Resettlement: A new beginning in a third country". UNHCR. Retrieved 19 July 2009.
- Roggelin, L.; Tappe, D.; Noack, B.; Addo, M.M.; Tannich, E.; Rothe, C. (2016). "Sharp increase of imported Plasmodium vivax malaria seen in migrants from Eritrea in Hamburg, Germany". Malaria Journal. 15 (1): 325. doi:10.1186/s12936-016-1366-7. PMC 4912711. PMID 27316351.
- Rowland, M; Rab, MA; Freeman, T; Durrani, N; Rehman, N (2002). "Afghan refugees and the temporal and spatial distribution of malaria in Pakistan". Social Science & Medicine. 55 (11): 2061–2072. doi:10.1016/S0277-9536(01)00341-0. PMID 12406471.CS1 maint: ref=harv (link)
- Rüegger, Seraina; Bohnet, Heidrun (16 November 2015). "The Ethnicity of Refugees (ER): A new dataset for understanding flight patterns". Conflict Management and Peace Science. 35: 65–88. doi:10.1177/0738894215611865. ISSN 0738-8942.CS1 maint: ref=harv (link)
- Schmitt, Celine (28 August 2014). "Angola Repatriation: Antonio returns home after 40 years in DR Congo". UNHCR. Retrieved 9 July 2016.CS1 maint: ref=harv (link)
- Schmitt, Celine (5 August 2014). "UNHCR completes challenging repatriation of almost 120,000 Congolese refugees". UNHCR. Retrieved 9 July 2016.CS1 maint: ref=harv (link)
- "Soviet Prisoners of War: Forgotten Nazi Victims of World War II". Archived from the original on 30 March 2008.
- "Soviet Prisoners-of-War".
- "Spanish Civil War fighters look back". 28 February 2003.
- Statistisches Bundesamt, Die Deutschen Vertreibungsverluste. Wiesbaden. 1958.
- Sundquist, K; Johansson, LM; DeMarinis, V; Johansson, SE; Sundquist, J (2005). "Posttraumatic stress disorder and psychiatric co-morbidity: symptoms in a random sample of female Bosnian refugees". Eur Psychiatry. 20 (2): 158–64. doi:10.1016/j.eurpsy.2004.12.001. PMID 15797701.CS1 maint: ref=harv (link)
- The truth about asylum — Who's who: Refugee, Asylum Seeker, Refused asylum seeker, Economic migrant, London, England: Refugee Council, retrieved 7 September 2015
- UNHCR (8 December 2015). "UNHCR Statistical Yearbook 2014, 14th edition". UNHCR. Retrieved 7 March 2017.CS1 maint: ref=harv (link)
- UNHCR (20 June 2016). "UNHCR Global Trends Forced Displacement 2015" (PDF). UNHCR. Retrieved 11 January 2017.CS1 maint: ref=harv (link)
- US Centers for Disease Control and Prevention (CDC), Department of Health and Human Services (HHS) (2017). "Control of Communicable Diseases" (PDF) (Federal Register 82, no. 6890).
- "Understanding Resettlement to the UK: A Guide to the Gateway Protection Programme". Refugee Council on behalf of the Resettlement Inter-Agency Partnership. June 2004. Retrieved 19 July 2009.
- United Nations High Commissioner for Refugees (UNHCR) (2015). "UNHCR – Global Trends –Forced Displacement in 2014". UNHCR.
- United Nations High Commissioner for Refugees (UNHCR) (1999). "The Security and Civilian and Humanitarian Character of Refugee Camps and Settlements" (UNHCR EXCOM Report).CS1 maint: ref=harv (link)
- United Nations High Commissioner for Refugees (UNHCR) (2011). "Ensuring Access to Health Care: Operational Guidance on Refugee Protection and Solutions in Urban Areas".CS1 maint: ref=harv (link)
- United Nations High Commissioner for Refugees. "The UN Refugee Agency". UNHCR. Retrieved 18 December 2015.CS1 maint: ref=harv (link)
- "United Nations Relief and Rehabilitation Administration". Infoplease 2000–2006 Pearson Education. The Columbia Electronic Encyclopedia. 1994. Retrieved 13 October 2006.
- Weiss, Thomas G. (1999). "Principles, politics, and humanitarian action". Ethics & International Affairs. 13 (1): 1–22. doi:10.1111/j.1747-7093.1999.tb00322.x.CS1 maint: ref=harv (link)
- "What is resettlement? A new challenge". UNHCR. Retrieved 19 July 2009.
- Fell, Peter and Debra Hayes (2007), "What are they doing here? A critical guide to asylum and immigration." Venture Press.
- Gibney, Matthew J. (2004), "The Ethics and Politics of Asylum: Liberal Democracy and the Response to Refugees"', Cambridge University Press.
- Schaeffer, P (2010), 'Refugees: On the economics of political migration.' International Migration 48(1): 1–22.
- Refugee number statistics taken from 'Refugee', Encyclopædia Britannica CD Edition (2004).
- Waters, Tony (2001), Bureaucatizing the Good Samaritan, Westview Press.
- UNHCR (2001). Refugee protection: A Guide to International Refugee Law UNHCR, Inter-Parliamentary Union
|Library resources about | |
2.1 PRESSURE AT A POINT • Average pressure: the normal force pushing against a plane area divided by the area. • Pressure at a point: the limit of the ratio of normal force to area as the area approaches zero size at the point. • At a point, a fluid at rest has the same pressure in all directions: an element δA of very small area, free to rotate about its center when submerged in a fluid at rest, will have a force of constant magnitude acting on either side of it, regardless of its orientation. • To demonstrate this, a small wedge-shaped free body of unit width is taken at the point (x, y) in a fluid at rest (Fig. 2.1).
• There can be no shear forces, so the only forces are the normal surface forces and gravity. The equations of motion in the x and y directions are

  px δy − ps δs sin θ = ρ (δx δy/2) ax        py δx − ps δs cos θ − γ (δx δy/2) = ρ (δx δy/2) ay

in which px, py, ps are the average pressures on the three faces, γ is the unit gravity force of the fluid, ρ is its density, and ax, ay are the accelerations. • When the limit is taken as the free body is reduced to zero size, by allowing the inclined face to approach (x, y) while maintaining the same angle θ, and using δs sin θ = δy and δs cos θ = δx, the equations simplify to

  px δy − ps δy = ρ (δx δy/2) ax        py δx − ps δx − γ (δx δy/2) = ρ (δx δy/2) ay

The last term of the second equation is an infinitesimal of a higher order of smallness and may be neglected.
• When divided by δy and δx, respectively, and with ax = ay = 0 for a fluid at rest, the equations can be combined:

  px = py = ps    (2.1.1)

• Since θ is any arbitrary angle, this equation proves that the pressure is the same in all directions at a point in a static fluid. • Although the proof was carried out for a two-dimensional case, it may be demonstrated for the three-dimensional case with the equilibrium equations for a small tetrahedron of fluid with three faces in the coordinate planes and the fourth face inclined arbitrarily. • If the fluid is in motion (one layer moves relative to an adjacent layer), shear stresses occur and the normal stresses are, in general, no longer the same in all directions at a point; the pressure is then defined as the average of any three mutually perpendicular normal compressive stresses at a point. • In a fictitious fluid of zero viscosity (a frictionless fluid), no shear stresses can occur, and the pressure is the same in all directions at a point.
2.2 BASIC EQUATION OF FLUID STATICS

Pressure Variation in a Static Fluid • Force balance: the forces acting on an element of fluid at rest (Fig. 2.2) are surface forces and body forces. • With gravity the only body force acting, and with the y axis taken vertically upward, the body force is −γ δx δy δz in the y direction. • With pressure p at the center (x, y, z) of the element, the approximate force exerted on the face normal to the y axis closest to the origin is (p − (∂p/∂y)(δy/2)) δx δz, and on the opposite face (p + (∂p/∂y)(δy/2)) δx δz, where δy/2 is the distance from the center to a face normal to y.
• Summing the forces acting on the element in the y direction gives

  δFy = −(∂p/∂y) δx δy δz − γ δx δy δz

• For the x and z directions, since no body forces act,

  δFx = −(∂p/∂x) δx δy δz        δFz = −(∂p/∂z) δx δy δz

• The elemental force vector is δF = i δFx + j δFy + k δFz. • If the element is reduced to zero size, after dividing through by δx δy δz = δV, the expression becomes exact:

  δF/δV = −∇p − jγ    (2.2.1)

• This is the resultant force per unit volume at a point, which must be equated to zero for a fluid at rest. • The gradient ∇ is

  ∇ = i ∂/∂x + j ∂/∂y + k ∂/∂z    (2.2.2)
• −∇p is the vector field f of the surface pressure force per unit volume:

  f = −∇p    (2.2.3)

• The fluid static law of variation of pressure is then

  f − jγ = −∇p − jγ = 0    (2.2.4)

• For an inviscid fluid in motion, or a fluid so moving that the shear stress is everywhere zero, Newton's second law takes the form

  f − jγ = ρa    (2.2.5)

in which a is the acceleration of the fluid element and f − jγ is the resultant fluid force per unit volume when gravity is the only body force acting.
In component form, Eq. (2.2.4) becomes

  ∂p/∂x = 0        ∂p/∂y = −γ        ∂p/∂z = 0    (2.2.6)

The partials, for variation in horizontal directions, are one form of Pascal's law; they state that two points at the same elevation in the same continuous mass of fluid at rest have the same pressure. • Since p is then a function of y only,

  dp = −γ dy    (2.2.7)

This relates the change of pressure to unit gravity force and change of elevation, and holds for both compressible and incompressible fluids. • For fluids that may be considered homogeneous and incompressible, γ is constant, and the above equation, when integrated, becomes

  p = −γy + c    (2.2.8)

in which c is the constant of integration. The hydrostatic law of variation of pressure is frequently written in the form p = γh, in which h = −y is measured vertically downward from the free surface and p is the increase in pressure from that at the free surface.
Example 2.1 An oceanographer is to design a sea lab 5 m high to withstand submersion to 100 m, measured from sea level to the top of the sea lab. Find the pressure variation on a side of the container and the pressure on the top if the relative density of salt water is 1.020. At the top, h = 100 m, and

  p = γh = 1.020 × 9806 N/m³ × 100 m ≈ 1000 kPa

If y is measured from the top of the sea lab downward, the pressure variation is

  p(y) = 1.020 × 9806 × (100 + y) Pa ≈ 1000 + 10.0y kPa
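As a minimal numerical sketch of this hydrostatic relation (the function name and the constant γ_water = 9806 N/m³ are our own choices, not the text's), the pressure on the wall of the sea lab can be tabulated as follows:

#include <cstdio>

// Hydrostatic pressure p = S * gamma_water * h (Eq. 2.2.8, with h measured
// vertically downward from the free surface).
double hydrostaticPressure(double relDensity, double depthMeters)
{
    const double gammaWater = 9806.0;   // unit gravity force of water, N/m^3
    return relDensity * gammaWater * depthMeters;
}

int main()
{
    const double S = 1.020;             // relative density of salt water
    // y measured from the top of the 5 m high sea lab downward
    for (double y = 0.0; y <= 5.0; y += 1.0) {
        double p = hydrostaticPressure(S, 100.0 + y);
        std::printf("y = %.0f m   p = %.1f kPa\n", y, p / 1000.0);
    }
    return 0;
}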
Pressure Variation in a Compressible Fluid • When the fluid is a perfect gas at rest at constant temperature,

  p/ρ = p0/ρ0    (2.2.9)

• When the value of γ in Eq. (2.2.7) is replaced by ρg and ρ is eliminated between Eqs. (2.2.7) and (2.2.9),

  dp/p = −(g ρ0/p0) dy

• If p = p0 when y = y0, integration between limits gives the equation for variation of pressure with elevation in an isothermal gas:

  p = p0 exp[−g(y − y0) ρ0/p0]    (2.2.12)

• An atmosphere with a constant temperature gradient (lapse rate), T = T0 + βy, is treated similarly (see Example 2.3).
Example 2.2 Assuming isothermal conditions to prevail in the atmosphere, compute the pressure and density at 2000 m elevation if p = 10⁵ Pa and ρ = 1.24 kg/m³ at sea level. From Eq. (2.2.12),

  p = p0 exp(−g y ρ0/p0) = 10⁵ × exp(−9.806 × 2000 × 1.24/10⁵) ≈ 78.4 kPa

Then, from Eq. (2.2.9),

  ρ = ρ0 p/p0 = 1.24 × 0.784 ≈ 0.972 kg/m³
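The same computation can be sketched in a few lines of C++ (a toy check of the worked numbers; the variable names are ours):

#include <cmath>
#include <cstdio>

// Isothermal atmosphere: p = p0 * exp(-g * y * rho0 / p0), with density
// following from p/rho = p0/rho0 (Eq. 2.2.9).
int main()
{
    const double g    = 9.806;    // m/s^2
    const double p0   = 1.0e5;    // Pa at sea level (Example 2.2)
    const double rho0 = 1.24;     // kg/m^3 at sea level
    const double y    = 2000.0;   // elevation, m

    double p   = p0 * std::exp(-g * y * rho0 / p0);
    double rho = rho0 * p / p0;
    std::printf("p = %.1f kPa, rho = %.3f kg/m^3\n", p / 1000.0, rho);
    // Prints roughly p = 78.4 kPa, rho = 0.972 kg/m^3
    return 0;
}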
2.3 UNITS AND SCALES OF PRESSURE MEASUREMENT • Pressure may be expressed with reference to any arbitrary datum • absolute zero • local atmospheric pressure • Absolute pressure: difference between its value and a complete vacuum • Gage pressure: difference between its value and the local atmospheric pressure
The bourdon gage (Fig. 2.3): typical of the devices used for measuring gage pressures • pressure element is a hollow, curved, flat metallic tube closed at one end; the other end is connected to the pressure to be measured • when the internal pressure is increased, the tube tends to straighten, pulling on a linkage to which is attached a pointer and causing the pointer to move • the dial reads zero when the inside and outside of the tube are at the same pressure, regardless of its particular value • the gage measures pressure relative to the pressure of the medium surrounding the tube, which is the local atmosphere
Figure 2.4: the data and the relations of the common units of pressure measurement. • Standard atmospheric pressure is the mean pressure at sea level, 760 mm Hg. • A pressure expressed in terms of the length of a column of liquid is equivalent to the force per unit area at the base of the column. The relation for variation of pressure with altitude in a liquid is p = γh [Eq. (2.2.8)] (p in pascals, γ in newtons per cubic metre, and h in metres). • With the unit gravity force of any liquid expressed as its relative density S times the unit gravity force of water,

  γ = S γwater    (2.3.1)

• For water, γwater may be taken as 9806 N/m³.
Local atmospheric pressure is measured by • mercury barometer • aneroid barometer (measures the difference in pressure between the atmosphere and an evacuated box or tube, in a manner analogous to the bourdon gage except that the tube is evacuated and sealed) • Mercury barometer: a glass tube closed at one end, filled with mercury, and inverted so that the open end is submerged in mercury. • It has a scale by which the height of the column R can be determined. • The space above the mercury contains mercury vapor. If the pressure of the mercury vapor hv is given in millimetres of mercury and R is measured in the same units, the pressure at A may be expressed as

  hA = hv + R    (mm Hg)

Figure 2.5 Mercury barometer
Figure 2.4: a pressure may be located vertically on the chart, which indicates its relation to absolute zero and to local atmospheric pressure. • If the point is below the local-atmospheric-pressure line and is referred to gage datum, it is called negative, suction, or vacuum. • Example: the pressure 460 mm Hg abs, as at 1, with barometer reading 720 mm, may be expressed as −260 mm Hg, 260 mm Hg suction, or 260 mm Hg vacuum. • Note: pabs = pbar + pgage. Absolute pressures are written P, gage pressures p.
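A small helper makes this bookkeeping explicit (our own function; the conversion factor 1 mm Hg ≈ 133.322 Pa is standard):

#include <cstdio>

const double PA_PER_MMHG = 133.322;   // pascals per millimetre of mercury

// p_abs = p_bar + p_gage, all in the same units (here mm Hg)
double gageFromAbsolute(double pAbsMmHg, double barometerMmHg)
{
    return pAbsMmHg - barometerMmHg;
}

int main()
{
    // The example from the chart: 460 mm Hg abs with a 720 mm barometer
    double pGage = gageFromAbsolute(460.0, 720.0);
    std::printf("gage pressure: %.0f mm Hg (%.0f Pa)\n",
                pGage, pGage * PA_PER_MMHG);
    // Prints -260 mm Hg, i.e. 260 mm Hg suction/vacuum
    return 0;
}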
Example 2.3 The rate of temperature change in the atmosphere with change in elevation is called its lapse rate. The motion of a parcel of air depends on the density of the parcel relative to the density of the surrounding (ambient) air. However, as the parcel ascends through the atmosphere, the air pressure decreases, the parcel expands, and its temperature decreases at a rate known as the dry adiabatic lapse rate. A firm wants to burn a large quantity of refuse. It is estimated that the temperature of the smoke plume at 10 m above the ground will be 11°C greater than that of the ambient air. For the following conditions, determine what will happen to the smoke. (a) At the standard atmospheric lapse rate, β = −0.00651°C per metre and t0 = 20°C. (b) At an inverted lapse rate, β = 0.00365°C per metre.
By combining Eqs. (2.2.7) and (2.2.14), the pressure variation in an atmosphere with a constant lapse rate is obtained. The relation between pressure and temperature for a mass of gas expanding without heat transfer (the isentropic relation, Sec. 6.1) is

  T/T1 = (p/p0)^((k−1)/k)

in which T1 is the initial smoke absolute temperature and p0 the initial absolute pressure; k is the specific heat ratio, 1.4 for air and other diatomic gases. Eliminating p/p0 between the last two equations relates the smoke temperature to elevation. Since the gas will rise until its temperature is equal to the ambient temperature, the two temperature-elevation relations may be solved for y. Introducing the abbreviation a for the resulting combination of constants: for β = −0.00651°C per metre and R = 287 m·N/(kg·K), a = 2.002 and y = 3201 m. For the atmospheric temperature inversion, β = 0.00365°C per metre, a = −0.2721 and y = 809.2 m.
2.4 MANOMETERS • Manometers are devices that employ liquid columns for determining differences in pressure. • Figure 2.6a: the most elementary manometer – piezometer • It measures the pressure in a liquid when it is above zero gage • Glass tube is mounted vertically so that it is connected to the space within the container • Liquid rises in the tube until equilibrium is reached • The pressure is then given by the vertical distance h from the meniscus (liquid surface) to the point where the pressure is to be measured, expressed in units of length of the liquid in the container. • Piezometer would not work for negative gage pressures, because air would flow into the container through the tube
Figure 2.6b: for small negative or positive gage pressures in a liquid. • With this arrangement the meniscus may come to rest below A, as shown. Since the pressure at the meniscus is zero gage and since pressure decreases with elevation,

  hA = −h    (units of length of H2O)

• Figure 2.6c: for greater negative or positive gage pressures (a second liquid of greater relative density is employed). • It must be immiscible in the first fluid, which may now be a gas. If the relative density of the fluid at A is S1 (based on water) and the relative density of the manometer liquid is S2, the equation for the pressure at A is

  hA = h2 S2 − h1 S1

in which hA is the unknown pressure at A, expressed in length units of water, and h1, h2 are in length units.
A general procedure for working all manometer problems: • Start at one end (or any meniscus if the circuit is continuous) and write the pressure there in an appropriate unit (say pascals) or with an appropriate symbol if it is unknown. • Add to this the change in pressure, in the same unit, from one meniscus to the next (plus if the next meniscus is lower, minus if higher). (For pascals this is the product of the difference in elevation in metres and the unit gravity force of the fluid in newtons per cubic metre.) • Continue until the other end of the gage (or the starting meniscus) is reached and equate the expression to the pressure at that point, known or unknown. • The expression will contain one unknown for a simple manometer or will give a difference in pressures for the differential manometer. In equation form: the starting pressure, plus the sum of the γ·Δh terms around the circuit, equals the ending pressure.
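This step-through bookkeeping translates directly into code. The sketch below (the type and function names are ours) walks a circuit of legs, each with a unit gravity force and a signed elevation drop, and returns the pressure at the far end; the numerical values are invented for the demo, not taken from any figure:

#include <cstdio>
#include <vector>

// One leg of the manometer circuit: the fluid's unit gravity force (N/m^3)
// and the drop in elevation to the next meniscus (positive if the next
// meniscus is lower, negative if higher).
struct Leg {
    double gamma;   // N/m^3
    double drop;    // m
};

// Start at a known pressure and add gamma * drop for each leg
// (the general manometer procedure in the text).
double endPressure(double startPa, const std::vector<Leg>& legs)
{
    double p = startPa;
    for (const Leg& leg : legs) p += leg.gamma * leg.drop;
    return p;
}

int main()
{
    const double gw = 9806.0;   // water, N/m^3
    // Hypothetical differential manometer: from A, down 0.25 m in water,
    // up 0.15 m in oil (S = 0.85), up 0.50 m in water to B.
    std::vector<Leg> legs = {{gw, 0.25}, {0.85 * gw, -0.15}, {gw, -0.50}};
    // With startPa = 0 the result is pB - pA, so pA - pB is its negative.
    std::printf("pA - pB = %.1f Pa\n", -endPressure(0.0, legs));
    return 0;
}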
A differential manometer (Fig. 2.7) determines the difference in pressures at two points A and B when the actual pressure at any point in the system cannot be determined • Application of the procedure outlined above to Fig. 2.7a produces • For Fig. 2.7b: • If the pressures at A and B are expressed in length of the water column, the above results can be written, for Fig. 2.7a, • For Fig 2.7b:
Example 2.4 In Fig. 2.7a the liquids at A and B are water and the manometer liquid is oil. S = 0.80; h1 = 300 mm; h2 = 200 mm; and h3 = 600 mm. (a) Determine pA − pB, in pascals. (b) If pB = 50 kPa and the barometer reading is 730 mm Hg, find the pressure at A, in metres of water absolute.
Micromanometers • For determining very small differences in pressure or determining large pressure differences precisely – several types of manometers • One type very accurately measures the differences in elevation of two menisci of a manometer. • By means of small telescopes with horizontal cross hairs mounted along the tubes on a rack which is raised and lowered by a pinion and slow motion screw so that the cross hairs can be set accurately, the difference in elevation of menisci (the gage difference) can be read with verniers.
Fig. 2.8: with two gage liquids, immiscible in each other and in the fluid to be measured, a large gage difference R can be produced for a small pressure difference. • The heavier gage liquid fills the lower U tube up to 0-0; then the lighter gage liquid is added to both sides, filling the larger reservoirs up to 1-1. • The gas or liquid in the system fills the space above 1-1. When the pressure at C is slightly greater than at D, the menisci move as indicated in Fig. 2.8. • The volume of liquid displaced in each reservoir equals the displacement in the U tube. • The manometer equation is

  pC − pD = R[γ3 − γ2(1 − a/A) − γ1(a/A)]    (2.4.1)

in which γ1, γ2, and γ3 are the unit gravity forces of the system fluid and the two gage liquids, a is the cross-sectional area of the U tube, and A the cross-sectional area of a reservoir.
Example 2.5 In the micromanometer of Fig. 2.8 the pressure difference is wanted, in pascals, when air is in the system, S2 = 1.0, S3 = 1.10, a/A = 0.01, R = 5 mm, t = 20°C, and the barometer reads 760 mm Hg. The term γ1(a/A) may be neglected. Substituting into Eq. (2.4.1) gives

  pC − pD = 0.005[1.10 × 9806 − 1.0 × 9806 × (1 − 0.01)] = 0.005 × 9806 × 0.11 = 5.39 Pa
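A quick numerical check of Eq. (2.4.1) (our own function name; γwater = 9806 N/m³ assumed):

#include <cstdio>

// Micromanometer equation (2.4.1): dp = R*(g3 - g2*(1 - a/A) - g1*(a/A))
double microManometerDp(double R, double g1, double g2, double g3,
                        double areaRatio /* a/A */)
{
    return R * (g3 - g2 * (1.0 - areaRatio) - g1 * areaRatio);
}

int main()
{
    const double gw = 9806.0;                  // water, N/m^3
    double dp = microManometerDp(0.005,        // R = 5 mm
                                 0.0,          // gamma1 (air) neglected
                                 1.0 * gw,     // S2 = 1.0
                                 1.10 * gw,    // S3 = 1.10
                                 0.01);        // a/A
    std::printf("pC - pD = %.2f Pa\n", dp);    // about 5.39 Pa
    return 0;
}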
Figure 2.9 Inclined manometer • The inclined manometer: frequently used for measuring small differences in gas pressures. • Adjusted to read zero, by moving the inclined scale, when A and B are open. Since the inclined tube requires a greater displacement of the meniscus for given pressure difference than a vertical tube, it affords greater accuracy in reading the scale. • Surface tension causes a capillary rise in small tubes. If a U tube is used with a meniscus in each leg, the surface-tension effects cancel.
2.5 FORCES ON PLANE AREAS • In the preceding sections variations of pressure throughout a fluid have been considered. • The distributed forces resulting from the action of fluid on a finite area may be conveniently replaced by a resultant force, insofar as external reactions to the force system are concerned. • In this section the magnitude of the resultant force and its line of action (pressure center) are determined by integration, by formula, and by use of the concept of the pressure prism.
Horizontal Surfaces • A plane surface in a horizontal position in a fluid at rest is subjected to a constant pressure. • The magnitude of the force acting on one side of the surface is

  F = ∫ p dA = p ∫ dA = pA

• The elemental forces p dA acting on A are all parallel and in the same sense, so a scalar summation of all such elements yields the magnitude of the resultant force. Its direction is normal to the surface and toward the surface if p is positive. • Fig. 2.10: arbitrary xy axes are chosen to find the line of action of the resultant, i.e., the point in the area where the moment of the distributed force about any axis through the point is zero. • Then, since the moment of the resultant must equal the moment of the distributed force system about any axis, say the y axis,

  pA x′ = ∫A x p dA        x′ = (1/A) ∫A x dA

in which x′ is the distance from the y axis to the resultant; hence the line of action passes through the centroid of the area.
Figure 2.10 Notation for determining the line of action of a force
Inclined Surfaces • Fig. 2.11: a plane surface is indicated by its trace A′B′; it is inclined θ° from the horizontal. x axis: the intersection of the plane of the area and the free surface. y axis: taken in the plane of the area, with origin O in the free surface. The xy plane portrays the arbitrary inclined area. The magnitude, direction, and line of action of the resultant force due to the liquid, acting on one side of the area, are sought. • For an element δA at depth h = y sin θ,

  δF = p δA = γh δA = γ y sin θ δA    (2.5.1)

• Since all such elemental forces are parallel, the integral over the area yields the magnitude of the force F acting on one side of the area:

  F = γ sin θ ∫ y dA = γ sin θ ȳA = γh̄A = pG A    (2.5.2)

• The magnitude of the force exerted on one side of a plane area submerged in a liquid is therefore the product of the area and the pressure at its centroid. • The presence of a free surface is unnecessary.
Figure 2.11 Notation for force of liquid on one side of a plane inclined area.
Center of Pressure • Fig. 2.11: the line of action of the resultant force has its piercing point in the surface at a point called the pressure center, with coordinates (xp, yp). The center of pressure of an inclined surface is not at the centroid. To find the pressure center, the moments of the resultant, xpF and ypF, are equated to the moments of the distributed forces about the y axis and x axis, respectively:

  xpF = ∫A x p dA    (2.5.5)        ypF = ∫A y p dA    (2.5.6)

These integrals may be evaluated conveniently through graphical integration; for simple areas they may be transformed into general formulas by substituting p = γ y sin θ and F = γ sin θ ȳA.
• By the parallel-axis theorem, Ixy = Īxy + x̄ȳA, so that

  xp = x̄ + Īxy/(ȳA)    (2.5.8)

When either of the centroidal axes is an axis of symmetry for the surface, Īxy vanishes and the pressure center lies on x = x̄. Since Īxy may be either positive or negative, the pressure center may lie on either side of the line x = x̄. • To determine yp by formula, with Eqs. (2.5.2) and (2.5.6),

  yp = Ix/(ȳA)    (2.5.9)

• By the parallel-axis theorem for moments of inertia, Ix = IG + ȳ²A, in which IG is the second moment of the area about its horizontal centroidal axis. If Ix is eliminated from Eq. (2.5.9),

  yp = ȳ + IG/(ȳA)    (2.5.11)

Since IG/(ȳA) is always positive, the pressure center is always below the centroid of the surface.
Example 2.6 The triangular gate CDE (Fig. 2.12) is hinged along CD and is opened by a normal force P applied at E. It holds oil, of relative density 0.80, above it and is open to the atmosphere on its lower side. Neglecting the weight of the gate, find (a) the magnitude of the force exerted on the gate, by integration and by Eq. (2.5.2); (b) the location of the pressure center; (c) the force P needed to open the gate. Figure 2.12 Triangular gate
(a) By integration with reference to Fig. 2.12,

  F = γ sin θ ∫ y x dy

When y = 4, x = 0, and when y = 6.5, x = 3, with x varying linearly with y; thus x = ay + b, in which the coordinates have been substituted to find x in terms of y. Solving for a and b gives x = (6/5)(y − 4) for the upper portion. Similarly, from y = 6.5, x = 3 and y = 9, x = 0, the lower portion gives x = (6/5)(9 − y). Hence,

  F = γ sin θ [ ∫ from 4 to 6.5 of y (6/5)(y − 4) dy + ∫ from 6.5 to 9 of y (6/5)(9 − y) dy ] = γ sin θ (21.25 + 27.50) = 48.75 γ sin θ

By Eq. (2.5.2),

  F = pG A = γ sin θ ȳA = γ sin θ (6.5)(7.5) = 48.75 γ sin θ

in agreement.
(b) With the axes as shown, ȳ = 6.5 m and A = 7.5 m². In Eq. (2.5.8), Īxy is zero owing to symmetry about the centroidal axis parallel to the x axis; hence xp = x̄. In Eq. (2.5.11), with IG = 7.8125 m⁴ for this triangle,

  yp − ȳ = IG/(ȳA) = 7.8125/48.75 = 0.16 m

i.e., the pressure center is 0.16 m below the centroid, measured in the plane of the area. (c) When moments about CD are taken and the action of the oil is replaced by the resultant, P follows from P × (distance from CD to E) = F × (distance from CD to the pressure center).
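The centroid and pressure-center arithmetic can be checked numerically. The sketch below (our own code; the vertex coordinates are those quoted in part (a), and IG is computed from the general vertex formula for a triangle) reproduces yp − ȳ = 0.16 m:

#include <cstdio>

int main()
{
    // Triangle CDE in the plane of the gate: vertices at (x, y) =
    // (0, 4), (3, 6.5), (0, 9), with y measured down the slope from O.
    const double y1 = 4.0, y2 = 6.5, y3 = 9.0;
    const double area = 0.5 * 3.0 * 5.0;          // 7.5 m^2
    const double ybar = (y1 + y2 + y3) / 3.0;     // 6.5 m

    // Second moment about the horizontal centroidal axis, from the vertex
    // formula I_G = A*(y1^2+y2^2+y3^2 - y1*y2 - y1*y3 - y2*y3)/18.
    double IG = area * (y1*y1 + y2*y2 + y3*y3
                        - y1*y2 - y1*y3 - y2*y3) / 18.0;   // 7.8125 m^4

    double shift = IG / (ybar * area);            // Eq. (2.5.11)
    std::printf("IG = %.4f m^4, yp - ybar = %.2f m\n", IG, shift);
    // Prints yp - ybar = 0.16, matching the example.
    return 0;
}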
The Pressure Prism • Pressure prism: another approach to determining the resultant force and its line of action on a plane surface. It is a prismatic volume with its base the given surface area and with altitude at any point of the base given by p = γh, where h is the vertical distance to the free surface (Fig. 2.13). (An imaginary free surface may be used to define h if no real free surface exists. In the figure, γh may be laid off to any convenient scale such that its trace is OM.) • The force acting on an elemental area δA is

  δF = p δA = γh δA = δϑ    (2.5.12)

which is an element of volume of the pressure prism. After integrating, F = ϑ, the volume of the pressure prism. • From Eqs. (2.5.5) and (2.5.6),

  xp = (1/ϑ) ∫ x dϑ        yp = (1/ϑ) ∫ y dϑ    (2.5.13)

i.e., xp and yp are the distances to the centroid of the pressure prism: the line of action of the resultant passes through the centroid of the pressure prism.
Effects of Atmospheric Pressure on Forces on Plane Areas • In the discussion of pressure forces the pressure datum was not mentioned: in p = γh the datum taken was gage pressure zero, i.e., the local atmospheric pressure. • When the opposite side of the surface is open to the atmosphere, a force is exerted on it by the atmosphere equal to the product of the atmospheric pressure p0 and the area, or p0A, based on absolute zero as datum. On the liquid side the force is

  ∫ (p0 + γh) dA = p0A + γ ∫ h dA

• The effect p0A of the atmosphere acts equally on both sides and in no way contributes to the resultant force or its location. • So long as the same pressure datum is selected for all sides of a free body, the resultant and its moment can be determined by constructing a free surface at pressure zero on this datum and using the above methods.
Example 2.8 An application of pressure forces on plane areas is given in the design of a gravity dam. The maximum and minimum compressive stresses in the base of the dam are computed from the forces which act on the dam. Figure 2.15 shows a cross section through a concrete dam where the unit gravity force of concrete has been taken as 2.5γ and γ is the unit gravity force of water. A 1 m section of dam is considered as a free body; the forces are due to the concrete, the water, the foundation pressure, and the hydrostatic uplift. Determining the amount of hydrostatic uplift is beyond the scope of this treatment, but it will be assumed to be one-half the hydrostatic head at the upstream edge, decreasing linearly to zero at the downstream edge of the dam. Enough friction or shear stress must be developed at the base of the dam to balance the thrust due to the water, that is, Rx = 5000γ. The resultant upward force on the base equals the gravity force of the dam less the hydrostatic uplift, Ry = 6750γ + 2625γ − 1750γ = 7625γ N. The position of Ry is such that the free body is in equilibrium. For moments around O,
It is customary to assume that the foundation pressure varies linearly over the base of the dam, i.e., that the pressure prism is a trapezoid with a volume equal to Ry; thus in which σmax, σmin are the maximum and minimum compressive stresses in pascals. The centroid of the pressure prism is at the point where x = 44.8 m. By taking moments about 0 to express the position of the centroid in terms of σmax and σmin, Simplifying gives When the resultant falls within the middle third of the base of the dam, σmin will always be a compressive stress. Owing to the poor tensile properties of concrete, good design requires the resultant to fall within the middle third of the base. |
The third sorting algorithm is bubble sort. The basic idea of this algorithm is that we bring the smaller elements upward in the array step by step and, as a result, the larger elements move downward. If we think of the array as vertical, the smaller elements rise to the top while the larger elements sink to the bottom, resembling bubbles rising in water. Due to this bubbling nature, the algorithm is called bubble sort: the lighter bubbles (smaller numbers) rise to the top. This is for sorting in ascending order; we can reverse the comparisons for descending order.
The steps in the bubble sort can be described as follows:
• Exchange neighboring items until the largest item reaches the end of the array
• Repeat the above step for the rest of the array
In this sort algorithm, we do not search the array for the smallest number as in the other two algorithms, nor do we insert an element by shifting the other elements. Instead, we do pair-wise swapping: we compare the first pair of elements and swap them if they are out of order, then move on to the next pair. By repeating this process, the larger numbers move toward the end of the array and the smaller elements toward the start.
Let’s try to understand this phenomenon with the help of figures showing how bubble sort works. Consider an array that has the elements 19, 5, 12 and 7.
First of all, we compare the first pair, i.e. 19 and 5. As 5 is less than 19, we swap these elements. Now 5 is in place and we take the next pair. This pair is 19, 12 (and not 12, 7). In this pair 12 is less than 19, so we swap 12 and 19. After this, the next pair is 19, 7. Here 7 is less than 19, so we swap them. Now 7 is in place relative to 19, but it is not yet at its final position; the element 19, however, is at its final position.

Now we repeat the pair-wise swapping on the array from index 0 to 2, as the value at index 3 is in position. We compare 5 and 12. As 5 is less than 12, it is in place (that is, before 12) and we do not need to swap them. We take the next pair, 12 and 7. In this pair, 7 is less than 12, so we swap these elements. Now 7 is in position with respect to the pair 12 and 7. Thus we have sorted the array up to index 2, as 12 is now at its final position.

Note that in bubble sort we are not using additional storage (an extra array); rather, we exchange the elements within the same array. Thus bubble sort is also an in-place algorithm. Now, as indices 2 and 3 hold their final values, we do the swap process up to index 1. Here the first pair is 5 and 7, and no swapping is needed, as 5 is less than 7 and is in place (i.e. before 7). Thus 7 is also at its final position and the array is sorted.
Following is the code of bubble sort algorithm in C++.
void bubbleSort(int *arr, int N)
{
    int i, temp, bound = N - 1;
    int swapped = 1;

    while (swapped > 0)            // keep passing until a pass makes no swap
    {
        swapped = 0;
        for (i = 0; i < bound; i++)
        {
            if (arr[i] > arr[i + 1])
            {
                temp = arr[i];               // swap the out-of-order neighbours
                arr[i] = arr[i + 1];
                arr[i + 1] = temp;
                swapped = i;                 // remember where the last swap happened
            }
        }
        bound = swapped;           // everything beyond the last swap is sorted
    }
}
In line with the previous two sort methods, the bubbleSort method also takes an array and the size of the array as arguments. The variables i, temp, bound and swapped are declared in the function. We initialize the variable bound with N−1; this is the upper limit for the swapping process. The outer while loop executes as long as swapping is being done. Inside it, we first reset swapped to zero; if it is not changed by the for loop, the array is in sorted form and we exit the loop. The inner for loop runs from zero to bound−1. In this loop, the if statement compares the elements at indices i and i+1. If the element at index i (on the left side in the array) is greater than the element at index i+1 (on the right side), we swap them and assign i to the swapped variable, recording that, and where, a swap occurred. After the for loop, we copy swapped into bound, since everything beyond the position of the last swap is already in order. If swapped is non-zero, the while loop continues; thus the loop runs only as long as swapping is taking place.
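For completeness, a small driver (our own test harness, not part of the original lecture code) shows the function in action on the example array:

#include <cstdio>

// bubbleSort(int *arr, int N) as defined above is assumed to be
// in the same source file.
void bubbleSort(int *arr, int N);

int main()
{
    int data[] = {19, 5, 12, 7};
    int n = sizeof(data) / sizeof(data[0]);

    bubbleSort(data, n);

    for (int k = 0; k < n; k++)
        printf("%d ", data[k]);    // prints: 5 7 12 19
    printf("\n");
    return 0;
}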
Now let’s see the time complexity of bubble sort algorithm.
Bubble Sort Analysis
In this algorithm, we see that there is an outer loop and an inner loop in the code. The outer loop executes up to N times, as it may have to pass through the whole array. The inner loop executes N−1 times on the first pass, then N−2 times, and so on; its range decreases with each iteration of the outer loop. In the first pass, we do the pair-wise comparisons across all N elements, and as a result the largest element arrives at the last position. The next pass works on the remaining N−1 elements. Thus the part of the array in which swapping is done shrinks after each pass, until only one element remains and no swapping is required. Summing these iterations, 1 + 2 + 3 + … + (N−1) + N = N(N+1)/2 = O(N²). As N increases, the N² term dominates and the linear term becomes negligible in comparison, so the time complexity of this algorithm grows proportionally to N².
Tests of general relativity
Tests of general relativity serve to establish observational evidence for the theory of general relativity. The first three tests, proposed by Einstein in 1915, concerned the "anomalous" precession of the perihelion of Mercury, the bending of light in gravitational fields, and the gravitational redshift. The precession of Mercury was already known; experiments showing light bending in accordance with the predictions of general relativity were performed in 1919, with increasingly precise measurements made in subsequent tests; and scientists claimed to have measured the gravitational redshift in 1925, although measurements sensitive enough to actually confirm the theory were not made until 1954. A more accurate program starting in 1959 tested general relativity in the weak gravitational field limit, severely limiting possible deviations from the theory.
In the 1970s, scientists began to make additional tests, starting with Irwin Shapiro's measurement of the relativistic time delay in radar signal travel time near the sun. Beginning in 1974, Hulse, Taylor and others studied the behaviour of binary pulsars experiencing much stronger gravitational fields than those found in the Solar System. Both in the weak field limit (as in the Solar System) and with the stronger fields present in systems of binary pulsars the predictions of general relativity have been extremely well tested.
In February 2016, the Advanced LIGO team announced that they had directly detected gravitational waves from a black hole merger. This discovery, along with additional detections announced in June 2016 and June 2017, tested general relativity in the very strong field limit, observing to date no deviations from theory.
Classical tests

Einstein proposed three tests of general relativity, subsequently called the "classical tests":
- the perihelion precession of Mercury's orbit
- the deflection of light by the Sun
- the gravitational redshift of light
In the letter to the London Times on November 28, 1919, he described the theory of relativity and thanked his English colleagues for their understanding and testing of his work. He also mentioned three classical tests with comments:
- "The chief attraction of the theory lies in its logical completeness. If a single one of the conclusions drawn from it proves wrong, it must be given up; to modify it without destroying the whole structure seems to be impossible."
Perihelion precession of Mercury
Under Newtonian physics, a two-body system consisting of a lone object orbiting a spherical mass would trace out an ellipse with the center of mass of the system at a focus. The point of closest approach, called the periapsis (or, because the central body in the Solar System is the Sun, perihelion), is fixed. A number of effects in the Solar System cause the perihelia of planets to precess (rotate) around the Sun. The principal cause is the presence of other planets which perturb one another's orbit. Another (much less significant) effect is solar oblateness.
Mercury deviates from the precession predicted from these Newtonian effects. This anomalous rate of precession of the perihelion of Mercury's orbit was first recognized in 1859 as a problem in celestial mechanics, by Urbain Le Verrier. His reanalysis of available timed observations of transits of Mercury over the Sun's disk from 1697 to 1848 showed that the actual rate of the precession disagreed with that predicted from Newton's theory by 38″ (arcseconds) per tropical century (later re-estimated at 43″ by Simon Newcomb in 1882). A number of ad hoc and ultimately unsuccessful solutions were proposed, but they tended to introduce more problems.
In general relativity, this remaining precession, or change of orientation of the orbital ellipse within its orbital plane, is explained by gravitation being mediated by the curvature of spacetime. Einstein showed that general relativity agrees closely with the observed amount of perihelion shift. This was a powerful factor motivating the adoption of general relativity.
Although earlier measurements of planetary orbits were made using conventional telescopes, more accurate measurements are now made with radar. The total observed precession of Mercury is 574.10″±0.65 per century relative to the inertial ICRF. This precession can be attributed to the following causes:
| Amount (arcsec/Julian century) | Cause |
|---|---|
| 532.3035 | Gravitational tugs of other solar bodies |
| 0.0286 | Oblateness of the Sun (quadrupole moment) |
| 42.9799 | Gravitoelectric effects (Schwarzschild-like), a general relativity effect |
The 42.98″ correction is a 3/2 multiple of the classical prediction, with PPN parameters β = γ = 1. Thus the effect can be fully explained by general relativity. More recent calculations based on more precise measurements have not materially changed the situation.
In general relativity the perihelion shift σ, expressed in radians per revolution, is approximately given by

  σ = 24π³a² / (T²c²(1 − e²))

where a is the semi-major axis, T is the orbital period, c is the speed of light, and e is the orbital eccentricity.
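Plugging in Mercury's orbital elements (the numerical values below are standard figures we supply for illustration) recovers the famous 43″ per century:

#include <cmath>
#include <cstdio>

int main()
{
    const double pi = 3.14159265358979;
    const double c  = 2.99792458e8;   // speed of light, m/s
    const double a  = 5.7909e10;      // Mercury semi-major axis, m
    const double T  = 7.6005e6;       // orbital period, s (87.97 days)
    const double e  = 0.2056;         // orbital eccentricity

    // Perihelion shift per revolution, in radians
    double sigma = 24.0 * std::pow(pi, 3) * a * a
                 / (T * T * c * c * (1.0 - e * e));

    // Convert to arcseconds per Julian century
    double revPerCentury = 36525.0 * 86400.0 / T;
    double arcsec = sigma * revPerCentury * (180.0 / pi) * 3600.0;
    std::printf("sigma = %.3e rad/rev, %.2f arcsec/century\n", sigma, arcsec);
    // Prints roughly 5.0e-7 rad/rev and 43 arcsec/century.
    return 0;
}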
The other planets experience perihelion shifts as well, but, since they are farther from the Sun and have longer periods, their shifts are lower, and could not be observed accurately until long after Mercury's. For example, the perihelion shift of Earth's orbit due to general relativity is 3.84″ per century, and Venus's is 8.62″. Both values have now been measured, with results in good agreement with theory. The periapsis shift has also now been measured for binary pulsar systems, with PSR 1913+16 amounting to 4.2° per year. These observations are consistent with general relativity. It is also possible to measure periapsis shift in binary star systems which do not contain ultra-dense stars, but it is more difficult to model the classical effects precisely; for example, the alignment of the stars' spin to their orbital plane needs to be known and is hard to measure directly. A few systems, such as DI Herculis, have been measured as test cases for general relativity.
Deflection of light by the Sun
Henry Cavendish in 1784 (in an unpublished manuscript) and Johann Georg von Soldner in 1801 (published in 1804) had pointed out that Newtonian gravity predicts that starlight will bend around a massive object. The same value as Soldner's was calculated by Einstein in 1911 based on the equivalence principle alone. However, Einstein noted in 1915 in the process of completing general relativity, that his 1911 result (and thus Soldner's 1801 result) is only half of the correct value. Einstein became the first to calculate the correct value for light bending: 1.75 arcseconds for light that grazes the Sun.
The first observation of light deflection was performed by noting the change in position of stars as they passed near the Sun on the celestial sphere. The observations were performed by Arthur Eddington and his collaborators (see Eddington experiment) during the total solar eclipse of May 29, 1919, when the stars near the Sun (at that time in the constellation Taurus) could be observed. Observations were made simultaneously in the cities of Sobral, Ceará, Brazil and in São Tomé and Príncipe on the west coast of Africa. The result was considered spectacular news and made the front page of most major newspapers. It made Einstein and his theory of general relativity world-famous. When asked by his assistant what his reaction would have been if general relativity had not been confirmed by Eddington and Dyson in 1919, Einstein famously made the quip: "Then I would feel sorry for the dear Lord. The theory is correct anyway."
The early accuracy, however, was poor. The results were argued by some to have been plagued by systematic error and possibly confirmation bias, although modern reanalysis of the dataset suggests that Eddington's analysis was accurate. The measurement was repeated by a team from the Lick Observatory in the 1922 eclipse, with results that agreed with the 1919 results and has been repeated several times since, most notably in 1953 by Yerkes Observatory astronomers and in 1973 by a team from the University of Texas. Considerable uncertainty remained in these measurements for almost fifty years, until observations started being made at radio frequencies. While the Sun is too close by for an Einstein ring to lie outside its corona, such a ring formed by the deflection of light from distant galaxies has been observed for a nearby star.
Gravitational redshift of light
Einstein predicted the gravitational redshift of light from the equivalence principle in 1907, and it was predicted that this effect might be measured in the spectral lines of a white dwarf star, which has a very high gravitational field. Initial attempts to measure the gravitational redshift of the spectrum of Sirius B were made by Walter Sydney Adams in 1925, but the result was criticized as unusable due to contamination from the light of the (much brighter) primary star, Sirius. The first accurate measurement of the gravitational redshift of a white dwarf was made by Popper in 1954, who measured a 21 km/sec gravitational redshift of 40 Eridani B.
The redshift of Sirius B was finally measured by Greenstein et al. in 1971, obtaining the value for the gravitational redshift of 89±19 km/sec, with more accurate measurements by the Hubble Space Telescope showing 80.4±4.8 km/sec.
Tests of special relativity
The general theory of relativity incorporates Einstein's special theory of relativity, and hence tests of special relativity also test aspects of general relativity. As a consequence of the equivalence principle, Lorentz invariance holds locally in non-rotating, freely falling reference frames. Experiments related to Lorentz invariance in special relativity (that is, when gravitational effects can be neglected) are described in tests of special relativity.
The modern era of testing general relativity was ushered in largely at the impetus of Dicke and Schiff who laid out a framework for testing general relativity. They emphasized the importance not only of the classical tests, but of null experiments, testing for effects which in principle could occur in a theory of gravitation, but do not occur in general relativity. Other important theoretical developments included the inception of alternative theories to general relativity, in particular, scalar-tensor theories such as the Brans–Dicke theory; the parameterized post-Newtonian formalism in which deviations from general relativity can be quantified; and the framework of the equivalence principle.
Experimentally, new developments in space exploration, electronics and condensed matter physics have made additional precise experiments possible, such as the Pound–Rebka experiment, laser interferometry and lunar rangefinding.
Post-Newtonian tests of gravity
Early tests of general relativity were hampered by the lack of viable competitors to the theory: it was not clear what sorts of tests would distinguish it from its competitors. General relativity was the only known relativistic theory of gravity compatible with special relativity and observations; moreover, it is an extremely simple and elegant theory. This changed with the introduction of Brans–Dicke theory in 1960. This theory is arguably simpler, as it contains no dimensionful constants, and is compatible with a version of Mach's principle and Dirac's large numbers hypothesis, two philosophical ideas which have been influential in the history of relativity. Ultimately, this led to the development of the parametrized post-Newtonian formalism by Nordtvedt and Will, which parametrizes, in terms of ten adjustable parameters, all the possible departures from Newton's law of universal gravitation to first order in v/c, where v is the velocity of an object and c is the speed of light. This approximation allows the possible deviations from general relativity, for slowly moving objects in weak gravitational fields, to be systematically analyzed. Much effort has been put into constraining the post-Newtonian parameters, and deviations from general relativity are at present severely limited.
The experiments testing gravitational lensing and light time delay limits the same post-Newtonian parameter, the so-called Eddington parameter γ, which is a straightforward parametrization of the amount of deflection of light by a gravitational source. It is equal to one for general relativity, and takes different values in other theories (such as Brans–Dicke theory). It is the best constrained of the ten post-Newtonian parameters, but there are other experiments designed to constrain the others. Precise observations of the perihelion shift of Mercury constrain other parameters, as do tests of the strong equivalence principle.
One of the goals of the BepiColombo mission to Mercury is to test the general relativity theory by measuring the parameters gamma and beta of the parametrized post-Newtonian formalism with high accuracy. The experiment is part of the Mercury Orbiter Radio science Experiment (MORE). The spacecraft was launched in October 2018 and is expected to enter orbit around Mercury in December 2025.
One of the most important tests is gravitational lensing. It has been observed in distant astrophysical sources, but these are poorly controlled and it is uncertain how they constrain general relativity. The most precise tests are analogous to Eddington's 1919 experiment: they measure the deflection of radiation from a distant source by the Sun. The sources that can be most precisely analyzed are distant radio sources. In particular, some quasars are very strong radio sources. The directional resolution of any telescope is in principle limited by diffraction; for radio telescopes this is also the practical limit. An important improvement in obtaining high positional accuracies (from milli-arcsecond to micro-arcsecond) was obtained by combining radio telescopes across Earth. The technique is called very long baseline interferometry (VLBI). With this technique radio observations couple the phase information of the radio signal observed in telescopes separated over large distances. Recently, these telescopes have measured the deflection of radio waves by the Sun to extremely high precision, confirming the amount of deflection predicted by general relativity to the 0.03% level. At this level of precision systematic effects have to be carefully taken into account to determine the precise location of the telescopes on Earth. Some important effects are Earth's nutation, rotation, atmospheric refraction, tectonic displacement and tidal waves. Another important effect is refraction of the radio waves by the solar corona. Fortunately, this effect has a characteristic spectrum, whereas gravitational distortion is independent of wavelength. Thus, careful analysis, using measurements at several frequencies, can subtract this source of error.
The entire sky is slightly distorted due to the gravitational deflection of light caused by the Sun (the anti-Sun direction excepted). This effect has been observed by the European Space Agency's astrometric satellite Hipparcos. It measured the positions of about 10⁵ stars. During the full mission about 3.5×10⁶ relative positions were determined, each to an accuracy of typically 3 milliarcseconds (the accuracy for an 8–9 magnitude star). Since the gravitational deflection perpendicular to the Earth–Sun direction is already 4.07 milliarcseconds, corrections are needed for practically all stars. Without systematic effects, the error in an individual observation of 3 milliarcseconds could be reduced by the square root of the number of positions, leading to a precision of 0.0016 milliarcseconds. Systematic effects, however, limit the accuracy of the determination to 0.3% (Froeschlé, 1997).
Launched in 2013, the Gaia spacecraft will conduct a census of one billion stars in the Milky Way and measure their positions to an accuracy of 24 microarcseconds. Thus it will also provide stringent new tests of the gravitational deflection of light caused by the Sun, as predicted by general relativity.
Light travel time delay testing
Irwin I. Shapiro proposed another test, beyond the classical tests, which could be performed within the Solar System. It is sometimes called the fourth "classical" test of general relativity. He predicted a relativistic time delay (Shapiro delay) in the round-trip travel time for radar signals reflecting off other planets. The mere curvature of the path of a photon passing near the Sun is too small to have an observable delaying effect (when the round-trip time is compared to the time taken if the photon had followed a straight path), but general relativity predicts a time delay that becomes progressively larger when the photon passes nearer to the Sun, due to the time dilation in the gravitational potential of the Sun. Observing radar reflections from Mercury and Venus just before and after they are eclipsed by the Sun agrees with general relativity theory at the 5% level. More recently, the Cassini probe has undertaken a similar experiment which gave agreement with general relativity at the 0.002% level. However, subsequent detailed studies revealed that the measured value of the PPN parameter gamma is affected by the gravitomagnetic effect caused by the orbital motion of the Sun around the barycenter of the solar system. The gravitomagnetic effect in the Cassini radioscience experiment was implicitly postulated by B. Bertotti as having a pure general relativistic origin, but its theoretical value has never been tested in the experiment, which effectively makes the experimental uncertainty in the measured value of gamma larger (by a factor of 10) than the 0.002% claimed by B. Bertotti and co-authors in Nature.
The equivalence principle
The equivalence principle, in its simplest form, asserts that the trajectories of falling bodies in a gravitational field should be independent of their mass and internal structure, provided they are small enough not to disturb the environment or be affected by tidal forces. This idea has been tested to extremely high precision by Eötvös torsion balance experiments, which look for a differential acceleration between two test masses. Constraints on this, and on the existence of a composition-dependent fifth force or gravitational Yukawa interaction are very strong, and are discussed under fifth force and weak equivalence principle.
A version of the equivalence principle, called the strong equivalence principle, asserts that self-gravitating bodies, such as stars, planets or black holes (which are all held together by their gravitational attraction) should follow the same trajectories in a gravitational field, provided the same conditions are satisfied. This is called the Nordtvedt effect and is most precisely tested by the Lunar Laser Ranging Experiment. Since 1969, it has continuously measured the distance from several rangefinding stations on Earth to reflectors on the Moon to approximately centimeter accuracy. These measurements have provided a strong constraint on several of the other post-Newtonian parameters.
Another part of the strong equivalence principle is the requirement that Newton's gravitational constant be constant in time, and have the same value everywhere in the universe. There are many independent observations limiting the possible variation of Newton's gravitational constant, but one of the best comes from lunar rangefinding which suggests that the gravitational constant does not change by more than one part in 10^11 per year. The constancy of the other constants is discussed in the Einstein equivalence principle section of the equivalence principle article.
The first of the classical tests discussed above, the gravitational redshift, is a simple consequence of the Einstein equivalence principle and was predicted by Einstein in 1907. As such, it is not a test of general relativity in the same way as the post-Newtonian tests, because any theory of gravity obeying the equivalence principle should also incorporate the gravitational redshift. Nonetheless, confirming the existence of the effect was an important substantiation of relativistic gravity, since the absence of gravitational redshift would have strongly contradicted relativity. The first observation of the gravitational redshift was the measurement of the shift in the spectral lines from the white dwarf star Sirius B by Adams in 1925, discussed above, and follow-on measurements of other white dwarfs. Because of the difficulty of the astrophysical measurement, however, experimental verification using a known terrestrial source was preferable.
Experimental verification of gravitational redshift using terrestrial sources took several decades, because it is difficult to find clocks (to measure time dilation) or sources of electromagnetic radiation (to measure redshift) with a frequency that is known well enough that the effect can be accurately measured. It was confirmed experimentally for the first time in 1959 using measurements of the change in wavelength of gamma-ray photons generated with the Mössbauer effect, which generates radiation with a very narrow line width. The Pound–Rebka experiment measured the relative redshift of two sources situated at the top and bottom of Harvard University's Jefferson tower. The result was in excellent agreement with general relativity. This was one of the first precision experiments testing general relativity. The experiment was later improved to better than the 1% level by Pound and Snider.
The blueshift of a falling photon can be found by assuming it has an equivalent mass based on its frequency E = hf (where h is Planck's constant) along with E = mc², a result of special relativity. Such simple derivations ignore the fact that in general relativity the experiment compares clock rates, rather than energies. In other words, the "higher energy" of the photon after it falls can be equivalently ascribed to the slower running of clocks deeper in the gravitational potential well. To fully validate general relativity, it is important to also show that the rate of arrival of the photons is greater than the rate at which they are emitted. A very accurate gravitational redshift experiment, which deals with this issue, was performed in 1976, where a hydrogen maser clock on a rocket was launched to a height of 10,000 km, and its rate compared with an identical clock on the ground. It tested the gravitational redshift to 0.007%.
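For a sense of scale, the fractional frequency shift over a small height difference h near the Earth's surface is approximately gh/c². A minimal sketch of this arithmetic follows; the 22.5 m height difference is an assumed value used only for illustration, not a figure taken from the text above.

```python
g = 9.81        # m/s^2, surface gravity
c = 2.998e8     # m/s, speed of light
h = 22.5        # m, assumed height difference between emitter and absorber

# Weak-field approximation: fractional frequency shift ~ g*h / c^2
fractional_shift = g * h / c**2
print(f"fractional frequency shift ~ {fractional_shift:.2e}")   # ~2.5e-15
```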
Although the Global Positioning System (GPS) is not designed as a test of fundamental physics, it must account for the gravitational redshift in its timing system, and physicists have analyzed timing data from the GPS to confirm other tests. When the first satellite was launched, some engineers resisted the prediction that a noticeable gravitational time dilation would occur, so the first satellite was launched without the clock adjustment that was later built into subsequent satellites. It showed the predicted shift of 38 microseconds per day. This rate of discrepancy is sufficient to substantially impair function of GPS within hours if not accounted for. An excellent account of the role played by general relativity in the design of GPS can be found in Ashby 2003.
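The quoted 38 microseconds per day can be estimated from the weak-field clock-rate formulas: a GPS clock runs fast by roughly (GM/c²)(1/R_E − 1/r) per unit time because it sits higher in the Earth's gravitational potential, and slow by v²/2c² because of its orbital speed. A rough sketch, assuming a circular orbit of radius about 26,560 km (an assumed nominal value):

```python
import math

GM = 3.986e14          # m^3/s^2, Earth's gravitational parameter
c = 2.998e8            # m/s
R_earth = 6.371e6      # m, mean Earth radius
r_gps = 2.656e7        # m, assumed GPS orbital radius (~20,200 km altitude)
day = 86400            # s

# Gravitational term: the orbiting clock runs faster than one on the ground
grav = GM / c**2 * (1 / R_earth - 1 / r_gps) * day

# Special-relativistic term: the moving clock runs slower
v = math.sqrt(GM / r_gps)
vel = -(v**2) / (2 * c**2) * day

print(f"gravitational: +{grav*1e6:.1f} us/day")        # ~ +45.7
print(f"velocity:      {vel*1e6:.1f} us/day")           # ~ -7.2
print(f"net:           +{(grav+vel)*1e6:.1f} us/day")   # ~ +38.5
```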
Other precision tests of general relativity, not discussed here, are the Gravity Probe A satellite, launched in 1976, which showed gravity and velocity affect the ability to synchronize the rates of clocks orbiting a central mass and the Hafele–Keating experiment, which used atomic clocks in circumnavigating aircraft to test general relativity and special relativity together.
Tests of the Lense–Thirring precession, consisting of small secular precessions of the orbit of a test particle in motion around a central rotating mass, for example, a planet or a star, have been performed with the LAGEOS satellites, but many aspects of them remain controversial. The same effect may have been detected in the data of the Mars Global Surveyor (MGS) spacecraft, a former probe in orbit around Mars, although this test has also raised a debate. First attempts to detect the Sun's Lense–Thirring effect on the perihelia of the inner planets have also recently been reported. Frame dragging would cause the orbital plane of stars orbiting near a supermassive black hole to precess about the black hole spin axis. This effect should be detectable within the next few years via astrometric monitoring of stars at the center of the Milky Way galaxy. By comparing the rate of orbital precession of two stars on different orbits, it is possible in principle to test the no-hair theorems of general relativity.
The Gravity Probe B satellite, launched in 2004 and operated until 2005, detected frame-dragging and the geodetic effect. The experiment used four quartz spheres the size of ping pong balls coated with a superconductor. Data analysis continued through 2011 due to high noise levels and difficulties in modelling the noise accurately so that a useful signal could be found. Principal investigators at Stanford University reported on May 4, 2011, that they had accurately measured the frame-dragging effect relative to the distant star IM Pegasi, and the calculations proved to be in line with the prediction of Einstein's theory. The results, published in Physical Review Letters, measured the geodetic effect with an error of about 0.2 percent and the frame-dragging effect (caused by Earth's rotation) at 37 milliarcseconds with an error of about 19 percent. Investigator Francis Everitt explained that a milliarcsecond "is the width of a human hair seen at the distance of 10 miles".
In January 2012, the LARES satellite was launched on a Vega rocket to measure the Lense–Thirring effect with an accuracy of about 1%, according to its proponents. This evaluation of the actual accuracy obtainable is a subject of debate.
Tests of the gravitational potential at small distances
It is possible to test whether gravity continues to follow the inverse-square law at very small distances. Tests so far have focused on a divergence from GR in the form of a Yukawa potential V(r) = −(GMm/r)[1 + α exp(−r/λ)], but no evidence for a potential of this kind has been found. The Yukawa potential with |α| = 1 has been ruled out down to λ = 5.6×10^−5 m.
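As a concrete illustration of what such experiments constrain, the sketch below evaluates the fractional deviation of a Yukawa-modified potential from the pure Newtonian one at a given separation; the strength α and range λ are free parameters that the experiments bound, and the specific numbers used are illustrative assumptions only.

```python
import math

def yukawa_deviation(r, alpha, lam):
    """Fractional deviation of V(r) = -(GMm/r) * (1 + alpha*exp(-r/lam))
    from the pure Newtonian potential, i.e. alpha * exp(-r/lam)."""
    return alpha * math.exp(-r / lam)

# Example: strength alpha = 1, range lambda = 56 micrometres
lam = 5.6e-5   # m
for r in (1e-5, 5.6e-5, 1e-3):
    print(f"r = {r:.1e} m : deviation = {yukawa_deviation(r, 1.0, lam):.3g}")
```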
Strong field tests
The very strong gravitational fields that are present close to black holes, especially those supermassive black holes which are thought to power active galactic nuclei and the more active quasars, belong to a field of intense active research. Observations of these quasars and active galactic nuclei are difficult, and interpretation of the observations is heavily dependent upon astrophysical models other than general relativity or competing fundamental theories of gravitation, but they are qualitatively consistent with the black hole concept as modeled in general relativity.
Pulsars are rapidly rotating neutron stars which emit regular radio pulses as they rotate. As such they act as clocks which allow very precise monitoring of their orbital motions. Observations of pulsars in orbit around other stars have all demonstrated substantial periapsis precessions that cannot be accounted for classically but can be accounted for by using general relativity. For example, the Hulse–Taylor binary pulsar PSR B1913+16 (a pair of neutron stars in which one is detected as a pulsar) has an observed precession of over 4° of arc per year (periastron shift per orbit only about 10^−6). This precession has been used to compute the masses of the components.
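The quoted precession of a few degrees per year follows from the leading-order relativistic periastron advance, dω/dt = 3 (2π/P_b)^(5/3) (T_⊙M)^(2/3) / (1 − e²), where T_⊙ = GM_⊙/c³ ≈ 4.925 μs and M is the total mass in solar masses. A sketch using approximate published parameters for PSR B1913+16 (orbital period ≈ 7.75 h, eccentricity ≈ 0.617, total mass ≈ 2.83 M_⊙; all assumed here for illustration):

```python
import math

T_sun = 4.925490947e-6    # s, GM_sun / c^3
P_b = 7.75 * 3600         # s, assumed orbital period (~7.75 h)
e = 0.617                 # assumed orbital eccentricity
M_total = 2.83            # assumed total mass in solar masses

# Leading-order general-relativistic periastron advance (rad/s)
omega_dot = 3 * (2 * math.pi / P_b)**(5/3) * (T_sun * M_total)**(2/3) / (1 - e**2)

deg_per_year = math.degrees(omega_dot) * 86400 * 365.25
print(f"periastron advance ~ {deg_per_year:.2f} deg/yr")   # ~4.2 deg/yr
```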
Similarly to the way in which atoms and molecules emit electromagnetic radiation, a gravitating mass that is in quadrupole type or higher order vibration, or is asymmetric and in rotation, can emit gravitational waves. These gravitational waves are predicted to travel at the speed of light. For example, planets orbiting the Sun constantly lose energy via gravitational radiation, but this effect is so small that it is unlikely it will be observed in the near future (Earth radiates about 200 watts (see gravitational waves) of gravitational radiation).
The radiation of gravitational waves has been inferred from the Hulse–Taylor binary (and other binary pulsars). Precise timing of the pulses shows that the stars orbit only approximately according to Kepler's Laws: over time they gradually spiral towards each other, demonstrating an energy loss in close agreement with the predicted energy radiated by gravitational waves. For their discovery of the first binary pulsar and measuring its orbital decay due to gravitational-wave emission, Hulse and Taylor won the 1993 Nobel Prize in Physics.
A "double pulsar" discovered in 2003, PSR J0737-3039, has a periastron precession of 16.90° per year; unlike the Hulse–Taylor binary, both neutron stars are detected as pulsars, allowing precision timing of both members of the system. Due to this, the tight orbit, the fact that the system is almost edge-on, and the very low transverse velocity of the system as seen from Earth, J0737−3039 provides by far the best system for strong-field tests of general relativity known so far. Several distinct relativistic effects are observed, including orbital decay as in the Hulse–Taylor system. After observing the system for two and a half years, four independent tests of general relativity were possible, the most precise (the Shapiro delay) confirming the general relativity prediction within 0.05% (nevertheless the periastron shift per orbit is only about 0.0013% of a circle and thus it is not a higher-order relativity test).
In 2013, an international team of astronomers reported new data from observing a pulsar-white dwarf system PSR J0348+0432, in which they have been able to measure a change in the orbital period of 8 millionths of a second per year, and confirmed GR predictions in a regime of extreme gravitational fields never probed before; but there are still some competing theories that would agree with these data.
Direct detection of gravitational waves
A number of gravitational-wave detectors have been built with the intent of directly detecting the gravitational waves emanating from such astronomical events as the merger of two neutron stars or black holes. In February 2016, the Advanced LIGO team announced that they had directly detected gravitational waves from a stellar binary black hole merger, with additional detections announced in June 2016, June 2017, and August 2017.
General relativity predicts gravitational waves, as does any theory of gravitation in which changes in the gravitational field propagate at a finite speed. Since gravitational waves can be directly detected, it is possible to use them to learn about the Universe. This is gravitational-wave astronomy. Gravitational-wave astronomy can test general relativity by verifying that the observed waves are of the form predicted (for example, that they only have two transverse polarizations), and by checking that black holes are the objects described by solutions of the Einstein field equations. Gravitational-wave astronomy can also test the Maxwell–Einstein field equations. This version of the field equations predicts that spinning magnetars (i.e., neutron stars with an extremely strong magnetic dipole field) should emit gravitational waves. However, quantum considerations suggest otherwise and seemingly point to a specific version of the Einstein field equations. Thus, gravitational-wave astronomy could be used not only to confirm the existing theory but also to decide which version of the Einstein field equations is correct.
"These amazing observations are the confirmation of a lot of theoretical work, including Einstein's general theory of relativity, which predicts gravitational waves," said Stephen Hawking.
Direct observation of a black hole
The Galaxy M87 was the subject of observation by the Event Horizon Telescope (EHT) in 2017; the 10 April 2019 issue of Astrophysical Journal Letters (vol. 875, No. 1) was dedicated to the EHT results, publishing six open-access papers. The event horizon of the black hole at the center of M87 was directly imaged at the wavelength of radio waves by the EHT; the image was revealed in a press conference on 10 April 2019, the first image of a black hole's event horizon.
Gravitational redshift in light from the S2 star orbiting the supermassive black hole Sagittarius A* in the center of the Milky Way has been measured with the Very Large Telescope using the GRAVITY, NACO and SINFONI instruments.
Strong equivalence principle
The strong equivalence principle of general relativity requires universality of free fall to apply even to bodies with strong self-gravity. Direct tests of this principle using Solar System bodies are limited by the weak self-gravity of the bodies, and tests using pulsar–white-dwarf binaries have been limited by the weak gravitational pull of the Milky Way. With the discovery of a triple star system called PSR J0337+1715, located about 4,200 light-years from Earth, the strong equivalence principle can be tested with a high accuracy. This system contains a neutron star in a 1.6-day orbit with a white dwarf star, and the pair in a 327-day orbit with another white dwarf further away. This system permits a test that compares how the gravitational pull of the outer white dwarf affects the pulsar, which has strong self-gravity, and the inner white dwarf. The result shows that the accelerations of the pulsar and its nearby white-dwarf companion differ fractionally by no more than 2.6×10^−6.
Black holes can also be probed through the radiation emitted by matter around them. This technique is based on the idea that photon trajectories are modified in the presence of a gravitating body. A very common astrophysical system in the universe is a black hole surrounded by an accretion disk. The radiation from the general neighborhood, including the accretion disk, is affected by the nature of the central black hole. Assuming Einstein's theory is correct, astrophysical black holes are described by the Kerr metric (a consequence of the no-hair theorems). Thus, by analyzing the radiation from such systems, it is possible to test Einstein's theory.
Most of the radiation from these black hole - accretion disk systems (e.g., black hole binaries and active galactic nuclei) arrives in the form of X-rays. When modeled, the radiation is decomposed into several components. Tests of Einstein's theory are possible with the thermal spectrum (only for black hole binaries) and the reflection spectrum (for both black hole binaries and active galactic nuclei). The former is not expected to provide strong constraints, while the latter is much more promising. In both cases, systematic uncertainties might make such tests more challenging.
Tests of general relativity on the largest scales are not nearly so stringent as Solar System tests. The earliest such test was the prediction and discovery of the expansion of the universe. In 1922, Alexander Friedmann found that the Einstein equations have non-stationary solutions (even in the presence of the cosmological constant). In 1927, Georges Lemaître showed that static solutions of the Einstein equations, which are possible in the presence of the cosmological constant, are unstable, and therefore the static universe envisioned by Einstein could not exist (it must either expand or contract). Lemaître made an explicit prediction that the universe should expand. He also derived a redshift-distance relationship, which is now known as the Hubble Law. Later, in 1931, Einstein himself agreed with the results of Friedmann and Lemaître. The expansion of the universe discovered by Edwin Hubble in 1929 was then considered by many (and continues to be considered by some now) as a direct confirmation of general relativity. In the 1930s, largely due to the work of E. A. Milne, it was realised that the linear relationship between redshift and distance derives from the general assumption of uniformity and isotropy rather than specifically from general relativity. However the prediction of a non-static universe was non-trivial, indeed dramatic, and primarily motivated by general relativity.
Some other cosmological tests include searches for primordial gravitational waves generated during cosmic inflation, which may be detected in the cosmic microwave background polarization or by a proposed space-based gravitational-wave interferometer called the Big Bang Observer. Other tests at high redshift are constraints on other theories of gravity, and the variation of the gravitational constant since Big Bang nucleosynthesis (it varied by no more than 40% since then).
In August 2017, astronomers using the European Southern Observatory's Very Large Telescope (VLT), among other instruments, released the findings of tests that positively demonstrated gravitational effects predicted by Albert Einstein. One of these tests observed the orbits of the stars circling Sagittarius A*, a black hole about 4 million times as massive as the Sun. Einstein's theory suggests that large objects bend the space around them, causing other objects to diverge from the straight lines they would otherwise follow. Although previous studies had validated Einstein's theory, this was the first time it had been tested on such a gigantic object. The findings were published in The Astrophysical Journal.
Astronomers using the Hubble Space Telescope and the Very Large Telescope have made precise tests of general relativity on galactic scales. The nearby galaxy ESO 325-G004 acts as a strong gravitational lens, distorting light from a distant galaxy behind it to create an Einstein ring around its centre. By comparing the mass of ESO 325-G004 (from measurements of the motions of stars inside this galaxy) with the curvature of space around it, astronomers found that gravity behaves as predicted by general relativity on these astronomical length-scales.
- Castelvecchi, Davide; Witze, Alexandra (February 11, 2016). "Einstein's gravitational waves found at last". Nature News. doi:10.1038/nature.2016.19361. Retrieved 2016-02-11.
- Conover, Emily, LIGO snags another set of gravitational waves, Science News, June 1, 2017. Retrieved 8 June 2017.
- Einstein, Albert (1916). "The Foundation of the General Theory of Relativity" (PDF). Annalen der Physik. 49 (7): 769–822. Bibcode:1916AnP...354..769E. doi:10.1002/andp.19163540702. Retrieved 2006-09-03.
- Einstein, Albert (1916). "The Foundation of the General Theory of Relativity" (English HTML, contains link to German PDF). Annalen der Physik. 49 (7): 769–822. Bibcode:1916AnP...354..769E. doi:10.1002/andp.19163540702.
- Einstein, Albert (1919). "What Is The Theory Of Relativity?" (PDF). German History in Documents and Images. Retrieved 7 June 2013.
- U. Le Verrier (1859), (in French), "Lettre de M. Le Verrier à M. Faye sur la théorie de Mercure et sur le mouvement du périhélie de cette planète", Comptes rendus hebdomadaires des séances de l'Académie des sciences (Paris), vol. 49 (1859), pp.379–383.
- Clemence, G. M. (1947). "The Relativity Effect in Planetary Motions". Reviews of Modern Physics. 19 (4): 361–364. Bibcode:1947RvMP...19..361C. doi:10.1103/RevModPhys.19.361.
- Park, Ryan S., et al. "Precession of Mercury's Perihelion from Ranging to the MESSENGER Spacecraft." The Astronomical Journal 153.3 (2017): 121.
- http://www.tat.physik.uni-tuebingen.de/~kokkotas/Teaching/Experimental_Gravity_files/Hajime_PPN.pdf - Perihelion shift of Mercury, page 11
- Dediu, Adrian-Horia; Magdalena, Luis; Martín-Vide, Carlos (2015). Theory and Practice of Natural Computing: Fourth International Conference, TPNC 2015, Mieres, Spain, December 15-16, 2015. Proceedings (illustrated ed.). Springer. p. 141. ISBN 978-3-319-26841-5. Extract of page 141
- Biswas, Abhijit; Mani, Krishnan R. S. (2008). "Relativistic perihelion precession of orbits of Venus and the Earth". Central European Journal of Physics. v1. 6 (3): 754–758. arXiv:0802.0176. Bibcode:2008CEJPh...6..754B. doi:10.2478/s11534-008-0081-6.
- Matzner, Richard Alfred (2001). Dictionary of geophysics, astrophysics, and astronomy. CRC Press. p. 356. Bibcode:2001dgaa.book.....M. ISBN 978-0-8493-2891-6.
- Weisberg, J.M.; Taylor, J.H. (July 2005). "The Relativistic Binary Pulsar B1913+16: Thirty Years of Observations and Analysis". Written at San Francisco. In F.A. Rasio; I.H. Stairs (eds.). Binary Radio Pulsars. ASP Conference Series. 328. Aspen, Colorado, USA: Astronomical Society of the Pacific. p. 25. arXiv:astro-ph/0407149. Bibcode:2005ASPC..328...25W.
- Naeye, Robert, "Stellar Mystery Solved, Einstein Safe", Sky and Telescope, September 16, 2009. See also MIT Press Release, September 17, 2009. Accessed 8 June 2017.
- Soldner, J. G. V. (1804). Berliner Astronomisches Jahrbuch: 161–172.
- Soares, Domingos S. L. (2009). "Newtonian gravitational deflection of light revisited". arXiv:physics/0508030.
- Will, C.M. (December 2014). "The Confrontation between General Relativity and Experiment". Living Rev. Relativ. 17 (1): 4. arXiv:gr-qc/0510072. Bibcode:2006LRR.....9....3W. doi:10.12942/lrr-2014-4. PMC 5255900. PMID 28179848. (ArXiv version here: arxiv.org/abs/1403.7377.)
- Ned Wright: Deflection and Delay of Light
- Dyson, F. W.; Eddington, A. S.; Davidson C. (1920). "A determination of the deflection of light by the Sun's gravitational field, from observations made at the total eclipse of 29 May 1919". Philosophical Transactions of the Royal Society. 220A (571–581): 291–333. Bibcode:1920RSPTA.220..291D. doi:10.1098/rsta.1920.0009.
- Stanley, Matthew (2003). "'An Expedition to Heal the Wounds of War': The 1919 Eclipse and Eddington as Quaker Adventurer". Isis. 94 (1): 57–89. Bibcode:2003Isis...94...57S. doi:10.1086/376099. PMID 12725104.
- Rosenthal-Schneider, Ilse: Reality and Scientific Truth. Detroit: Wayne State University Press, 1980. p 74. See also Calaprice, Alice: The New Quotable Einstein. Princeton: Princeton University Press, 2005. p 227.
- Harry Collins and Trevor Pinch, The Golem, ISBN 0-521-47736-0
- Daniel Kennefick (2007). "Not Only Because of Theory: Dyson, Eddington and the Competing Myths of the 1919 Eclipse Expedition". Studies in History and Philosophy of Science Part A. 44: 89–101. arXiv:0709.0685. Bibcode:2007arXiv0709.0685K. doi:10.1016/j.shpsa.2012.07.010.
- Ball, Philip (2007). "Arthur Eddington was innocent!". News@nature. doi:10.1038/news070903-20.
- D. Kennefick, "Testing relativity from the 1919 eclipse- a question of bias", Physics Today, March 2009, pp. 37–42.
- van Biesbroeck, G.: The relativity shift at the 1952 February 25 eclipse of the Sun., Astronomical Journal, vol. 58, page 87, 1953.
- Texas Mauritanian Eclipse Team: Gravitational deflection of-light: solar eclipse of 30 June 1973 I. Description of procedures and final results., Astronomical Journal, vol. 81, page 452, 1976.
- Titov, O.; Girdiuk, A. (2015). Z. Malkin & N. Capitaine (ed.). The deflection of light induced by the Sun's gravitational field and measured with geodetic VLBI. Proceedings of the Journées 2014 "Systèmes de référence spatio-temporels": Recent developments and prospects in ground-based and space astrometry. Pulkovo Observatory, St. Petersburg, Russia. pp. 75–78. arXiv:1502.07395. Bibcode:2015jsrs.conf...75T. ISBN 978-5-9651-0873-2.
- Drake, Nadia (7 June 2017). "Einstein's 'Impossible' Experiment Finally Performed". National Geographic. Retrieved 9 June 2017.
- Hetherington, N. S., "Sirius B and the gravitational redshift - an historical review", Quarterly Journal Royal Astronomical Society, vol. 21, Sept. 1980, p. 246-252. Accessed 6 April 2017.
- Holberg, J. B., "Sirius B and the Measurement of the Gravitational Redshift", Journal for the History of Astronomy, Vol. 41, 1, 2010, p. 41-64. Accessed 6 April 2017.
- Dicke, R. H. (March 6, 1959). "New Research on Old Gravitation: Are the observed physical constants independent of the position, epoch, and velocity of the laboratory?". Science. 129 (3349): 621–624. Bibcode:1959Sci...129..621D. doi:10.1126/science.129.3349.621. PMID 17735811.
- Dicke, R. H. (1962). "Mach's Principle and Equivalence". Evidence for gravitational theories: proceedings of course 20 of the International School of Physics "Enrico Fermi" ed C. Møller.
- Schiff, L. I. (April 1, 1960). "On Experimental Tests of the General Theory of Relativity". American Journal of Physics. 28 (4): 340–343. Bibcode:1960AmJPh..28..340S. doi:10.1119/1.1935800.
- Brans, C. H.; Dicke, R. H. (November 1, 1961). "Mach's Principle and a Relativistic Theory of Gravitation". Physical Review. 124 (3): 925–935. Bibcode:1961PhRv..124..925B. doi:10.1103/PhysRev.124.925.
- "Fact Sheet".
- Testing general relativity with the BepiColombo radio science experiment. (PDF) A. Milani, David Vokroulicky, Daniela Villani, Claudio Bonanno. Physical Review D 66(8); October 2002. doi:10.1103/PhysRevD.66.082001
- Testing General Relativity with the Radio Science Experiment of the BepiColombo mission to Mercury. Giulia Schettino, and Giacomo Tommei. Universe 2016, 2(3), 21; doi:10.3390/universe2030021.
- The Mercury Orbiter Radio Science Experiment (MORE) on board the ESA/JAXA BepiColombo MIssion to Mercury. SERRA, DANIELE; TOMMEI, GIACOMO; MILANI COMPARETTI, ANDREA. Università di Pisa, 2017.
- Fomalont, E.B.; Kopeikin S.M.; Lanyi, G.; Benson, J. (July 2009). "Progress in Measurements of the Gravitational Bending of Radio Waves Using the VLBA". Astrophysical Journal. 699 (2): 1395–1402. arXiv:0904.3992. Bibcode:2009ApJ...699.1395F. doi:10.1088/0004-637X/699/2/1395.
- esa. "Gaia overview".
- Shapiro, I. I. (December 28, 1964). "Fourth test of general relativity". Physical Review Letters. 13 (26): 789–791. Bibcode:1964PhRvL..13..789S. doi:10.1103/PhysRevLett.13.789.
- Shapiro, I. I.; Ash M. E.; Ingalls R. P.; Smith W. B.; Campbell D. B.; Dyce R. B.; Jurgens R. F. & Pettengill G. H. (May 3, 1971). "Fourth Test of General Relativity: New Radar Result". Physical Review Letters. 26 (18): 1132–1135. Bibcode:1971PhRvL..26.1132S. doi:10.1103/PhysRevLett.26.1132.
- Bertotti B.; Iess L.; Tortora P. (2003). "A test of general relativity using radio links with the Cassini spacecraft". Nature. 425 (6956): 374–376. Bibcode:2003Natur.425..374B. doi:10.1038/nature01997. PMID 14508481.
- Kopeikin S. M.; Polnarev A. G.; Schaefer G.; Vlasov I. Yu. (2007). "Gravimagnetic effect of the barycentric motion of the Sun and determination of the post-Newtonian parameter γ in the Cassini experiment". Physics Letters A. 367 (4–5): 276–280. arXiv:gr-qc/0604060. Bibcode:2007PhLA..367..276K. doi:10.1016/j.physleta.2007.03.036.
- Kopeikin S. M. (2009). "Post-Newtonian limitations on measurement of the PPN parameters caused by motion of gravitating bodies". Monthly Notices of the Royal Astronomical Society. 399 (3): 1539–1552. arXiv:0809.3433. Bibcode:2009MNRAS.399.1539K. doi:10.1111/j.1365-2966.2009.15387.x.
- Fomalont, E.B.; Kopeikin S.M. (November 2003). "The Measurement of the Light Deflection from Jupiter: Experimental Results". Astrophysical Journal. 598 (1): 704–711. arXiv:astro-ph/0302294. Bibcode:2003ApJ...598..704F. doi:10.1086/378785.
- Kopeikin, S.M.; Fomalont E.B. (October 2007). "Gravimagnetism, causality, and aberration of gravity in the gravitational light-ray deflection experiments". General Relativity and Gravitation. 39 (10): 1583–1624. arXiv:gr-qc/0510077. Bibcode:2007GReGr..39.1583K. doi:10.1007/s10714-007-0483-6.
- Fomalont, E.B.; Kopeikin, S. M.; Jones, D.; Honma, M.; Titov, O. (January 2010). "Recent VLBA/VERA/IVS tests of general relativity". Proceedings of the International Astronomical Union, IAU Symposium. 261 (S261): 291–295. arXiv:0912.3421. Bibcode:2010IAUS..261..291F. doi:10.1017/S1743921309990536.
- Nordtvedt, Jr., K. (May 25, 1968). "Equivalence Principle for Massive Bodies. II. Theory". Physical Review. 169 (5): 1017–1025. Bibcode:1968PhRv..169.1017N. doi:10.1103/PhysRev.169.1017.
- Nordtvedt, Jr., K. (June 25, 1968). "Testing Relativity with Laser Ranging to the Moon". Physical Review. 170 (5): 1186–1187. Bibcode:1968PhRv..170.1186N. doi:10.1103/PhysRev.170.1186.
- Williams, J. G.; Turyshev, Slava G.; Boggs, Dale H. (December 29, 2004). "Progress in Lunar Laser Ranging Tests of Relativistic Gravity". Physical Review Letters. 93 (5): 1017–1025. arXiv:gr-qc/0411113. Bibcode:2004PhRvL..93z1101W. doi:10.1103/PhysRevLett.93.261101.
- Uzan, J. P. (2003). "The fundamental constants and their variation: Observational status and theoretical motivations". Reviews of Modern Physics. 75 (5): 403–. arXiv:hep-ph/0205340. Bibcode:2003RvMP...75..403U. doi:10.1103/RevModPhys.75.403.
- Pound, R. V.; Rebka, Jr. G. A. (November 1, 1959). "Gravitational Red-Shift in Nuclear Resonance". Physical Review Letters. 3 (9): 439–441. Bibcode:1959PhRvL...3..439P. doi:10.1103/PhysRevLett.3.439.
- Pound, R. V.; Rebka Jr. G. A. (April 1, 1960). "Apparent weight of photons". Physical Review Letters. 4 (7): 337–341. Bibcode:1960PhRvL...4..337P. doi:10.1103/PhysRevLett.4.337.
- Pound, R. V.; Snider J. L. (November 2, 1964). "Effect of Gravity on Nuclear Resonance". Physical Review Letters. 13 (18): 539–540. Bibcode:1964PhRvL..13..539P. doi:10.1103/PhysRevLett.13.539.
- Vessot, R. F. C.; M. W. Levine; E. M. Mattison; E. L. Blomberg; T. E. Hoffman; G. U. Nystrom; B. F. Farrel; R. Decher; et al. (December 29, 1980). "Test of Relativistic Gravitation with a Space-Borne Hydrogen Maser". Physical Review Letters. 45 (26): 2081–2084. Bibcode:1980PhRvL..45.2081V. doi:10.1103/PhysRevLett.45.2081.
- Ashby, Neil (28 January 2003). "Relativity in the Global Positioning System". Living Reviews in Relativity. 6 (1): 1. Bibcode:2003LRR.....6....1A. doi:10.12942/lrr-2003-1. PMC 5253894.
- "Gravitational Physics with Optical Clocks in Space" (PDF). S. Schiller (PDF). Heinrich Heine Universität Düsseldorf. 2007. Retrieved 19 March 2015.
- Hafele, J. C.; Keating, R. E. (July 14, 1972). "Around-the-World Atomic Clocks: Predicted Relativistic Time Gains". Science. 177 (4044): 166–168. Bibcode:1972Sci...177..166H. doi:10.1126/science.177.4044.166. PMID 17779917.
- Hafele, J. C.; Keating, R. E. (July 14, 1972). "Around-the-World Atomic Clocks: Observed Relativistic Time Gains". Science. 177 (4044): 168–170. Bibcode:1972Sci...177..168H. doi:10.1126/science.177.4044.168. PMID 17779918.
- Ciufolini I. & Pavlis E.C. (2004). "A confirmation of the general relativistic prediction of the Lense–Thirring effect". Nature. 431 (7011): 958–960. Bibcode:2004Natur.431..958C. doi:10.1038/nature03007. PMID 15496915.
- Krogh K. (2007). "Comment on 'Evidence of the gravitomagnetic field of Mars'". Classical and Quantum Gravity. 24 (22): 5709–5715. arXiv:astro-ph/0701653. Bibcode:2007CQGra..24.5709K. doi:10.1088/0264-9381/24/22/N01.
- Merritt, D.; Alexander, T.; Mikkola, S.; Will, C. (2010). "Testing Properties of the Galactic Center Black Hole Using Stellar Orbits". Physical Review D. 81 (6): 062002. arXiv:0911.4718. Bibcode:2010PhRvD..81f2002M. doi:10.1103/PhysRevD.81.062002.
- Will, C. (2008). "Testing the General Relativistic "No-Hair" Theorems Using the Galactic Center Black Hole Sagittarius A*". Astrophysical Journal Letters. 674 (1): L25–L28. arXiv:0711.1677. Bibcode:2008ApJ...674L..25W. doi:10.1086/528847.
- Everitt; et al. (2011). "Gravity Probe B: Final Results of a Space Experiment to Test General Relativity". Physical Review Letters. 106 (22): 221101. arXiv:1105.3456. Bibcode:2011PhRvL.106v1101E. doi:10.1103/PhysRevLett.106.221101. PMID 21702590.
- Ker Than (2011-05-05). "Einstein Theories Confirmed by NASA Gravity Probe". News.nationalgeographic.com. Retrieved 2011-05-08.
- "Prepping satellite to test Albert Einstein".
- Ciufolini, I.; et al. (2009). "Towards a One Percent Measurement of Frame Dragging by Spin with Satellite Laser Ranging to LAGEOS, LAGEOS 2 and LARES and GRACE Gravity Models". Space Science Reviews. 148 (1–4): 71–104. Bibcode:2009SSRv..148...71C. doi:10.1007/s11214-009-9585-7.
- Ciufolini, I.; Paolozzi A.; Pavlis E. C.; Ries J. C.; Koenig R.; Matzner R. A.; Sindoni G. & Neumayer H. (2009). "Towards a One Percent Measurement of Frame Dragging by Spin with Satellite Laser Ranging to LAGEOS, LAGEOS 2 and LARES and GRACE Gravity Models". Space Science Reviews. 148 (1–4): 71–104. Bibcode:2009SSRv..148...71C. doi:10.1007/s11214-009-9585-7.
- Ciufolini, I.; Paolozzi A.; Pavlis E. C.; Ries J. C.; Koenig R.; Matzner R. A.; Sindoni G. & Neumayer H. (2010). "Gravitomagnetism and Its Measurement with Laser Ranging to the LAGEOS Satellites and GRACE Earth Gravity Models". General Relativity and John Archibald Wheeler. Astrophysics and Space Science Library. 367. SpringerLink. pp. 371–434. doi:10.1007/978-90-481-3735-0_17. ISBN 978-90-481-3734-3.
- Paolozzi, A.; Ciufolini I.; Vendittozzi C. (2011). "Engineering and scientific aspects of LARES satellite". Acta Astronautica. 69 (3–4): 127–134. Bibcode:2011AcAau..69..127P. doi:10.1016/j.actaastro.2011.03.005. ISSN 0094-5765.
- Kapner; Adelberger (8 January 2007). "Tests of the Gravitational Inverse-Square Law below the Dark-Energy Length Scale". Physical Review Letters. 98 (2): 021101. arXiv:hep-ph/0611184. Bibcode:2007PhRvL..98b1101K. doi:10.1103/PhysRevLett.98.021101. PMID 17358595.
- In general relativity, a perfectly spherical star (in vacuum) that expands or contracts while remaining perfectly spherical cannot emit any gravitational waves (similar to the lack of e/m radiation from a pulsating charge), as Birkhoff's theorem says that the geometry remains the same exterior to the star. More generally, a rotating system will only emit gravitational waves if it lacks the axial symmetry with respect to the axis of rotation.
- Stairs, Ingrid H. (2003). "Testing General Relativity with Pulsar Timing". Living Reviews in Relativity. 6 (1): 5. arXiv:astro-ph/0307536. Bibcode:2003LRR.....6....5S. doi:10.12942/lrr-2003-5. PMC 5253800. PMID 28163640.
- Weisberg, J. M.; Taylor, J. H.; Fowler, L. A. (October 1981). "Gravitational waves from an orbiting pulsar". Scientific American. 245 (4): 74–82. Bibcode:1981SciAm.245d..74W. doi:10.1038/scientificamerican1081-74.
- Weisberg, J. M.; Nice, D. J.; Taylor, J. H. (2010). "Timing Measurements of the Relativistic Binary Pulsar PSR B1913+16". Astrophysical Journal. 722 (2): 1030–1034. arXiv:1011.0718v1. Bibcode:2010ApJ...722.1030W. doi:10.1088/0004-637X/722/2/1030.
- "Press Release: The Nobel Prize in Physics 1993". Nobel Prize. 13 October 1993. Retrieved 6 May 2014.
- Kramer, M.; et al. (2006). "Tests of general relativity from timing the double pulsar". Science. 314 (5796): 97–102. arXiv:astro-ph/0609417. Bibcode:2006Sci...314...97K. doi:10.1126/science.1132305. PMID 16973838.
- Antoniadis, John; et al. (2013). "A Massive Pulsar in a Compact Relativistic Binary". Science. 340 (6131): 1233232. arXiv:1304.6875. Bibcode:2013Sci...340..448A. doi:10.1126/science.1233232. PMID 23620056.
- Cowen, Ron (25 April 2013). "Massive double star is latest test for Einstein's gravity theory". Nature News. doi:10.1038/nature.2013.12880. Retrieved 7 May 2013.
- B. P. Abbott; et al. (2016). "Observation of Gravitational Waves from a Binary Black Hole Merger". Physical Review Letters. 116 (6): 061102. arXiv:1602.03837. Bibcode:2016PhRvL.116f1102A. doi:10.1103/PhysRevLett.116.061102. PMID 26918975.
- "Gravitational waves detected 100 years after Einstein's prediction | NSF - National Science Foundation". www.nsf.gov. Retrieved 2016-02-11.
- Choi, Charles Q. "Gravitational Waves Detected from Neutron-Star Crashes: The Discovery Explained". Space.com. Purch. Retrieved 1 November 2017.
- Schutz, Bernard F. (1984). "Gravitational waves on the back of an envelope" (PDF). American Journal of Physics. 52 (5): 412–419. Bibcode:1984AmJPh..52..412S. doi:10.1119/1.13627.
- Gair, Jonathan; Vallisneri, Michele; Larson, Shane L.; Baker, John G. (2013). "Testing General Relativity with Low-Frequency, Space-Based Gravitational-Wave Detectors". Living Reviews in Relativity. 16 (1): 7. arXiv:1212.5575. Bibcode:2013LRR....16....7G. doi:10.12942/lrr-2013-7. PMC 5255528. PMID 28163624.
- Yunes, Nicolás; Siemens, Xavier (2013). "Gravitational-Wave Tests of General Relativity with Ground-Based Detectors and Pulsar-Timing Arrays". Living Reviews in Relativity. 16 (1): 9. arXiv:1304.3473. Bibcode:2013LRR....16....9Y. doi:10.12942/lrr-2013-9. PMC 5255575. PMID 28179845.
- Abbott, Benjamin P.; et al. (LIGO Scientific Collaboration and Virgo Collaboration) (2016). "Tests of general relativity with GW150914". Physical Review Letters. 116 (221101): 221101. arXiv:1602.03841. Bibcode:2016PhRvL.116v1101A. doi:10.1103/PhysRevLett.116.221101. PMID 27314708.
- Corsi, A.; Meszaros, P. (8 Nov 2018). "GRB Afterglow Plateaus and gravitational waves: multi-messenger signature of a millisecond Magnetar?". Astrophys. J. 702: 1171–1178. arXiv:0907.2290v2. doi:10.1088/0004-637X/702/2/1171.
- see Nemirovsky, J.; Cohen, E.; Kaminer, I. (30 Dec 2018). "Spin Spacetime Censorship". arXiv:1812.11450v2 [gr-qc]. page 11 and page 18
- The Event Horizon Telescope Collaboration (2019). "First M87 Event Horizon Telescope Results. I. The Shadow of the Supermassive Black Hole". The Astrophysical Journal. 875 (1): L1. arXiv:1906.11238. Bibcode:2019ApJ...875L...1E. doi:10.3847/2041-8213/ab0ec7.
- "Focus on the First Event Horizon Telescope Results". Shep Doeleman. The Astrophysical Journal. 10 April 2019. Retrieved 14 April 2019.
- "First Successful Test of Einstein's General Relativity Near Supermassive Black Hole". Hämmerle, Hannelore. Max Planck Institute for Extraterrestrial Physics. 26 July 2018. Retrieved 28 July 2018.
- GRAVITY Collaboration (26 July 2018). "Detection of the gravitational redshift in the orbit of the star S2 near the Galactic centre massive black hole". Astronomy & Astrophysics. 615 (L15): L15. arXiv:1807.09409. Bibcode:2018A&A...615L..15G. doi:10.1051/0004-6361/201833718.
- Anne M. Archibald; et al. (4 July 2018). "Universality of free fall from the orbital motion of a pulsar in a stellar triple system". Nature. 559 (7712): 73–76. arXiv:1807.02059. Bibcode:2018Natur.559...73A. doi:10.1038/s41586-018-0265-1.
- "Even Phenomenally Dense Neutron Stars Fall like a Feather - Einstein Gets It Right Again". Charles Blue, Paul Vosteen. NRAO. 4 July 2018. Retrieved 28 July 2018.
- Kong, Lingyao; Li, Zilong; Bambi, Cosimo (2014). "Constraints on the Spacetime Geometry around 10 Stellar-mass Black Hole Candidates from the Disk's Thermal Spectrum". The Astrophysical Journal. 797 (2): 78. arXiv:1405.1508. Bibcode:2014ApJ...797...78K. doi:10.1088/0004-637X/797/2/78. ISSN 0004-637X.
- Bambi, Cosimo (2017-04-06). "Testing black hole candidates with electromagnetic radiation". Reviews of Modern Physics. 89 (2): 025001. arXiv:1509.03884. Bibcode:2017RvMP...89b5001B. doi:10.1103/RevModPhys.89.025001.
- Krawczynski, Henric (2018-07-24). "Difficulties of quantitative tests of the Kerr-hypothesis with X-ray observations of mass accreting black holes". General Relativity and Gravitation. 50 (8): 100. arXiv:1806.10347. Bibcode:2018GReGr..50..100K. doi:10.1007/s10714-018-2419-8. ISSN 0001-7701.
- Peebles, P. J. E. (December 2004). "Probing General Relativity on the Scales of Cosmology". Testing general relativity on the scales of cosmology. General Relativity and Gravitation. pp. 106–117. arXiv:astro-ph/0410284. Bibcode:2005grg..conf..106P. doi:10.1142/9789812701688_0010. ISBN 978-981-256-424-5.
- Rudnicki, 1991, p. 28. The Hubble Law was viewed by many as an observational confirmation of General Relativity in the early years
- W.Pauli, 1958, pp. 219–220
- Kragh, 2003, p. 152
- Kragh, 2003, p. 153
- Rudnicki, 1991, p. 28
- Chandrasekhar, 1980, p. 37
- Hand, Eric (2009). "Cosmology: The test of inflation". Nature. 458 (7240): 820–824. doi:10.1038/458820a. PMID 19370005.
- Reyes, Reinabelle; et al. (2010). "Confirmation of general relativity on large scales from weak lensing and galaxy velocities". Nature. 464 (7286): 256–258. arXiv:1003.2185. Bibcode:2010Natur.464..256R. doi:10.1038/nature08857. PMID 20220843.
- Guzzo, L.; et al. (2008). "A test of the nature of cosmic acceleration using galaxy redshift distortions". Nature. 451 (7178): 541–544. arXiv:0802.1944. Bibcode:2008Natur.451..541G. doi:10.1038/nature06555. PMID 18235494.
- Patel, Neel V. (9 August 2017). "The Milky Way's Supermassive Black Hole is Proving Einstein Correct". Inverse via Yahoo.news. Retrieved 9 August 2017.
- Duffy, Sean (10 August 2017). "Black Hole Indicates Einstein Was Right: Gravity Bends Space". Courthouse News Service. Retrieved 10 August 2017.
- "Einstein proved right in another galaxy". Press Office. University of Portsmouth. 22 June 2018. Retrieved 28 July 2018.
- Thomas E. Collett; et al. (22 June 2018). "A precise extragalactic test of General Relativity". Science. 360 (6395): 1342–1346. arXiv:1806.08300. Bibcode:2018Sci...360.1342C. doi:10.1126/science.aao2469. PMID 29930135.
Other research papers
- Bertotti, B.; Iess, L.; Tortora, P. (2003). "A test of general relativity using radio links with the Cassini spacecraft". Nature. 425 (6956): 374–6. Bibcode:2003Natur.425..374B. doi:10.1038/nature01997. PMID 14508481.
- Kopeikin, S.; Polnarev, A.; Schaefer, G.; Vlasov, I. (2007). "Gravimagnetic effect of the barycentric motion of the Sun and determination of the post-Newtonian parameter γ in the Cassini experiment". Physics Letters A. 367 (4–5): 276–280. arXiv:gr-qc/0604060. Bibcode:2007PhLA..367..276K. doi:10.1016/j.physleta.2007.03.036.
- Brans, C.; Dicke, R. H. (1961). "Mach's principle and a relativistic theory of gravitation". Phys. Rev. 124 (3): 925–35. Bibcode:1961PhRv..124..925B. doi:10.1103/PhysRev.124.925.
- A. Einstein, "Über das Relativitätsprinzip und die aus demselben gezogene Folgerungen", Jahrbuch der Radioaktivitaet und Elektronik 4 (1907); translated "On the relativity principle and the conclusions drawn from it", in The collected papers of Albert Einstein. Vol. 2 : The Swiss years: writings, 1900–1909 (Princeton University Press, Princeton, New Jersey, 1989), Anna Beck translator. Einstein proposes the gravitational redshift of light in this paper, discussed online at The Genesis of General Relativity.
- A. Einstein, "Über den Einfluß der Schwerkraft auf die Ausbreitung des Lichtes", Annalen der Physik 35 (1911); translated "On the Influence of Gravitation on the Propagation of Light" in The collected papers of Albert Einstein. Vol. 3 : The Swiss years: writings, 1909–1911 (Princeton University Press, Princeton, New Jersey, 1994), Anna Beck translator, and in The Principle of Relativity, (Dover, 1924), pp 99–108, W. Perrett and G. B. Jeffery translators, ISBN 0-486-60081-5. The deflection of light by the sun is predicted from the principle of equivalence. Einstein's result is half the full value found using the general theory of relativity.
- Shapiro, S. S.; Davis, J. L.; Lebach, D. E.; Gregory J.S. (26 March 2004). "Measurement of the solar gravitational deflection of radio waves using geodetic very-long-baseline interferometry data, 1979–1999". Physical Review Letters. 92 (121101): 121101. Bibcode:2004PhRvL..92l1101S. doi:10.1103/PhysRevLett.92.121101. PMID 15089661.
- M. Froeschlé, F. Mignard and F. Arenou, "Determination of the PPN parameter γ with the Hipparcos data" Hipparcos Venice '97, ESA-SP-402 (1997).
- Will, Clifford M. (2006). "Was Einstein Right? Testing Relativity at the Centenary". Annalen der Physik. 15 (1–2): 19–33. arXiv:gr-qc/0504086. Bibcode:2006AnP...518...19W. doi:10.1002/andp.200510170.
- Rudnicki, Conrad (1991). "What are the Empirical Bases of the Hubble Law" (PDF). Apeiron (9–10): 27–36. Retrieved 2009-06-23.
- Chandrasekhar, S. (1980). "The Role of General Relativity in Astronomy: Retrospect and Prospect" (PDF). J. Astrophys. Astron. 1 (1): 33–45. Bibcode:1980JApA....1...33C. doi:10.1007/BF02727948. Retrieved 2009-06-23.
- Kragh, Helge; Smith, Robert W. (2003). "Who discovered the expanding universe". History of Science. 41 (2): 141–62. Bibcode:2003HisSc..41..141K. doi:10.1177/007327530304100202.
- S. M. Carroll, Spacetime and Geometry: an Introduction to General Relativity, Addison-Wesley, 2003. A graduate-level general relativity textbook.
- A. S. Eddington, Space, Time and Gravitation, Cambridge University Press, reprint of 1920 ed.
- A. Gefter, "Putting Einstein to the Test", Sky and Telescope July 2005, p. 38. A popular discussion of tests of general relativity.
- H. Ohanian and R. Ruffini, Gravitation and Spacetime, 2nd Edition Norton, New York, 1994, ISBN 0-393-96501-5. A general relativity textbook.
- Pauli, Wolfgang Ernst (1958). "Part IV. General Theory of Relativity". Theory of Relativity. Courier Dover Publications. ISBN 978-0-486-64152-2.
- C. M. Will, Theory and Experiment in Gravitational Physics, Cambridge University Press, Cambridge (1993). A standard technical reference.
- C. M. Will, Was Einstein Right?: Putting General Relativity to the Test, Basic Books (1993). This is a popular account of tests of general relativity.
Living Reviews papers
- N. Ashby, "Relativity in the Global Positioning System", Living Reviews in Relativity (2003).
- C. M. Will, The Confrontation between General Relativity and Experiment, Living Reviews in Relativity (2014). An online, technical review, covering much of the material in Theory and experiment in gravitational physics. It is less comprehensive but more up to date. (ArXiv version here: arxiv.org/abs/1403.7377 ) |
An action of a subject, in relation to an object, is expressed in two ways. These two ways of expressing the action of a subject are known as Voices.
1. Active Voice
2. Passive Voice
Examples for Active and Passive Voice
- I write a letter. (Active Voice)
- A letter is written by me. (Passive Voice)
The structure of a sentence changes when it is expressed in the Active Voice or the Passive Voice, but its meaning remains the same in both.
Difference between Active Voice and Passive Voice.
The meaning or main idea of a sentence does not change whether it is expressed in the Active Voice or the Passive Voice; only the structure of the sentence changes. Every sentence has a subject, a verb and an object. The subject is the agent that acts on the object of the sentence. In the above example, "I" is the subject of the sentence, acting on the object "letter" in the same sentence.
To understand the difference between the two Voices, focus on the subject and the object of a sentence. In the Active Voice, the subject acts upon the object. In the Passive Voice, the object is acted upon by the subject. The meaning remains the same in both Voices, but the sequence of the words (subject and object) changes. The sequence of subject and object in the Active Voice is reversed when the sentence is expressed in the Passive Voice. Read the following examples to better understand this difference.
| Active Voice | Passive Voice |
| --- | --- |
| I eat an apple. | An apple is eaten by me. |
| He bought a car. | A car was bought by him. |
The sequence of the subject and the object of the sentence is reversed while converting the sentence from Active Voice to Passive Voice.
The structures of the same sentence, for both Voices, are as follows:
Active Voice: Subject + Verb + Object
Passive Voice: Object + Verb + Subject
Apart from reversing the sequence of subject and object, the form of the verb also changes in both Voices. In the above examples, you can see the change in the main verb as well as the auxiliary verb of the same sentence in both Voices. The only form of the main verb used in the Passive Voice is the 3rd form of the verb, also called the Past Participle. Hence, the rule for changing the verb when converting a sentence from Active Voice into Passive Voice is to use only the 3rd form of the verb in the Passive Voice. The rules for changing the auxiliary verb vary by tense; to learn them, read the rules for Tenses given in the links on this page.
Active and Passive Voice Rule
Rule No. 1. As mentioned earlier, the structure of the sentence will be reversed in Passive Voice. The places of the Subject and the object will interchange. The subject will shift to the place of Object and the object will take the place of Subject in Passive Voice.
Active Voice: He buys a camera.
Passive Voice: A camera is bought by him.
Rule No. 2. Only the Past Participle or 3rd form of the verb (e.g. eaten) is used as the main verb in the Passive Voice for all tenses. No other form of the verb is used as the main verb. This can be seen in all the examples given on this page.
Rule No. 3. The word “by” will be used before the subject in the Passive voice.
Active Voice: She drinks water.
Passive Voice: Water is drunk by her.
Rule No. 4. Other words such as 'with' or 'to' may be used instead of the word 'by', depending upon the verb of the sentence. These words are used in very few cases; the word 'by' is used in most cases.
Active Voice: I know him.
Passive Voice: He is known to me.
Active Voice: Water fills a tub.
Passive Voice: A tub is filled with water.
Rule No. 5. The auxiliary verb will be changed in Passive Voice depending upon the tense of the sentence in its Active Voice. There are rules for changing the auxiliary for each tense which can also be studied on this website.
Rule No. 6. The subject may not always be mentioned in the Passive Voice. The passive form of some sentences can be written without mentioning the doer of the action, if the sentence still gives a clear idea of the meaning. Read the following examples.
Passive Voice: Women are not treated as equals.
Passive Voice: Sugar is sold in kilograms.
Note: The above rules, except Rule No. 5, are the basic rules for changing the Active Voice into the Passive Voice and apply to all types of sentences. Rule No. 5 concerns the usage of auxiliary verbs in the Passive Voice, which differs for each tense; the rules for each tense have also been explained on this website. A small illustrative sketch of Rules 1–4 follows.
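To make Rules 1–4 concrete, here is a minimal, purely illustrative Python sketch that converts simple present-tense sentences of the form "Subject verb object." into the Passive Voice. The tiny pronoun and past-participle tables are assumptions for the example only; a real converter would need full tense handling (Rule 5) and proper parsing.

```python
# Minimal active -> passive converter for simple present-tense sentences
# of the form "Subject verb object." (illustrative only).

OBJECT_PRONOUN = {"i": "me", "he": "him", "she": "her", "we": "us", "they": "them"}
PAST_PARTICIPLE = {"write": "written", "writes": "written",
                   "eat": "eaten", "eats": "eaten",
                   "buy": "bought", "buys": "bought",
                   "drink": "drunk", "drinks": "drunk"}

def to_passive(sentence: str) -> str:
    words = sentence.strip().rstrip(".").split()
    subject, verb, obj = words[0], words[1], " ".join(words[2:])
    participle = PAST_PARTICIPLE[verb.lower()]                 # Rule 2: past participle
    agent = OBJECT_PRONOUN.get(subject.lower(), subject.lower())
    auxiliary = "is"                                           # simple present, singular object
    # Rule 1: the object moves to the front; Rule 3: "by" precedes the old subject
    return f"{obj.capitalize()} {auxiliary} {participle} by {agent}."

print(to_passive("I write a letter."))    # A letter is written by me.
print(to_passive("She drinks water."))    # Water is drunk by her.
```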
About This Chapter
ISTEP+ Grade 7 Math: Geometric Constructions - Chapter Summary
Set your students on the path to success on the ISTEP+ Grade 7 Math exam by reviewing geometric constructions with this chapter. You can count on these lessons to help students understand:
- How geometric construction works
- Line segment bisection and the midpoint theorem
- Methods for constructing parallel and perpendicular lines
- Ways to construct angle bisectors, the median of a triangle and triangles
These video lessons include timelines that allow students to navigate directly to steps in the geometric construction process that they need to review. Important vocabulary words are emphasized throughout the chapter, further preparing students for success.
1. Geometric Constructions Using Lines and Angles
Watch this video lesson to learn about geometric construction and how you can copy line segments and angles without using any numbers. All you need is a straight edge and a compass.
2. Line Segment Bisection & Midpoint Theorem: Geometric Construction
Watch this video lesson and you will learn how to bisect a line segment without measurements using just a compass and a straight edge. You'll also learn about the midpoint theorem, which lets you calculate the midpoint if you have the measurements.
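One way to compute a midpoint when coordinates are available is simply to average them: the midpoint of a segment with endpoints (x1, y1) and (x2, y2) is ((x1 + x2)/2, (y1 + y2)/2). A short sketch in Python (the lesson itself may state the midpoint theorem in terms of segment lengths; the coordinate form below is given only as an illustration):

```python
def midpoint(p, q):
    """Midpoint of the segment joining points p and q, each given as (x, y)."""
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

print(midpoint((2, 3), (8, 11)))   # (5.0, 7.0)
```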
3. Constructing a Parallel Line Using a Point Not on the Given Line
Watch this video lesson, and you will learn how to draw parallel lines with just a compass and a straightedge. Also, learn why you would want to be able to do this in real life.
4. Constructing Perpendicular Lines in Geometry
In this video lesson, you will see how to use geometric construction to draw perpendicular lines. Learn why geometric construction is useful. Also find out the only two tools in addition to your pencil that you need to do this.
5. Constructing an Angle Bisector in Geometry
Watch this video lesson to learn a cool way to split an angle in half. You can use this method to split a pie slice in half where you can be sure each side is equal.
6. Constructing the Median of a Triangle
If you have one triangle and want to divide it, you can use a median line. Medians have special geometric properties that we'll learn about in this lesson.
7. Constructing Triangles: Types of Geometric Construction
When you're asked to construct a triangle, it's time to break out that compass and straight edge! In this lesson, find out how to construct triangles no matter what you're given.
Ordinary matter, including our bodies, ultimately consists of electrons and up and down quarks. A neutrino is an elementary particle categorized with these matter particles. Neutrinos are 9 to 10 orders of magnitude more abundant than the other matter particles in the universe. Their characteristics are closely related to the structure of the universe and the grand-unified theory of elementary particles. Neutrino properties are also expected to explain why our universe is made of matter. However, neutrinos penetrate matter almost freely and are rather difficult to detect. The RCNS reveals the properties of such elusive neutrinos using a huge, ultra-low-background underground detector, the Kamioka Liquid-scintillator Anti-Neutrino Detector (KamLAND). KamLAND detects anti-neutrinos from distant nuclear power plants and enables us to determine how electron-type neutrinos travel. Neutrinos currently serve as tools enabling the interiors of opaque objects to be observed. KamLAND has pioneered "Neutrino Geophysics" by observing neutrinos emanating from the Earth, and it is going to propel "Neutrino Astrophysics" by detecting the abundant low-energy neutrinos created at the center of the Sun.
A new project, KamLAND-Zen (Zero neutrino double-beta-decay search), is running in parallel. The project investigates whether the neutrino and the anti-neutrino are identical. This feature is allowed only for neutral particles such as neutrinos, and is believed to be connected with fundamental questions about the universe and elementary particles: why does matter dominate over anti-matter, and why does the neutrino have such a tiny mass? The frontier of ultra-low-radioactivity experiments opened at KamLAND also covers various other subjects, including dark matter, the number of generations of elementary particles, pre-supernova alarms, and so on.
The Research Center for Electron Photon Science (ELPH) is a nationwide joint-use research center in nuclear science. It was founded to carry out fundamental research and applications in nuclear science, as well as to educate students and young researchers.
The ELPH facility operates two accelerators: an electron linear accelerator and a 1.3 GeV synchrotron.
The linear accelerator provides an intense pulsed beam whose energy is typically 60 MeV, and has been used in a wide range of research fields, not only in nuclear physics but also in solid state physics, radiochemistry, biology, engineering, and so on.
A GeV tagged photon beam produced with an internal radiator from a 1.3 GeV electron beam stored in the synchrotron is utilized for hadron-physics experiments.
The research program of Nuclear Science Group has four main components: the quark nuclear physics program, the exotic nuclear physics program, the accelerator science program, and the radiochemistry program.
CYRIC was established in 1977 as an institution for carrying out research in various fields with the cyclotron and radioisotopes, and for training researchers of Tohoku University in the safe handling of radioisotopes and radiation. The first beam was extracted at the end of December 1977, and scheduled operation of the cyclotron for research studies started in July 1979. Since 2001, two cyclotrons have been in operation: the multipurpose cyclotron (K=110 MeV), which replaced the old one (K=50 MeV), and the small cyclotron (12 MeV) for the production of positron emitters for PET studies.
In keeping with the aims of its establishment, CYRIC's cyclotrons have been used for studies in various fields of research, such as nuclear physics, nuclear chemistry, solid-state physics, and elemental analysis by PIXE and activation, and for radioisotope production for use in engineering, biology, and medicine. Five divisions (Division of Accelerator, Division of Instrumentations, Division of Radio-pharmaceutical Chemistry, Division of Cyclotron Nuclear Medicine, and Division of Radiation Protection and Safety Control) work on the maintenance and development of the facilities and on studies in their individual research fields. The divisions belong to the graduate schools of Tohoku University.
Having experienced the catastrophic disaster of 2011, Tohoku University founded the International Research Institute of Disaster Science (IRIDeS). Together with collaborating organizations from many countries and across broad areas of specialization, IRIDeS conducts world-leading research on natural disaster science and disaster mitigation. Based on the lessons of the 2011 Great East Japan (Tohoku) earthquake and tsunami, IRIDeS aims to become a world centre for the study of disasters and disaster mitigation, learning from and building upon past lessons in disaster management from Japan and around the world. IRIDeS contributes to ongoing recovery and reconstruction efforts in the affected areas, conducts action-oriented research, and pursues effective disaster management to build sustainable and resilient societies. In doing so, it seeks to renew the existing paradigm of disaster management for catastrophic natural disasters, in Japan and worldwide, and to become a foundation stone of disaster mitigation management and science.
The Advanced Institute for Materials Research (AIMR) at Tohoku University is one of nine research centers established under the World Premier International Research Center Initiative (WPI) Program with the support of the Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT), which aims to develop world-class research bases in Japan. Since its establishment in 2007, AIMR has been active in conducting research and creating new systems in order to become a global center for materials science. Since 2012, AIMR has also been conducting fundamental research that finds connections between materials science and mathematics. Through prediction, AIMR aims to formulate new scientific principles that can enable the development of new materials. To ensure that the advanced materials it develops play a useful role in society, AIMR is also engaged in the development of devices and systems that make use of these materials. AIMR's mission is to contribute to the resolution of resource and environmental problems caused by human activity.
The Research Center for Marine Biology was founded on July 8th, 1924 as the Asamushi Marine Biological Station, an extension of the Biological Institute of the Faculty of Science of Tohoku University in Sendai. The center is located on a small cape near the Asamushi hot spring, facing Mutsu Bay, the northernmost bay on Honshu Island. It is about 2 km from Asamushi-onsen Station on the JR Tohoku Main Line. There are rocky-boulder shores and sand-gravel beaches in the vicinity of the center. The rich shore flora and fauna, as well as the plankton, are influenced by the cold Liman Current and the warm Tsugaru Current, which flow into the bay. Lists of common animals that have been used for experimental purposes at the center have been published (Hirai, E., 1963, Sci. Rep. Tohoku Univ., Ser. IV, Biol., 29, 369; Tsuchiya, M. and K. Osanai, 1978, Bull. Mar. Biol. Stn. Asamushi, Tohoku Univ., 16, 29). Various sea urchins (Hemicentrotus pulcherrimus, Strongylocentrotus intermedius, S. nudus, Glyptocydaris crenularis and Temnopleurus hardwicki) and ascidians (Halocynthia roretzi, Ciona savignyi, Chelyosoma siboja and Ascidia sydneiensis samea) are available for experiments almost throughout the year. The scallop (Patinopecten yessoensis) and the ascidian (Halocynthia roretzi) are cultivated in the bay. |
Math Skills packs the basic math skills every child must acquire into a nine-book series. In First Number Skills students practice numerals, counting, number words, ordinals, and beginning place value skills. Time & Money Skills focuses on telling time to the hour, half-hour, and minute and how to work with both coins and bills. Number Facts to 10 and Number Facts to 18 give students practice with the basic addition and subtraction facts. Regrouping Skills and Multiplication Facts allow students to master addition and subtraction regrouping and basic multiplication facts. Multiplication Skills is a book for those who know basic multiplication facts and are ready to use them in higher level computations. Place Value Skills focuses on ones, tens, and hundreds. Finally, Fraction Basics addresses parts of the whole, parts of a set, equivalency, etc. |
Endangered Turtles’ Trek Along Ocean Currents Revealed By Satellite
The site where Europe's spacecraft are launched into orbit, the Atlantic shoreline of French Guiana, is also the starting point for another hardly less remarkable journey: the epic migration of the critically endangered leatherback turtle.
Scientists have been using tracking sensors to follow the long treks of individual leatherbacks, then overlaying their routes with sea state data, including near-real-time maps of ocean currents gathered by satellites including ESA's ERS-2 and now Envisat.
They are working to uncover connections between the apparently meandering routes followed by turtles and the local ocean conditions, and so develop strategies to minimise the unintended but deadly threat posed to leatherbacks by deep-sea fishing.
These giant reptiles – known to reach 2.1 metres in length and weigh in at 365 kg – briefly come ashore to lay their eggs on beaches across French Guiana and neighbouring Suriname, the turtles' last remaining major nesting sites in the Atlantic Ocean. Around nine weeks later the hatchlings emerge en masse and head into the sea, one day to return when they reach maturity and lay eggs themselves.
However, each turtle's return is by no means certain. While in open water the turtles have been known to dive as deep as 1,230 metres in search of food, most of the time they do not venture deeper than 250 metres down, leaving them vulnerable to the hooks of longline fishermen – hundreds of thousands of such hooks are deployed daily across the Atlantic.
Ongoing bycatching of leatherbacks by fishermen has left the 100-million-year-old species on the brink of extinction in the Pacific and Indian oceans. In the Atlantic their numbers are higher – partly due to a ban on longline US fishermen operating in the Ocean's northern section – but the turtles are still being lost at an unsustainable rate.
A paper was recently published in Nature summarising the work done so far in tracking leatherbacks through the Atlantic, submitted by a team of researchers from France's National Centre for Scientific Research in Strasbourg, neighbouring Louis Pasteur University, the French Guiana Regional Department of the Environment and the company Collecte Localisation Satellites (CLS) in Ramonville, specialising in satellite-based systems for location-finding, data collection and Earth Observation.
Pacific leatherbacks follow narrow migration corridors. Researchers hoped that if their Atlantic counterparts acted in the same way then fishing could be restricted across these zones.
Starting in 1999 individual turtles were tracked using the CLS-run Argos system, based on radio-emitting tags whose position can be tracked worldwide to a maximum accuracy of 150 metres. Six American NOAA spacecraft currently carry Argos receivers, with ESA's MetOp series due to join the system following their initial satellite launch next year.
The turtles' tracks were then overlaid with maps of sea level anomalies obtained by merging data from the radar altimeter aboard ESA's ERS-2 with data from another aboard the NASA-CNES satellite TOPEX-Poseidon.
ERS-2, like its successor Envisat, is part of the select group of satellites equipped with a Radar Altimeter (RA) instrument. By firing thousands of radar pulses off the surface of the sea every second, it makes extremely precise ocean height measurements possible. Height anomalies detected by this type of sensor are often indicators of the presence of ocean currents and eddies: warm currents can stand up to a metre above colder waters.
By merging multiple radar altimeter results together, sea level anomalies can be measured more frequently and at higher resolution than any one spacecraft could achieve. For example, now that ERS-2's global mission is over, results from Envisat's RA-2 instrument are being combined with similar data from the joint French-US Jason spacecraft and the US Navy's GFO.
“The altimetry data has been very useful to our work because we have been able to check the turtles' trajectories against ocean currents,” said Philippe Gaspar, co-author of the Nature paper and Head of the Satellite Oceanography Division of CLS. “What we have found is that their relationship with currents alters considerably over the course of their journeys.
Unlike their Pacific relations, the Atlantic leatherbacks do not follow narrow migration corridors but disperse widely. To begin with, the leatherbacks carry out long, nearly straight migrations either to the north or to the Equator, swimming across currents as they encounter them. One made it to within 500 km of West Africa before turning back; another came close to Nova Scotia.
“Then having either made it to the Gulf Stream area or to the equatorial belt, the turtles tend to slow down and follow the frontal areas associated with local ocean current systems, which are generally rich in marine life.”
Unfortunately, fishing fleets target these frontal systems for exactly the same reason, so the turtles are placing themselves in danger. This finding means that limited closures of Atlantic fishing areas are unlikely to have much impact on turtle bycatch reduction, and other solutions will have to be considered, such as the turtle-friendly fishing gear and hooks recently developed by NOAA and endorsed by the World Wildlife Fund.
Meanwhile leatherback tracking continues on an ongoing basis, Gaspar added: “We are now looking at estimating the swimming speed of turtles during their trips by obtaining their total velocity from the Argos receivers, then subtracting the current velocity made available to us by altimetry. This has never been done before and should provide us with useful information on the energy they expend throughout their migration.”
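The speed estimate Gaspar describes is, at heart, a vector subtraction: the swimming velocity is the turtle's total ground velocity (derived from successive Argos fixes) minus the local current velocity (from altimetry). The short Python sketch below illustrates only that arithmetic; the function name, components, and sample numbers are hypothetical and are not part of the Argos or altimetry products.

```python
import math

def swimming_velocity(ground_velocity, current_velocity):
    """Estimate swimming velocity (east/north components, m/s) by
    subtracting the ocean-current velocity from the ground velocity."""
    ve = ground_velocity[0] - current_velocity[0]  # eastward component
    vn = ground_velocity[1] - current_velocity[1]  # northward component
    speed = math.hypot(ve, vn)                     # magnitude of the swimming velocity
    return (ve, vn), speed

# Hypothetical example: the turtle moves north-east over the ground at about
# 0.9 m/s while the current pushes 0.4 m/s east and 0.1 m/s north.
(ve, vn), speed = swimming_velocity((0.6, 0.7), (0.4, 0.1))
print(f"swimming velocity: ({ve:.2f}, {vn:.2f}) m/s, speed {speed:.2f} m/s")
```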
French schools have been given the chance to take part in an educational oceanographic scheme called Argonautica, with classes participating in the Argo-luth project, analysing turtle movements against outputs from MERCATOR, a model that presently covers the North and Equatorial Atlantic Ocean and assimilates radar altimeter data on an operational basis.
How to use ROUNDDOWN function in Microsoft Excel
In this tutorial we will learn how to use the ROUNDDOWN function in Microsoft Excel. The ROUNDDOWN function differs from the standard ROUND function in that it always rounds a number down (toward zero) rather than to the nearest value. For example, rounded to one decimal place, 2.95 becomes 3.0 with ROUND but 2.9 with ROUNDDOWN.
Microsoft Excel is a powerful spreadsheet software developed by Microsoft Corporation and is part of the Microsoft Office suite. It is widely used for data analysis, data management, and data presentation. Excel provides a user-friendly interface with a variety of features and tools that allow users to manipulate and analyze data with ease. It includes built-in functions and formulas for mathematical and statistical calculations, pivot tables for data summaries and grouping, and chart creation tools for visual representation of data.
Step 1 – Select a Blank Cell
– Select the blank cell where you want the rounded-down result to appear.
Step 2 – Place an Equals Sign
– Place an equals sign (=) in the targeted blank cell.
Step 3 – Use the ROUNDDOWN function
– The syntax of the ROUNDDOWN function is =ROUNDDOWN(number, num_digits); in this example it is entered as =ROUNDDOWN(A2, 2).
– The first argument, A2, is the cell containing the number to be rounded.
– The second argument, 2, specifies the number of decimal places to round down to.
Step 4 – Press the Enter Key and Apply the ROUNDDOWN function to Each Row
– Press the Enter key to get the results.
– Use the fill handle (click and drag) to apply the ROUNDDOWN function to each row.
Step 5 – Comparing ROUNDDOWN function with ROUND function
– By comparing the outputs of the ROUNDDOWN function and the ROUND function, we can see that ROUND rounds to the nearest value, while ROUNDDOWN always rounds toward zero, simply dropping the extra digits. |
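As a cross-check outside Excel, here is a small Python sketch (using the standard decimal module) that mimics the two rounding rules described above. It is only an illustration of the arithmetic, not Excel's own implementation.

```python
from decimal import Decimal, ROUND_DOWN, ROUND_HALF_UP

def rounddown(number, num_digits=0):
    """Round toward zero to num_digits decimal places, mimicking Excel's ROUNDDOWN."""
    quantum = Decimal(1).scaleb(-num_digits)   # e.g. Decimal('0.1') for num_digits=1
    return Decimal(str(number)).quantize(quantum, rounding=ROUND_DOWN)

def round_nearest(number, num_digits=0):
    """Round to the nearest value, ties away from zero, like Excel's ROUND."""
    quantum = Decimal(1).scaleb(-num_digits)
    return Decimal(str(number)).quantize(quantum, rounding=ROUND_HALF_UP)

print(rounddown(2.95, 1))      # 2.9  -> always rounds toward zero
print(round_nearest(2.95, 1))  # 3.0  -> rounds to the nearest value
print(rounddown(-2.95, 1))     # -2.9 -> toward zero, not toward negative infinity
```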
Coagulation is a complex process by which blood forms clots. It is an important part of hemostasis, the cessation of blood loss from a damaged vessel, wherein a damaged blood vessel wall is covered by a platelet and fibrin-containing clot to stop bleeding and begin repair of the damaged vessel. Disorders of coagulation can lead to an increased risk of bleeding (hemorrhage) or obstructive clotting (thrombosis).
Coagulation is highly conserved throughout biology; in all mammals, coagulation involves both a cellular (platelet) and a protein (coagulation factor) component. The system in humans has been the most extensively researched and is therefore the best understood.
Coagulation begins almost instantly after an injury to the blood vessel has damaged the endothelium lining the vessel. Exposure of the blood to proteins such as tissue factor initiates changes to blood platelets and the plasma protein fibrinogen, a clotting factor. Platelets immediately form a plug at the site of injury; this is called primary hemostasis. Secondary hemostasis occurs simultaneously: Proteins in the blood plasma, called coagulation factors or clotting factors, respond in a complex cascade to form fibrin strands, which strengthen the platelet plug.
Damage to blood vessel walls exposes subendothelial proteins, most notably von Willebrand factor (vWF), present under the endothelium. vWF is a protein secreted by healthy endothelium, forming a layer between the endothelium and the underlying basement membrane. When the endothelium is damaged, the normally isolated, underlying vWF is exposed to white blood cells and recruits Factor VIII, collagen, and other clotting factors. Circulating platelets bind to collagen with surface collagen-specific glycoprotein Ia/IIa receptors. This adhesion is strengthened further by the additional circulating protein vWF, which forms additional links between the platelets' glycoprotein Ib/IX/V and the collagen fibrils. These adhesions activate the platelets.
Activated platelets release the contents of stored granules into the blood plasma. The granules include ADP, serotonin, platelet-activating factor (PAF), vWF, platelet factor 4, and thromboxane A2 (TXA2), which, in turn, activate additional platelets. The granules' contents activate a Gq-linked protein receptor cascade, resulting in increased calcium concentration in the platelets' cytosol. The calcium activates protein kinase C, which, in turn, activates phospholipase A2 (PLA2). PLA2 then modifies the integrin membrane glycoprotein IIb/IIIa, increasing its affinity to bind fibrinogen. The activated platelets change shape from spherical to stellate, and the fibrinogen cross-links with glycoprotein IIb/IIIa aid in aggregation of adjacent platelets (completing primary hemostasis).
The coagulation cascade
The coagulation cascade of secondary hemostasis has two pathways which lead to fibrin formation. These are the contact activation pathway (also known as the intrinsic pathway), and the tissue factor pathway (also known as the extrinsic pathway). It was previously thought that the coagulation cascade consisted of two pathways of equal importance joined to a common pathway. It is now known that the primary pathway for the initiation of blood coagulation is the tissue factor pathway. The pathways are a series of reactions, in which a zymogen (inactive enzyme precursor) of a serine protease and its glycoprotein co-factor are activated to become active components that then catalyze the next reaction in the cascade, ultimately resulting in cross-linked fibrin. Coagulation factors are generally indicated by Roman numerals, with a lowercase a appended to indicate an active form.
The coagulation factors are generally serine proteases (enzymes). There are some exceptions. For example, FVIII and FV are glycoproteins, and Factor XIII is a transglutaminase. Serine proteases act by cleaving other proteins at specific serine residues. The coagulation factors circulate as inactive zymogens. The coagulation cascade is classically divided into three pathways. The tissue factor and contact activation pathways both activate the "final common pathway" of factor X, thrombin and fibrin.
Tissue factor pathway (extrinsic)
The main role of the tissue factor pathway is to generate a "thrombin burst," a process by which thrombin, the most important constituent of the coagulation cascade in terms of its feedback activation roles, is released instantaneously. FVIIa circulates in a higher amount than any other activated coagulation factor.
- Following damage to the blood vessel, FVII leaves the circulation and comes into contact with tissue factor (TF) expressed on tissue-factor-bearing cells (stromal fibroblasts and leukocytes), forming an activated complex (TF-FVIIa).
- TF-FVIIa activates FIX and FX.
- FVII is itself activated by thrombin, FXIa, FXII and FXa.
- The activation of FX (to form FXa) by TF-FVIIa is almost immediately inhibited by tissue factor pathway inhibitor (TFPI).
- FXa and its co-factor FVa form the prothrombinase complex, which activates prothrombin to thrombin.
- Thrombin then activates other components of the coagulation cascade, including FV and FVIII (which activates FXI, which, in turn, activates FIX), and activates and releases FVIII from being bound to vWF.
- FVIIIa is the co-factor of FIXa, and together they form the "tenase" complex, which activates FX; and so the cycle continues. ("Tenase" is a contraction of "ten" and the suffix "-ase" used for enzymes.)
Contact activation pathway (intrinsic)
The contact activation pathway begins with formation of the primary complex on collagen by high-molecular-weight kininogen (HMWK), prekallikrein, and FXII (Hageman factor). Prekallikrein is converted to kallikrein and FXII becomes FXIIa. FXIIa converts FXI into FXIa. Factor XIa activates FIX, which with its co-factor FVIIIa forms the tenase complex, which activates FX to FXa. The minor role that the contact activation pathway has in initiating clot formation can be illustrated by the fact that patients with severe deficiencies of FXII, HMWK, and prekallikrein do not have a bleeding disorder. Instead, the contact activation system seems to be more involved in inflammation. Patients without FXII (Hageman factor) suffer from constant infections.
Final common pathway
Thrombin has a large array of functions. Its primary role is the conversion of fibrinogen to fibrin, the building block of a hemostatic plug. In addition, it activates Factors VIII and V and their inhibitor protein C (in the presence of thrombomodulin), and it activates Factor XIII, which forms covalent bonds that crosslink the fibrin polymers that form from activated monomers.
Following activation by the contact factor or tissue factor pathways, the coagulation cascade is maintained in a prothrombotic state by the continued activation of FVIII and FIX to form the tenase complex, until it is down-regulated by the anticoagulant pathways.
Various substances are required for the proper functioning of the coagulation cascade:
- Calcium and phospholipid (a platelet membrane constituent) are required for the tenase and prothrombinase complexes to function. Calcium mediates the binding of the complexes via the terminal gamma-carboxy residues on FXa and FIXa to the phospholipid surfaces expressed by platelets, as well as procoagulant microparticles or microvesicles shed from them. Calcium is also required at other points in the coagulation cascade.
- Vitamin K is an essential factor to a hepatic gamma-glutamyl carboxylase that adds a carboxyl group to glutamic acid residues on factors II, VII, IX and X, as well as Protein S, Protein C and Protein Z. In adding the gamma-carboxyl group to glutamate residues on the immature clotting factors Vitamin K is itself oxidized. Another enzyme, Vitamin K epoxide reductase, (VKORC) reduces vitamin K back to its active form. Vitamin K epoxide reductase is pharmacologically important as a target for anticoagulant drugs warfarin and related coumarins such as acenocoumarol, phenprocoumon, and dicumarol. These drugs create a deficiency of reduced vitamin K by blocking VKORC, thereby inhibiting maturation of clotting factors. Other deficiencies of vitamin K (e.g., in malabsorption), or disease (hepatocellular carcinoma) impairs the function of the enzyme and leads to the formation of PIVKAs (proteins formed in vitamin K absence); this causes partial or non-gamma carboxylation, and affects the coagulation factors' ability to bind to expressed phospholipid.
Five mechanisms keep platelet activation and the coagulation cascade in check. Abnormalities can lead to an increased tendency toward thrombosis:
- Protein C is a major physiological anticoagulant. It is a vitamin K-dependent serine protease enzyme that is activated by thrombin into activated protein C (APC). Protein C is activated in a sequence that starts with Protein C and thrombin binding to a cell surface protein thrombomodulin. Thrombomodulin binds these proteins in such a way that it activates Protein C. The activated form, along with protein S and a phospholipid as cofactors, degrades FVa and FVIIIa. Quantitative or qualitative deficiency of either may lead to thrombophilia (a tendency to develop thrombosis). Impaired action of Protein C (activated Protein C resistance), for example by having the "Leiden" variant of Factor V or high levels of FVIII also may lead to a thrombotic tendency.
- Antithrombin is a serine protease inhibitor (serpin) that degrades the serine proteases: thrombin, FIXa, FXa, FXIa, and FXIIa. It is constantly active, but its adhesion to these factors is increased by the presence of heparan sulfate (a glycosaminoglycan) or the administration of heparins (different heparinoids increase affinity to FXa, thrombin, or both). Quantitative or qualitative deficiency of antithrombin (inborn or acquired, e.g., in proteinuria) leads to thrombophilia.
- Tissue factor pathway inhibitor (TFPI) limits the action of tissue factor (TF). It also inhibits excessive TF-mediated activation of FIX and FX.
- Plasmin is generated by proteolytic cleavage of plasminogen, a plasma protein synthesized in the liver. This cleavage is catalyzed by tissue plasminogen activator (t-PA), which is synthesized and secreted by endothelium. Plasmin proteolytically cleaves fibrin into fibrin degradation products that inhibit excessive fibrin formation.
- Prostacyclin (PGI2) is released by endothelium and activates platelet Gs protein-linked receptors. This, in turn, activates adenylyl cyclase, which synthesizes cAMP. cAMP inhibits platelet activation by decreasing cytosolic levels of calcium and, by doing so, inhibits the release of granules that would lead to activation of additional platelets and the coagulation cascade.
Role in immune system
The coagulation system overlaps with the immune system. Coagulation can physically trap invading microbes in blood clots. Also, some products of the coagulation system can contribute to the innate immune system by their ability to increase vascular permeability and act as chemotactic agents for phagocytic cells. In addition, some of the products of the coagulation system are directly antimicrobial. For example, beta-lysine, a protein produced by platelets during coagulation, can cause lysis of many Gram-positive bacteria by acting as a cationic detergent. Many acute-phase proteins of inflammation are involved in the coagulation system. In addition, pathogenic bacteria may secrete agents that alter the coagulation system, e.g. coagulase and streptokinase.
Testing of coagulation
Numerous tests are used to assess the function of the coagulation system:
- Common: aPTT, PT (also used to determine INR), fibrinogen testing (often by the Clauss method), platelet count, platelet function testing (often by PFA-100).
- Other: TCT, bleeding time, mixing test (whether an abnormality corrects if the patient's plasma is mixed with normal plasma), coagulation factor assays, antiphospholipid antibodies, D-dimer, genetic tests (e.g. factor V Leiden, prothrombin mutation G20210A), dilute Russell's viper venom time (dRVVT), miscellaneous platelet function tests, thromboelastography (TEG or Sonoclot), euglobulin lysis time (ELT).
The contact activation (intrinsic) pathway is initiated by activation of the "contact factors" of plasma, and can be measured by the activated partial thromboplastin time (aPTT) test.
The tissue factor (extrinsic) pathway is initiated by release of tissue factor (a specific cellular lipoprotein), and can be measured by the prothrombin time (PT) test. PT results are often reported as ratio (INR value) to monitor dosing of oral anticoagulants such as warfarin.
The quantitative and qualitative screening of fibrinogen is measured by the thrombin clotting time (TCT). Measurement of the exact amount of fibrinogen present in the blood is generally done using the Clauss method for fibrinogen testing. Many analysers are capable of measuring a "derived fibrinogen" level from the graph of the Prothrombin time clot.
If a coagulation factor is part of the contact activation or tissue factor pathway, a deficiency of that factor will affect only one of the tests: thus hemophilia A, a deficiency of factor VIII, which is part of the contact activation pathway, results in an abnormally prolonged aPTT test but a normal PT test. The exceptions are prothrombin, fibrinogen, and some variants of FX that can be detected only by either aPTT or PT. If an abnormal PT or aPTT is present, additional testing will occur to determine which (if any) factor is present at aberrant concentrations.
Deficiencies of fibrinogen (quantitative or qualitative) will affect all screening tests.
| Condition | Prothrombin time | Partial thromboplastin time | Bleeding time | Platelet count |
|---|---|---|---|---|
| Vitamin K deficiency or warfarin | prolonged | prolonged | unaffected | unaffected |
| Disseminated intravascular coagulation | prolonged | prolonged | prolonged | decreased |
| von Willebrand disease | unaffected | prolonged | prolonged | unaffected |
| Haemophilia | unaffected | prolonged | unaffected | unaffected |
| Aspirin | unaffected | unaffected | prolonged | unaffected |
| Thrombocytopenia | unaffected | unaffected | prolonged | decreased |
| Early liver failure | prolonged | unaffected | unaffected | unaffected |
| End-stage liver failure | prolonged | prolonged | prolonged | decreased |
| Uremia | unaffected | unaffected | prolonged | unaffected |
| Congenital afibrinogenemia | prolonged | prolonged | prolonged | unaffected |
| Factor V deficiency | prolonged | prolonged | unaffected | unaffected |
| Factor X deficiency as seen in amyloid purpura | prolonged | prolonged | unaffected | unaffected |
| Glanzmann's thrombasthenia | unaffected | unaffected | prolonged | unaffected |
| Bernard-Soulier syndrome | unaffected | unaffected | prolonged | unaffected |
Role in disease
Problems with coagulation may dispose to hemorrhage, thrombosis, and occasionally both, depending on the nature of the pathology.
Platelet conditions may be congenital or acquired. Some inborn platelet pathologies are Glanzmann's thrombasthenia, Bernard-Soulier syndrome (abnormal glycoprotein Ib-IX-V complex), gray platelet syndrome (deficient alpha granules), and delta storage pool deficiency (deficient dense granules). Most are rare conditions. Most inborn platelet pathologies predispose to hemorrhage. Von Willebrand disease is due to deficiency or abnormal function of von Willebrand factor, and leads to a similar bleeding pattern; its milder forms are relatively common.
Decreased platelet numbers may be due to various causes, including insufficient production (e.g., in myelodysplastic syndrome or other bone marrow disorders), destruction by the immune system (immune thrombocytopenic purpura/ITP), and consumption due to various causes (thrombotic thrombocytopenic purpura/TTP, hemolytic-uremic syndrome/HUS, paroxysmal nocturnal hemoglobinuria/PNH, disseminated intravascular coagulation/DIC, heparin-induced thrombocytopenia/HIT). Most consumptive conditions lead to platelet activation, and some are associated with thrombosis.
Disease and clinical significance of thrombosis
The best-known coagulation factor disorders are the hemophilias. The three main forms are hemophilia A (factor VIII deficiency), hemophilia B (factor IX deficiency or "Christmas disease") and hemophilia C (factor XI deficiency, mild bleeding tendency). Hemophilia A and B are X-linked recessive disorders, whereas hemophilia C is a much rarer autosomal recessive disorder most commonly seen in Ashkenazi Jews.
Von Willebrand disease (which behaves more like a platelet disorder except in severe cases) is the most common hereditary bleeding disorder and is inherited in an autosomal recessive or dominant manner. In this disease, there is a defect in von Willebrand factor (vWF), which mediates the binding of glycoprotein Ib (GPIb) to collagen. This binding helps mediate the activation of platelets and the formation of primary hemostasis.
Bernard-Soulier syndrome is a defect or deficiency in GPIb. GPIb, the receptor for vWF, can be defective and lead to lack of primary clot formation (primary hemostasis) and increased bleeding tendency. This is an autosomal recessive inherited disorder.
Thrombasthenia of Glanzman and Naegeli (Glanzmann thrombasthenia) is extremely rare. It is characterized by a defect in GPIIb/IIIa fibrinogen receptor complex. When GPIIb/IIIa receptor is dysfunctional, fibrinogen cannot cross-link platelets, which inhibits primary hemostasis. This is an autosomal recessive inherited disorder.
In liver failure (acute and chronic forms), there is insufficient production of coagulation factors by the liver; this may increase bleeding risk.
Deficiency of Vitamin K may also contribute to bleeding disorders because clotting factor maturation depends on Vitamin K.
Thrombosis is the pathological development of blood clots. These clots may break free and become mobile, forming an embolus, or grow to such a size that they occlude the vessel in which they developed. An embolism is said to occur when the thrombus (blood clot) becomes a mobile embolus and migrates to another part of the body, interfering with blood circulation and hence impairing organ function downstream of the occlusion. This causes ischemia and often leads to ischemic necrosis of tissue. Most cases of thrombosis are due to acquired extrinsic problems (surgery, cancer, immobility, obesity, economy class syndrome), but a small proportion of people harbor predisposing conditions known collectively as thrombophilia (e.g., antiphospholipid syndrome, factor V Leiden, and various other rarer genetic disorders).
Mutations in factor XII have been associated with an asymptomatic prolongation in the clotting time and possibly a tendency toward thrombophlebitis. Other mutations have been linked with a rare form of hereditary angioedema (type III).
Adsorbent chemicals, such as zeolites, and other hemostatic agents are also used to seal severe injuries quickly (such as in traumatic bleeding secondary to gunshot wounds). Thrombin and fibrin glue are used surgically to treat bleeding and to thrombose aneurysms.
Coagulation factor concentrates are used to treat hemophilia, to reverse the effects of anticoagulants, and to treat bleeding in patients with impaired coagulation factor synthesis or increased consumption. Prothrombin complex concentrate, cryoprecipitate and fresh frozen plasma are commonly-used coagulation factor products. Recombinant activated human factor VII is increasingly popular in the treatment of major bleeding.
Tranexamic acid and aminocaproic acid inhibit fibrinolysis, and lead to a de facto reduced bleeding rate. Before its withdrawal, aprotinin was used in some forms of major surgery to decrease bleeding risk and need for blood products.
Anticoagulants and anti-platelet agents are amongst the most commonly used medications. Anti-platelet agents include aspirin, clopidogrel, dipyridamole and ticlopidine; the parenteral glycoprotein IIb/IIIa inhibitors are used during angioplasty. Of the anticoagulants, warfarin (and related coumarins) and heparin are the most commonly used. Warfarin affects the vitamin K-dependent clotting factors (II, VII, IX, X), whereas heparin and related compounds increase the action of antithrombin on thrombin and factor Xa. A newer class of drugs, the direct thrombin inhibitors, is under development; some members are already in clinical use (such as lepirudin). Also under development are other small molecular compounds that interfere directly with the enzymatic action of particular coagulation factors (e.g., rivaroxaban, dabigatran, apixaban).
Coagulation factors and related substances

| Number and/or name | Function | Associated genetic disorders |
|---|---|---|
| I (fibrinogen) | Forms clot (fibrin) | Congenital afibrinogenemia, Familial renal amyloidosis |
| II (prothrombin) | Its active form (IIa) activates I, V, VII, VIII, XI, XIII, protein C, platelets | Hypoprothrombinemia, Thrombophilia |
| Tissue factor | Co-factor of VIIa (formerly known as factor III) | |
| Calcium | Required for coagulation factors to bind to phospholipid (formerly known as factor IV) | |
| V (proaccelerin, labile factor) | Co-factor of X with which it forms the prothrombinase complex | Activated protein C resistance |
| VI | Unassigned – old name of Factor Va | |
| VII (stable factor, proconvertin) | Activates IX, X | Congenital proconvertin/factor VII deficiency |
| VIII (Antihemophilic factor A) | Co-factor of IX with which it forms the tenase complex | Haemophilia A |
| IX (Antihemophilic factor B or Christmas factor) | Activates X: forms tenase complex with factor VIII | Haemophilia B |
| X (Stuart-Prower factor) | Activates II: forms prothrombinase complex with factor V | Congenital Factor X deficiency |
| XI (plasma thromboplastin antecedent) | Activates IX | Haemophilia C |
| XII (Hageman factor) | Activates factor XI, VII and prekallikrein | Hereditary angioedema type III |
| XIII (fibrin-stabilizing factor) | Crosslinks fibrin | Congenital Factor XIIIa/b deficiency |
| von Willebrand factor | Binds to VIII, mediates platelet adhesion | von Willebrand disease |
| prekallikrein (Fletcher factor) | Activates XII and prekallikrein; cleaves HMWK | Prekallikrein/Fletcher Factor deficiency |
| high-molecular-weight kininogen (HMWK) (Fitzgerald factor) | Supports reciprocal activation of XII, XI, and prekallikrein | Kininogen deficiency |
| fibronectin | Mediates cell adhesion | Glomerulopathy with fibronectin deposits |
| antithrombin III | Inhibits IIa, Xa, and other proteases | Antithrombin III deficiency |
| heparin cofactor II | Inhibits IIa, cofactor for heparin and dermatan sulfate ("minor antithrombin") | Heparin cofactor II deficiency |
| protein C | Inactivates Va and VIIIa | Protein C deficiency |
| protein S | Cofactor for activated protein C (APC, inactive when bound to C4b-binding protein) | Protein S deficiency |
| protein Z | Mediates thrombin adhesion to phospholipids and stimulates degradation of factor X by ZPI | Protein Z deficiency |
| Protein Z-related protease inhibitor (ZPI) | Degrades factors X (in presence of protein Z) and XI (independently) | |
| plasminogen | Converts to plasmin, lyses fibrin and other proteins | Plasminogen deficiency, type I (ligneous conjunctivitis) |
| alpha 2-antiplasmin | Inhibits plasmin | Antiplasmin deficiency |
| tissue plasminogen activator (tPA) | Activates plasminogen | Familial hyperfibrinolysis and thrombophilia |
| urokinase | Activates plasminogen | Quebec platelet disorder |
| plasminogen activator inhibitor-1 (PAI1) | Inactivates tPA & urokinase (endothelial PAI) | Plasminogen activator inhibitor-1 deficiency |
| plasminogen activator inhibitor-2 (PAI2) | Inactivates tPA & urokinase (placental PAI) | |
| cancer procoagulant | Pathological factor X activator linked to thrombosis in cancer | |
Theories on the coagulation of blood have existed since antiquity. Physiologist Johannes Müller (1801–1858) described fibrin, the substance of a thrombus. Its soluble precursor, fibrinogen, was thus named by Rudolf Virchow (1821–1902), and isolated chemically by Prosper Sylvain Denis (1799–1863). Alexander Schmidt suggested that the conversion from fibrinogen to fibrin is the result of an enzymatic process, and labeled the hypothetical enzyme "thrombin" and its precursor "prothrombin". Arthus discovered in 1890 that calcium was essential in coagulation. Platelets were identified in 1865, and their function was elucidated by Giulio Bizzozero in 1882.
The theory that thrombin is generated by the presence of tissue factor was consolidated by Paul Morawitz in 1905. At this stage, it was known that thrombokinase/thromboplastin (factor III) is released by damaged tissues, reacting with prothrombin (II), which, together with calcium (IV), forms thrombin, which converts fibrinogen into fibrin (I).
The remainder of the biochemical factors in the process of coagulation were largely discovered in the 20th century.
A first clue as to the actual complexity of the system of coagulation was the discovery of proaccelerin (initially and later called Factor V) by Paul Owren (1905–1990) in 1947. He also postulated its function to be the generation of accelerin (Factor VI), which later turned out to be the activated form of V (or Va); hence, VI is not now in active use.
Factor VII (also known as serum prothrombin conversion accelerator or proconvertin, precipitated by barium sulfate) was discovered in a young female patient in 1949 and 1951 by different groups.
Factor VIII turned out to be deficient in the clinically recognised but etiologically elusive hemophilia A; it was identified in the 1950s and is alternatively called antihemophilic globulin due to its capability to correct hemophilia A.
Factor IX was discovered in 1952 in a young patient with hemophilia B named Stephen Christmas (1947–1993). His deficiency was described by Dr. Rosemary Biggs and Professor R.G. MacFarlane in Oxford, UK. The factor is, hence, called Christmas Factor. Christmas lived in Canada, and campaigned for blood transfusion safety until succumbing to transfusion-related AIDS at age 46. An alternative name for the factor is plasma thromboplastin component, given by an independent group in California.
Hageman factor, now known as factor XII, was identified in 1955 in an asymptomatic patient with a prolonged bleeding time named John Hageman. Factor X, or Stuart-Prower factor, followed in 1956. This protein was identified in a Ms. Audrey Prower of London, who had a lifelong bleeding tendency. In 1957, an American group identified the same factor in a Mr. Rufus Stuart. Factors XI and XIII were identified in 1953 and 1961, respectively.
The usage of Roman numerals rather than eponyms or systematic names was agreed upon during annual conferences (starting in 1955) of hemostasis experts. In 1962, consensus was achieved on the numbering of factors I-XII. This committee evolved into the present-day International Committee on Thrombosis and Hemostasis (ICTH). Assignment of numerals ceased in 1963 after the naming of Factor XIII. The names Fletcher Factor and Fitzgerald Factor were given to further coagulation-related proteins, namely prekallikrein and high-molecular-weight kininogen, respectively.
Factors III and VI are unassigned, as thromboplastin was never identified, and actually turned out to consist of ten further factors, and accelerin was found to be activated Factor V.
All mammals have an extremely closely related blood coagulation process, using a combined cellular and serine protease process. In fact, it is possible for any mammalian coagulation factor to "cleave" its equivalent target in any other mammal. The only nonmammalian animal known to use serine proteases for blood coagulation is the horseshoe crab.
- ^ Furie B, Furie BC (2005). "Thrombus formation in vivo". J. Clin. Invest. 115 (12): 3355–62. doi:10.1172/JCI26987. PMC 1297262. PMID 16322780. http://www.jci.org/cgi/content/full/115/12/3355.
- ^ IMMUNOLOGY - CHAPTER ONE > INNATE (NON-SPECIFIC) IMMUNITY Gene Mayer, Ph.D. Immunology Section of Microbiology and Immunology On-line. University of South Carolina
- ^ Kaplan QBook - USMLE Step 1 - 5th edition - page 254
- ^ Schmidt A (1872). "Neue Untersuchungen ueber die Fasserstoffesgerinnung". Pflüger's Archiv für die gesamte Physiologie 6: 413–538. doi:10.1007/BF01612263.
- ^ Schmidt A. Zur Blutlehre. Leipzig: Vogel, 1892.
- ^ Arthus M, Pagès C (1890). "Nouvelle theorie chimique de la coagulation du sang". Arch Physiol Norm Pathol 5: 739–46.
- ^ Shapiro SS (2003). "Treating thrombosis in the 21st century". N. Engl. J. Med. 349 (18): 1762–4. doi:10.1056/NEJMe038152. PMID 14585945.
- ^ Brewer DB (2006). "Max Schultze (1865), G. Bizzozero (1882) and the discovery of the platelet". Br. J. Haematol. 133 (3): 251–8. doi:10.1111/j.1365-2141.2006.06036.x. PMID 16643426.
- ^ Morawitz P (1905). "Die Chemie der Blutgerinnung". Ergebn Physiol 4: 307–422.
- ^ a b c d e f Giangrande PL (2003). "Six characters in search of an author: the history of the nomenclature of coagulation factors". Br. J. Haematol. 121 (5): 703–12. doi:10.1046/j.1365-2141.2003.04333.x. PMID 12780784.
- ^ MacFarlane RG (1964). "An enzyme cascade in the blood clotting mechanism, and its function as a biochemical amplifier". Nature 202 (4931): 498–9. doi:10.1038/202498a0. PMID 14167839.
- ^ Davie EW, Ratnoff OD (1964). "Waterfall sequence for intrinsic blood clotting". Science 145 (3638): 1310–2. doi:10.1126/science.145.3638.1310. PMID 14173416.
- ^ Wright IS (1962). "The Nomenclature of Blood Clotting Factors". Can Med Assoc J 86 (8): 373–4. PMC 1848865. PMID 14008442. http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=1848865. Full text at PMC: 1848865
- UMich Orientation of Proteins in Membranes families/superfamily-97 - Calculated orientations of complexes with GLA domains in membrane
- UMich Orientation of Proteins in Membranes families/superfamily-48 - Discoidin domains of blood coagulation factors
What is a limit, and what types of limits are there?
In mathematics, a limit is the value that a function's output approaches as its input approaches some given value. Limits are important in calculus and mathematical analysis, and are used to define integrals, derivatives, and continuity.
The limit of a sum is equal to the sum of the limits. The limit of a difference is equal to the difference of the limits. The limit of a constant times a function is equal to the constant times the limit of the function. The limit of a product is equal to the product of the limits.
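Written out formally, and assuming the individual limits of f and g at a both exist, these laws read:

$$\lim_{x\to a}\bigl(f(x) \pm g(x)\bigr) = \lim_{x\to a} f(x) \pm \lim_{x\to a} g(x), \qquad \lim_{x\to a} c\,f(x) = c \lim_{x\to a} f(x), \qquad \lim_{x\to a} f(x)\,g(x) = \Bigl(\lim_{x\to a} f(x)\Bigr)\Bigl(\lim_{x\to a} g(x)\Bigr).$$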
Limits. There are three different types of limits: left-hand limits, right-hand limits, and two-sided limits. To determine whether a specific limit exists or does not exist, you must first recognize what type of limit you are seeking for a given function f(x).
Basic Limits — the minimum limits of liability that can be purchased by an insured.
Properties of Limits
$\lim_{x\to a} c = c$, where $c$ is a constant quantity. $\lim_{x\to a} x^n = a^n$, if $n$ is a positive integer. $\lim_{x\to 0^+} 1/x^r = +\infty$. $\lim_{x\to 0^-} 1/x^r = +\infty$, if $r$ is even.
Logical validity is relative to logical systems. Some arguments are logically valid in one logic but logically invalid in another logic. There are various logical systems, each of which has been developed based on some notion of what logic is or should be.
a psychophysical procedure for determining the sensory threshold by gradually increasing or decreasing the magnitude of the stimulus presented in discrete steps.
- Be clear with the limits. Specific rules and expectations should be presented in such a way that children can understand and explain them in their own words. ...
- Be consistent. When caregivers provide limits, they should be established and held firm to. ...
- Provide an alternative behavior.
Some functions do not have any kind of limit as $x$ tends to infinity. For example, consider the function $f(x) = x \sin x$. This function does not get close to any particular real number as $x$ gets large, because we can always choose a value of $x$ to make $f(x)$ larger than any number we choose.
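A quick numerical check makes this concrete. The short Python sketch below (the sample points are chosen purely for illustration) evaluates f(x) = x sin(x) at ever larger x and shows the outputs swinging between large and near-zero values instead of settling toward a single number.

```python
import math

def f(x):
    """f(x) = x * sin(x): oscillates with ever-growing amplitude."""
    return x * math.sin(x)

# Sample near peaks and zeros of sin(x); the outputs do not approach
# any single real number as x grows.
for x in [10.0, 100.0, 1000.0, (2 * 100 + 0.5) * math.pi, 2 * 1000 * math.pi]:
    print(f"x = {x:12.2f}   f(x) = {f(x):14.2f}")
```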
A finite limit is a limit over a finite diagram - that is, one whose shape is a finite category. More generally, in higher category theory, a finite limit is a limit of a diagram that is a finite (n,r)-category.
How many types of limits are there in mathematics?
Besides ordinary, two-sided limits, there are one-sided limits (left-hand limits and right-hand limits), infinite limits and limits at infinity.
Some limits can be determined by inspection just by looking at the form of the limit – these predictable limit forms are called determinate.
limit, mathematical concept based on the idea of closeness, used primarily to assign values to certain functions at points where no values are defined, in such a way as to be consistent with nearby values.
Special limit theorems are a set of rules to evaluate certain limits. They are “special” because they tackle limits that can't easily be evaluated by any of the usual methods. In a way, they are shortcuts to dealing with specific forms of limits.
The formal statement says that the limit L is the number such that if you take values of x arbitrarily close to a (that is, within delta of a), then the result of f applied to those values must be arbitrarily close to L (that is, within epsilon of L).
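In symbols, this is the standard epsilon-delta definition:

$$\lim_{x\to a} f(x) = L \iff \forall \varepsilon > 0\;\exists \delta > 0 : \; 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon.$$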
Limit Laws are the properties of limit. They are used to calculate the limit of a function. Constant Law. The limit of a constant is the constant itself.
The statement $\lim_{x\to a} f(x) = \infty$ tells us that whenever $x$ is close to (but not equal to) $a$, $f(x)$ is a large positive number. A limit with a value of $\infty$ means that as $x$ gets closer and closer to $a$, $f(x)$ gets bigger and bigger; it increases without bound.
An informal definition of left and right limits
Similarly, we say that $L$ is the right limit of the function $f$ at a point $a$ if we can get $f(x)$ as close as we want to $L$ by taking $x$ to the right of $a$ and close to $a$, but not equal to $a$. We write $\lim_{x\to a^+} f(x) = L$.
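The left-hand limit is written analogously with $x \to a^-$, and the ordinary two-sided limit exists exactly when the two one-sided limits agree:

$$\lim_{x\to a} f(x) = L \iff \lim_{x\to a^-} f(x) = \lim_{x\to a^+} f(x) = L.$$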
The symbol lim means we're taking a limit of something. The expression to the right of lim is the expression we're taking the limit of. In our case, that's the function f. The expression $x \to 3$ that comes below lim means that we take the limit of f as values of x approach 3.
We should study limits because the deep comprehension of limits creates the necessary prerequisites for understanding other concepts in calculus.
What is the main function of limit?
Limits are used to define continuity, integrals, and derivatives. The limit of a function is always concerned with the behavior of the function at a particular point. The limit of a function exists if and only if the Left-Hand Limit is equal to the Right-Hand Limit.
No, if a function has a limit as $x \to y$, the limit can have only one value, because if $\lim_{x\to y} f(x) = A$ and $\lim_{x\to y} f(x) = B$, then $A = B$.
Thus, we can now say that the limit of any constant is the same constant. Hence, $\lim_{x\to a} c = c$. Note: we must always remember that the limit of a constant value is always that same value. We should not ignore any of the conditions required for the existence of limits at any point.
laws of thought, traditionally, the three fundamental laws of logic: (1) the law of contradiction, (2) the law of excluded middle (or third), and (3) the principle of identity.
There are three laws upon which all logic is based, and they're attributed to Aristotle. These laws are the law of identity, law of non-contradiction, and law of the excluded middle. According to the law of identity, if a statement is true, then it must be true. |
Act of Parliament
An Act of Parliament is a statute enacted as primary legislation by a national or sub-national parliament. In the Republic of Ireland the term Act of the Oireachtas is used, and in the United States the term Act of Congress is used.
In Commonwealth countries, the term is used both in a narrow sense, as the formal description of a policy passed in certain territories, and in a wider (generic) sense for primary legislation passed in any country.
A draft Act of Parliament is known as a bill.
In territories with a Westminster system, most bills that have any possibility of becoming law are introduced into parliament by the government. This will usually happen following the publication of a "white paper", setting out the issues and the way in which the proposed new law is intended to deal with them. A bill may also be introduced into parliament without formal government backing; this is known as a private member's bill.
In territories with a multicameral parliament, most bills may be first introduced in any chamber. However, certain types of legislation are required, either by constitutional convention or by law, to be introduced into a specific chamber. For example, bills imposing a tax, or involving public expenditure, are introduced into the House of Commons in the United Kingdom, Canada's House of Commons and Ireland's Dáil as a matter of law. Conversely, bills proposed by the Law Commission and consolidation bills traditionally start in the House of Lords.
Once introduced, a bill must go through a number of stages before it can become law. In theory, this allows the bill's provisions to be debated in detail, and for amendments to the original bill to also be introduced, debated, and agreed to.
In bicameral parliaments, a bill that has been approved by the chamber into which it was introduced is then sent to the other chamber. Broadly speaking, each chamber must separately agree to the same version of the bill. Finally, the approved bill receives assent; in most territories this is merely a formality, and is often a function exercised by the head of state.
In some countries, such as in Spain and Portugal, the term for a bill differs depending on whether it is initiated by the government (when it is known as a "project"), or by the Parliament (a "proposition", i.e., a private member's bill).
In Australia, the bill passes through the following stages:
- First reading: This stage is a mere formality.
- Second reading: As in the UK, the stage involves a debate on the general principles of the bill and is followed by a vote. Again, the second reading of a Government bill is usually approved. A defeat for a Government bill on this reading signifies a major loss. If the bill is read a second time, it is then considered in detail.
- Consideration in detail: This usually takes place on the floor of the House. Generally, committees sit on the floor of the House and consider the bill in detail.
- Third reading: A debate on the final text of the bill, as amended. Very rarely do debates occur during this stage.
- Passage: The bill is then sent to the other House (to the Senate, if it originated in the House of Representatives; to the House of Representatives, if it is a Senate bill), which may amend it. If the other House amends the bill, the bill and amendments are sent back to the original House for a further stage. The State of Queensland's Parliament is unicameral and skips this and the remaining stages.
- Consideration of Senate/Representatives amendments: The House in which the bill originated considers the amendments made in the other House. It may agree to them, amend them, propose other amendments in lieu, or reject them. However, the Senate may not amend money bills, though it can "request" the House to make amendments. A bill may pass backwards and forwards several times at this stage, as each House amends or rejects changes proposed by the other. If each House insists on disagreeing with the other, the Bill is lost.
- Disagreement between the Houses: Often, when a bill cannot be passed in the same form by both Houses, it is "laid aside", i.e. abandoned. There is also a special constitutional procedure allowing the passage of the bill without the separate agreement of both houses. If the House twice passes the same bill, and the Senate twice fails to pass that bill (either through rejection or through the passage of unacceptable amendments), then the Governor-General may dissolve both Houses of Parliament simultaneously and call an election for the entire Parliament. This is called a double dissolution. After the election, if the House again passes the bill, but the deadlock between the Houses persists, then the Governor-General may convene a joint sitting of both Houses, where a final decision will be taken on the bill. Although the House and the Senate sit as a single body, bills passed at a joint sitting are treated as if they had been passed by each chamber separately. The procedure only applies if the bill originated in the House of Representatives. Six double dissolutions have occurred, though a joint sitting was only held once, in 1974.
- The bill is sent to the viceroy (the Governor-General for the Commonwealth; the Governor for a State; the Administrator for a Territory) for the royal assent. Certain bills must be reserved by the viceroy for the Queen's personal assent. Acts in the A.C.T. do not require this step.
In Canada, the bill passes through the following stages:
- First reading: This stage is a mere formality.
- Second reading: As in the UK, the stage involves a debate on the general principles of the bill and is followed by a vote. Again, the second reading of a government bill is usually approved. A defeat for a Government bill on this reading signifies a major loss. If the bill is read a second time, then it progresses to the committee stage.
- Committee stage: This usually takes place in a standing committee of the Commons or Senate.
- Standing committee: The standing committee is a permanent one; each committee deals with bills in specific subject areas. Canada's standing committees are similar to the UK's select committees.
- Special committee: A committee established for a particular purpose, be it the examination of a bill or a particular issue.
- Legislative committee: Similar to a special committee in that it is established for the consideration of a particular bill. The chairmanship is determined by the Speaker, rather than elected by the members of the committee. Not used in the Senate.
- Committee of the Whole: The whole house sits as a committee in the House of Commons or Senate. Most often used to consider appropriation bills, but can be used to consider any bill.
- The committee considers each clause of the bill, and may make amendments to it. Significant amendments may be made at committee stage. In some cases, whole groups of clauses are inserted or removed. However, if the Government holds a majority, almost all the amendments which are agreed to in committee will have been tabled by the Government to correct deficiencies in the bill or to enact changes to policy made since the bill was introduced (or, in some cases, to import material which was not ready when the bill was presented).
- Report stage: this takes place on the floor of the appropriate chamber, and allows the House or Senate to approve amendments made in committee, or to propose new ones.
- Third reading: A debate on the final text of the bill, as amended.
- Passage: The bill is then sent to the other House (to the Senate, if it originated in the House of Commons; to the Commons, if it is a Senate bill), where it will face a virtually identical process. If the other House amends the bill, the bill and amendments are sent back to the original House for a further stage.
- Consideration of Senate/Commons amendments: The House in which the bill originated considers the amendments made in the other House. It may agree to them, amend them, propose other amendments in lieu or reject them. If each House insists on disagreeing with the other, the Bill is lost.
- Disagreement between the Houses: There is no specific procedure under which the Senate's disagreement can be overruled by the Commons. The Senate's rejection is absolute.
The debate on each stage is actually debate on a specific motion. For the first reading, there is no debate. For the second reading, the motion is "That this bill be now read a second time and be referred to [name of committee]" and for third reading "That this bill be now read a third time and pass." In the Committee stage, each clause is called and motions for amendments to these clauses, or that the clause stand part of the bill are made. In the Report stage, the debate is on the motions for specific amendments.
Once a bill has passed both Houses in an identical form, it receives final, formal examination by the Governor General, who gives it the royal assent. Although the Governor General can refuse to assent to a bill or reserve the bill for the Queen at this stage, this power has never been exercised.
Bills being reviewed by Parliament are assigned numbers: 2 to 200 for government bills, 201 to 1000 for private member's bills, and 1001 up for private bills. They are preceded by C- if they originate in the House of Commons, or S- if they originate in the Senate. For example, Bill C-250 was a private member's bill introduced in the House. Bills C-1 and S-1 are pro forma bills, and are introduced at the beginning of each session in order to assert the right of each Chamber to manage its own affairs. They are introduced and read a first time, and then are dropped from the Order Paper.
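As a small illustration (not an official parliamentary tool), the numbering scheme can be expressed as a short helper that applies the ranges described above:

```python
# Classify a Canadian bill designation such as "C-250" or "S-1" using the
# ranges described above: 2-200 government bills, 201-1000 private
# member's bills, 1001+ private bills; C- = House of Commons, S- = Senate.

def classify_bill(designation: str) -> str:
    chamber_code, number = designation.split("-")
    chamber = {"C": "House of Commons", "S": "Senate"}[chamber_code.upper()]
    n = int(number)
    if n == 1:
        kind = "pro forma bill"
    elif 2 <= n <= 200:
        kind = "government bill"
    elif 201 <= n <= 1000:
        kind = "private member's bill"
    else:
        kind = "private bill"
    return f"{designation}: {kind}, originating in the {chamber}"

print(classify_bill("C-250"))  # private member's bill, House of Commons
print(classify_bill("S-1"))    # pro forma bill, Senate
```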
In India, the bill passes through the following stages:
- First reading - introduction stage: Any member, or member-in-charge of the bill, seeks the leave of the house to introduce a bill. If the bill is an important one, the minister may make a brief speech, stating its main features.
- Second reading - discussion stage: This stage consists of consideration of the bill and its provisions.
- Third reading - voting stage: This stage is confined only to arguments either in support of the bill or for its rejection as a whole, without referring to its details. After the bill is passed, it is sent to the other house.
- Bill in the other house (Rajya Sabha): After a bill, other than a money bill, is transmitted to the other house, it goes through all the stages in that house as that in the first house. But if the bill passed by one house is amended by the other house, it goes back to the originating house.
- President's approval: When a bill has been passed by both houses, it is sent to the President for his approval. The President can assent to the bill, withhold his assent, or return a bill other than a money bill for reconsideration. If the President gives his assent, the bill is published in The Gazette of India and becomes an Act from the date of his assent. If he withholds his assent, the bill is dropped; this is known as a pocket veto. The pocket veto is not written in the constitution and has been exercised only once, by President Zail Singh in 1986, over a postal bill under which the government wanted to open postal letters without a warrant. If the President returns the bill for reconsideration, Parliament must reconsider it, but if it is passed again and returned to him, he must give his assent.
In Ireland, the bill passes through the following stages:
- First stage: Private members must seek the permission of the house to introduce a bill. Government bills do not require approval and are therefore introduced at the second stage.
- Second stage – this involves a discussion of the general principle of the bill. It is introduced by the sponsoring minister (or in the case of a private member’s bill, by the member) and is followed by contributions from the floor of the house. Finally the debate is brought to a conclusion by voting on the proposal “that the bill now be read a second time”.
- Third stage, commonly referred to as the Committee Stage. This involves section by section scrutiny of the bill and any amendments which have been tabled. In the Dáil this usually takes place in a committee room and will involve examination by one of the select committees. In the Seanad, this stage takes place in the chamber. The Seanad may only make recommendations rather than amendments, in the case of a money bill.
- Fourth stage, commonly referred to as the Report Stage. At this point, a version of the bill incorporating any changes made at the Committee Stage is printed for consideration. In both houses, this stage is taken on the floor of the chamber. Amendments may be considered at this stage but must arise from matters discussed or changes made at the Committee Stage.
- Fifth stage: in practice this is a formality, taken with the fourth stage and referred to as the ‘Report and Final Stage’.
- Passage in the other house: the same stages are repeated in the other house and the bill is then deemed to have been passed, except that any bill initiated in the Dáil and amended by the Seanad must return to the Dáil for final consideration.
- Signature: once the bill has passed both houses it is sent to the President for signature. The signed copy is then enrolled in the Office of the Supreme Court.
In New Zealand, the bill passes through the following stages:
- First reading: MPs debate and vote on the bill. If a bill is approved, it passes on to the committee stage.
- Select committee stage: The bill is considered by a Select Committee, which scrutinises the bill in detail and hears public submissions on the matter. The Committee may recommend amendments to the bill.
- Second reading: The general principles of the bill are debated, and a vote is held. If the bill is approved, it is put before a Committee of the House.
- Committee of the House: The bill is debated and voted on, clause by clause, by the whole House sitting as a committee.
- Third reading: Summarising arguments are made, and a final vote is taken. If the bill is approved, it is passed to the Governor-General for royal assent. New Zealand has no upper house, and so no approval is necessary.
United Kingdom Parliament
A draft piece of legislation is called a bill; when this is passed by Parliament it becomes an Act and part of statute law. There are two types of bill and Act, public and private. Public Acts apply to the whole of the UK or a number of its constituent countries — England, Scotland, Wales and Northern Ireland. Private Acts are local and personal in their effect, giving special powers to bodies such as local authorities or making exceptions to the law in particular geographic areas.
In the United Kingdom Parliament, each bill passes through the following stages:
- Pre-legislative scrutiny: Not undertaken for all bills; usually a joint committee of both houses will review a bill and vote on amendments that the government can either accept or reject. The report from this stage can be influential in later stages as rejected recommendations from the committee are revived to be voted on.
- First reading: This is a formality; no vote occurs. The Bill is presented and ordered to be printed and, in the case of private members' bills, a date is set for second reading.
- Second reading: A debate on the general principles of the bill is followed by a vote.
- Committee stage: This usually takes place in a public bill committee in the Commons and on the Floor of the House in the Lords. The committee considers each clause of the bill, and may make amendments to it.
- Consideration (or report) stage: this takes place on the floor of the House, and is a further opportunity to amend the bill. Unlike committee stage, the House need not consider every clause of the bill, only those to which amendments have been tabled.
- Third reading: a debate on the final text of the bill, as amended. In the Lords, further amendments may be tabled at this stage.
- Passage: The bill is then sent to the other House (to the Lords, if it originated in the Commons; to the Commons, if it is a Lords bill), which may amend it.
- Consideration of Lords/Commons amendments: The House in which the bill originated considers the amendments made in the other House.
- Royal assent: the bill is passed with any amendments and becomes an act of parliament.
In the Scottish Parliament, bills pass through the following stages:
- Introduction: The Bill is introduced to the Parliament together with its accompanying documents — Explanatory Notes, a Policy Memorandum setting out the policy underlying the Bill and a Financial Memorandum setting out the costs and savings associated with it. Statements from the Presiding Officer and the member in charge of the Bill are also lodged, indicating whether the Bill is within the legislative competence of the Parliament.
- Stage one: The Bill is considered by one or more of the subject Committees of the Parliament, which normally take evidence from the bill's promoter and other interested parties before reporting to the Parliament on the principles of the Bill. Other Committees, notably the Finance and Subordinate Legislation Committees, may also feed in at this stage. The report from the Committee is followed by a debate in the full Parliament.
- Stage two: The Bill returns to the subject Committee where it is subject to line-by-line scrutiny and amendment. This is similar to the Committee Stage in the UK Parliament.
- Stage three: The Bill as amended by the Committee returns to the full Parliament. There is a further opportunity for amendment, followed by a debate on the whole Bill, at the end of which the Parliament decides whether to pass the Bill.
- Royal assent: After the Bill has been passed, the Presiding Officer submits it to Her Majesty for royal assent. However, he cannot do so until a four-week period has elapsed, during which the Law Officers of the Scottish Executive or UK Government can refer the Bill to the Supreme Court of the United Kingdom for a ruling on whether the Bill is within the powers of the Parliament.
There are special procedures for emergency bills, member's bills (similar to private member's bills in the UK Parliament), committee bills, and private bills.
In Singapore, the bill passes through the following stages before becoming an Act of Parliament.
- First Reading: The bill is introduced to Parliament, usually by a member of parliament. The unicameral parliament then discusses the bill, followed by a vote: at least half the votes must be aye for non-controversial bills, and two-thirds for controversial ones. If the bill passes the vote, it proceeds to the second reading.
- Second Reading: In this stage, the bill is further discussed and put to a second vote. If more than half of the votes are aye the bill proceeds to the select committee.
- Select Committee: The select committee consists not only of members of parliament, but also of people who could be affected if the bill is passed into law. This is to ensure equity and that the bill is fair to all. If the committee is in favor of the bill, it proceeds to the third reading.
- Third Reading: After the select committee has discussed the bill and is in favor of it, the bill is put to a final vote. At this juncture, if more than half of the votes are aye, the bill is sent to the President of Singapore, currently Tony Tan, for assent.
- Presidential Assent: The President must give his assent in order for the bill to be passed. If he approves it, the bill becomes a statute passed by the members of parliament, which is called an Act of Parliament.
Titles and citation of Acts
Acts passed by the Parliament of England did not originally have titles, and could only be formally cited by reference to the parliamentary session in which they were passed, with each individual Act being identified by a chapter number. Descriptive titles began to be added to the enrolled Acts by the official clerks, as a reference aid; over time, titles came to be included within the text of each bill. Since the mid-nineteenth century, it has also become common practice for Acts to have a short title, as a convenient alternative to the sometimes lengthy main titles. The Short Titles Act 1892, and its replacement the Short Titles Act 1896, gave short titles to many Acts which previously lacked them.
The numerical citation of Acts has also changed over time. The original method was based on the regnal year(s) in which the relevant parliament session met. This has been replaced in most territories by simple reference to the calendar year, with the first Act passed being chapter 1, and so on.
Economists use a vocabulary of maximizing utility to describe people’s preferences. In Consumer Choices, the level of utility that a person receives is described in numerical terms. An alternative approach to describing personal preferences, called indifference curves, avoids any need to use numbers to measure utility. By setting aside the assumption of putting a numerical valuation on utility—an assumption that many students and economists find uncomfortably unrealistic—the indifference curve framework helps to clarify the logic of the underlying model.
What Is an Indifference Curve?
People cannot really put a numerical value on their level of satisfaction. However, they can, and do, identify what choices would give them more, or less, or the same amount of satisfaction. An indifference curve shows combinations of goods that provide an equal level of utility or satisfaction. For example, Figure 6.5 presents three indifference curves that represent Lilly’s preferences for the tradeoffs that she faces in her two main relaxation activities: eating doughnuts and reading paperback books. Each indifference curve (Ul, Um, and Uh) represents one level of utility. First we will explore the meaning of one particular indifference curve and then we will look at the indifference curves as a group.
The Shape of an Indifference Curve
The indifference curve Um has four points labeled on it: A, B, C, and D. Since an indifference curve represents a set of choices that have the same level of utility, Lilly must receive an equal amount of utility, judged according to her personal preferences, from two books and 120 doughnuts (point A), from three books and 84 doughnuts (point B), from 11 books and 40 doughnuts (point C), or from 12 books and 35 doughnuts (point D). She would also receive the same utility from any of the unlabeled intermediate points along this indifference curve.
Indifference curves have a roughly similar shape in two ways: 1) they are downward sloping from left to right; 2) they are convex with respect to the origin. In other words, they are steeper on the left and flatter on the right. The downward slope of the indifference curve means that Lilly must give up some of one good to get more of the other, while holding utility constant. For example, points A and B sit on the same indifference curve Um, which means that they provide Lilly with the same level of utility. Thus, the marginal utility that Lilly would gain from, say, increasing her consumption of books from two to three must be equal to the marginal utility that she would lose if her consumption of doughnuts was cut from 120 to 84—so that her overall utility remains unchanged between points A and B. Indeed, the slope along an indifference curve is known as the marginal rate of substitution, which is the rate at which a person is willing to trade one good for another so that utility remains the same.
Indifference curves like Um are steeper on the left and flatter on the right. The reason behind this shape involves diminishing marginal utility—the notion that as a person consumes more of a good, the marginal utility from each additional unit becomes lower. Compare two different choices between points that all provide Lilly an equal amount of utility along the indifference curve Um: the choice between A and B, and between C and D. In both choices, Lilly consumes one more book, but between A and B her consumption of doughnuts falls by 36 (from 120 to 84) and between C and D it falls by only five (from 40 to 35). The reason for this difference is that points A and C are different starting points, and thus have different implications for marginal utility. At point A, Lilly has few books and many doughnuts. Thus, her marginal utility from an extra book will be relatively high while the marginal utility of additional doughnuts is relatively low—so on the margin, it will take a relatively large number of doughnuts to offset the utility from the marginal book. At point C, however, Lilly has many books and few doughnuts. From this starting point, her marginal utility gained from extra books will be relatively low, while the marginal utility lost from additional doughnuts would be relatively high—so on the margin, it will take a relatively smaller number of doughnuts to offset the change of one marginal book. In short, the slope of the indifference curve changes because the marginal rate of substitution—that is, the quantity of one good that would be traded for the other good to keep utility constant—also changes, as a result of diminishing marginal utility of both goods.
The Field of Indifference Curves
Each indifference curve represents the choices that provide a single level of utility. Every level of utility will have its own indifference curve. Thus, Lilly’s preferences will include an infinite number of indifference curves lying nestled together on the diagram—even though only three of the indifference curves, representing three levels of utility, appear on Figure 6.5. In other words, an infinite number of indifference curves are not drawn on this diagram—but you should remember that they exist.
Higher indifference curves represent a greater level of utility than lower ones. In Figure 6.5, indifference curve Ul can be thought of as a “low” level of utility, while Um is a “medium” level of utility and Uh is a “high” level of utility. All of the choices on indifference curve Uh are preferred to all of the choices on indifference curve Um, which in turn are preferred to all of the choices on Ul.
To understand why higher indifference curves are preferred to lower ones, compare point B on indifference curve Um to point F on indifference curve Uh. Point F has greater consumption of both books (five to three) and doughnuts (100 to 84), so point F is clearly preferable to point B. Given the definition of an indifference curve—that all the points on the curve have the same level of utility—if point F on indifference curve Uh is preferred to point B on indifference curve Um, then it must be true that all points on indifference curve Uh have a higher level of utility than all points on Um. More generally, for any point on a lower indifference curve, like Ul, you can identify a point on a higher indifference curve like Um or Uh that has a higher consumption of both goods. Since one point on the higher indifference curve is preferred to one point on the lower curve, and since all the points on a given indifference curve have the same level of utility, it must be true that all points on higher indifference curves have greater utility than all points on lower indifference curves.
These arguments about the shapes of indifference curves and about higher or lower levels of utility do not require any numerical estimates of utility, either by the individual or by anyone else. They are only based on the assumptions that when people have less of one good they need more of another good to make up for it, if they are keeping the same level of utility, and that as people have more of a good, the marginal utility they receive from additional units of that good will diminish. Given these gentle assumptions, a field of indifference curves can be mapped out to describe the preferences of any individual.
The Individuality of Indifference Curves
Each person determines his or her own preferences and utility. Thus, while indifference curves have the same general shape—they slope down, and the slope is steeper on the left and flatter on the right—the specific shape of indifference curves can be different for every person. Figure 6.5, for example, applies only to Lilly’s preferences. Indifference curves for other people would probably travel through different points.
Utility-Maximizing with Indifference Curves
People seek the highest level of utility, which means that they wish to be on the highest possible indifference curve. However, people are limited by their budget constraints, which show what tradeoffs are actually possible.
Maximizing Utility at the Highest Indifference Curve
Return to the situation of Lilly’s choice between paperback books and doughnuts. Say that books cost $6, doughnuts are 50 cents each, and that Lilly has $60 to spend. This information provides the basis for the budget line shown in Figure 6.6. Along with the budget line are shown the three indifference curves from Figure 6.5. What is Lilly’s utility-maximizing choice? Several possibilities are identified in the diagram.
The choice of F with five books and 100 doughnuts is highly desirable, since it is on the highest indifference curve Uh of those shown in the diagram. However, it is not affordable given Lilly’s budget constraint. The choice of H with three books and 70 doughnuts on indifference curve Ul is a wasteful choice, since it is inside Lilly’s budget set, and as a utility-maximizer, Lilly will always prefer a choice on the budget constraint itself. Choices B and G are both on the opportunity set. However, choice G of six books and 48 doughnuts is on the lower indifference curve Ul, while choice B of three books and 84 doughnuts is on the indifference curve Um. If Lilly were to start at choice G and compare the marginal utility she was deriving from doughnuts and books, she would decide that some additional doughnuts and fewer books would make her happier—which would cause her to move toward her preferred choice B. Given the combination of Lilly’s personal preferences, as identified by her indifference curves, and Lilly’s opportunity set, which is determined by prices and income, B will be her utility-maximizing choice.
The highest achievable indifference curve touches the opportunity set at a single point of tangency. Since an infinite number of indifference curves exist, even if only a few of them are drawn on any given diagram, there will always exist one indifference curve that touches the budget line at a single point of tangency. All higher indifference curves, like Uh, will be completely above the budget line and, although the choices on that indifference curve would provide higher utility, they are not affordable given the budget set. All lower indifference curves, like Ul, will cross the budget line in two separate places. When one indifference curve crosses the budget line in two places, however, there will be another, higher, attainable indifference curve sitting above it that touches the budget line at only one point of tangency.
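A quick way to check the reasoning about points F, H, B, and G is to test each bundle against Lilly’s budget constraint. The sketch below uses the prices and income given in the text; the point coordinates are read off Figure 6.6:

```python
# Lilly's budget: books cost $6, doughnuts cost $0.50, income is $60.
# Classify each labeled point as unaffordable, inside the budget set,
# or exactly on the budget line.

P_BOOK, P_DOUGHNUT, INCOME = 6.00, 0.50, 60.00

points = {
    "F": (5, 100),  # (books, doughnuts)
    "H": (3, 70),
    "B": (3, 84),
    "G": (6, 48),
}

for name, (books, doughnuts) in points.items():
    spending = P_BOOK * books + P_DOUGHNUT * doughnuts
    if spending > INCOME:
        status = "unaffordable"
    elif spending < INCOME:
        status = "inside the budget set (wasteful)"
    else:
        status = "on the budget line"
    print(f"{name}: spends ${spending:.2f} -> {status}")
# F costs $80 (unaffordable), H costs $53 (inside the budget set),
# and B and G each cost exactly $60 (on the budget line).
```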
Changes in Income
A rise in income causes the budget constraint to shift to the right. In graphical terms, the new budget constraint will now be tangent to a higher indifference curve, representing a higher level of utility. A reduction in income will cause the budget constraint to shift to the left, which will cause it to be tangent to a lower indifference curve, representing a reduced level of utility. If income rises by, for example, 50%, exactly how much will a person alter consumption of books and doughnuts? Will consumption of both goods rise by 50%, or will the quantity of one good rise substantially, while the quantity of the other good rises only a little, or even declines?
Since personal preferences and the shape of indifference curves are different for each individual, the response to changes in income will be different, too. For example, consider the preferences of Manuel and Natasha in Figure 6.7 (a) and Figure 6.7 (b). They each start with an identical income of $40, which they spend on yogurts that cost $1 and rental movies that cost $4. Thus, they face identical budget constraints. However, based on Manuel’s preferences, as revealed by his indifference curves, his utility-maximizing choice on the original budget set occurs where his opportunity set is tangent to the highest possible indifference curve at W, with three movies and 28 yogurts, while Natasha’s utility-maximizing choice on the original budget set at Y will be seven movies and 12 yogurts.
Now, say that income rises to $60 for both Manuel and Natasha, so their budget constraints shift to the right. As shown in Figure 6.7 (a), Manuel’s new utility maximizing choice at X will be seven movies and 32 yogurts—that is, Manuel will choose to spend most of the extra income on movies. Natasha’s new utility maximizing choice at Z will be eight movies and 28 yogurts—that is, she will choose to spend most of the extra income on yogurt. In this way, the indifference curve approach allows for a range of possible responses. However, if both goods are normal goods, then the typical response to a higher level of income will be to purchase more of them—although exactly how much more is a matter of personal preference. If one of the goods is an inferior good, the response to a higher level of income will be to purchase less of it.
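The arithmetic behind Figure 6.7 can be verified the same way: each stated utility-maximizing bundle should exhaust the corresponding income exactly. A small sketch with the prices and quantities given in the text:

```python
# Movies cost $4, yogurts cost $1. Verify that each chosen bundle
# spends the stated income exactly.
P_MOVIE, P_YOGURT = 4, 1

choices = [
    ("Manuel, W (income $40)", 40, 3, 28),   # (label, income, movies, yogurts)
    ("Natasha, Y (income $40)", 40, 7, 12),
    ("Manuel, X (income $60)", 60, 7, 32),
    ("Natasha, Z (income $60)", 60, 8, 28),
]

for label, income, movies, yogurts in choices:
    spending = P_MOVIE * movies + P_YOGURT * yogurts
    assert spending == income, f"{label} does not exhaust the budget"
    print(f"{label}: {movies} movies + {yogurts} yogurts = ${spending}")
```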
Responses to Price Changes: Substitution and Income Effects
A higher price for a good will cause the budget constraint to shift to the left, so that it is tangent to a lower indifference curve representing a reduced level of utility. Conversely, a lower price for a good will cause the opportunity set to shift to the right, so that it is tangent to a higher indifference curve representing an increased level of utility. Exactly how much a change in price will change the quantity demanded of each good will depend on personal preferences.
Anyone who faces a change in price will experience two interlinked motivations: a substitution effect and an income effect. The substitution effect is that when a good becomes more expensive, people seek out substitutes. If oranges become more expensive, fruit-lovers scale back on oranges and eat more apples, grapefruit, or raisins. Conversely, when a good becomes cheaper, people substitute toward consuming more. If oranges get cheaper, people fire up their juicing machines and ease off on other fruits and foods. The income effect refers to how a change in the price of a good alters the effective buying power of one’s income. If the price of a good that you have been buying falls, then in effect your buying power has risen—you are able to purchase more goods. Conversely, if the price of a good that you have been buying rises, then the buying power of a given amount of income is diminished. (One common source of confusion is that the “income effect” does not refer to a change in actual income. Instead, it refers to the situation in which the price of a good changes, and thus the quantities of goods that can be purchased with a fixed amount of income change. It might be more accurate to call the “income effect” a “buying power effect,” but the “income effect” terminology has been used for decades, and it is not going to change during this economics course.) Whenever a price changes, consumers feel the pull of both substitution and income effects at the same time.
Using indifference curves, you can illustrate the substitution and income effects on a graph. In Figure 6.8, Ogden faces a choice between two goods: haircuts or personal pizzas. Haircuts cost $20, personal pizzas cost $6, and he has $120 to spend.
The price of haircuts rises to $30. Ogden starts at choice A on the higher opportunity set and the higher indifference curve. After the price of haircuts increases, he chooses B on the lower opportunity set and the lower indifference curve. Point B with two haircuts and 10 personal pizzas is immediately below point A with three haircuts and 10 personal pizzas, showing that Ogden reacted to a higher price of haircuts by cutting back only on haircuts, while leaving his consumption of pizza unchanged.
The dashed line in the diagram, and point C, are used to separate the substitution effect and the income effect. To understand their function, start by thinking about the substitution effect with this question: How would Ogden change his consumption if the relative prices of the two goods changed, but this change in relative prices did not affect his utility? The slope of the budget constraint is determined by the relative price of the two goods; thus, the slope of the original budget line is determined by the original relative prices, while the slope of the new budget line is determined by the new relative prices. With this thought in mind, the dashed line is a graphical tool inserted in a specific way: It is inserted so that it is parallel with the new budget constraint, so it reflects the new relative prices, but it is tangent to the original indifference curve, so it reflects the original level of utility or buying power.
Thus, the movement from the original choice (A) to point C is a substitution effect; it shows the choice that Ogden would make if relative prices shifted (as shown by the different slope between the original budget set and the dashed line) but if buying power did not shift (as shown by being tangent to the original indifference curve). The substitution effect will encourage people to shift away from the good which has become relatively more expensive—in Ogden’s case, the haircuts on the vertical axis—and toward the good which has become relatively less expensive—in this case, the pizza on the horizontal axis. The two arrows labeled with “s” for “substitution effect,” one on each axis, show the direction of this movement.
The income effect is the movement from point C to B, which shows how Ogden reacts to a reduction in his buying power from the higher indifference curve to the lower indifference curve, but holding constant the relative prices (because the dashed line has the same slope as the new budget constraint). In this case, where the price of one good increases, buying power is reduced, so the income effect means that consumption of both goods should fall (if they are both normal goods, which it is reasonable to assume unless there is reason to believe otherwise). The two arrows labeled with “i” for “income effect,” one on each axis, show the direction of this income effect movement.
Now, put the substitution and income effects together. When the price of haircuts increased, Ogden consumed fewer of them, for two reasons shown in the exhibit: the substitution effect of the higher price led him to consume less, and the income effect of the higher price also led him to consume less. However, when the price of haircuts increased, Ogden consumed the same quantity of pizza. The substitution effect of a higher price for haircuts meant that pizza became relatively less expensive (compared to haircuts), and this factor, taken alone, would have encouraged Ogden to consume more pizza. However, the income effect of the higher price meant that he wished to consume less of both goods, and this factor, taken alone, would have encouraged Ogden to consume less pizza. As shown in Figure 6.8, in this particular example the substitution effect and income effect on Ogden’s consumption of pizza are offsetting—so he ends up consuming the same quantity of pizza after the price increase for haircuts as before.
The size of these income and substitution effects will differ from person to person, depending on individual preferences. For example, if Ogden’s substitution effect away from haircuts and toward pizza is especially strong, and outweighs the income effect, then a higher price for haircuts might lead to increased consumption of pizza. This case would be drawn on the graph so that the point of tangency between the new budget constraint and the relevant indifference curve occurred below point B and to the right. Conversely, if the substitution effect away from haircuts and toward pizza is not as strong, and the income effect is relatively stronger, then Ogden will be more likely to react to the higher price of haircuts by consuming less of both goods. In this case, his optimal choice after the price change will be above and to the left of choice B on the new budget constraint.
Although the substitution and income effects are often discussed as a sequence of events, it should be remembered that they are twin components of a single cause—a change in price. Although you can analyze them separately, the two effects are always proceeding hand in hand, happening at the same time.
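The text never specifies Ogden’s utility function, but the decomposition can be made concrete under an assumed Cobb-Douglas utility U = pizza^0.5 × haircuts^0.5, which happens to reproduce points A and B from Figure 6.8 exactly. The sketch below computes the compensated point C and splits the total change into substitution and income effects:

```python
# Hicksian decomposition under an *assumed* Cobb-Douglas utility
# U = x**a * y**(1 - a), with x = pizzas, y = haircuts, and a = 0.5.
# Income m = 120; pizza price px = 6; haircut price py rises 20 -> 30.

a, m, px = 0.5, 120.0, 6.0
py_old, py_new = 20.0, 30.0

def marshallian(px, py, m):
    """Utility-maximizing bundle: constant expenditure shares for Cobb-Douglas."""
    return a * m / px, (1 - a) * m / py

def hicksian(px, py, u):
    """Cheapest bundle reaching utility u at prices (px, py)."""
    x = u * ((a / (1 - a)) * (py / px)) ** (1 - a)
    y = u * (((1 - a) / a) * (px / py)) ** a
    return x, y

A = marshallian(px, py_old, m)        # original choice: (10 pizzas, 3 haircuts)
B = marshallian(px, py_new, m)        # new choice:      (10 pizzas, 2 haircuts)
u_A = A[0] ** a * A[1] ** (1 - a)     # original utility level
C = hicksian(px, py_new, u_A)         # new prices, original utility

print(f"A = {A}, B = {B}, C = ({C[0]:.2f}, {C[1]:.2f})")
print(f"substitution effect (A -> C): pizzas {C[0]-A[0]:+.2f}, haircuts {C[1]-A[1]:+.2f}")
print(f"income effect       (C -> B): pizzas {B[0]-C[0]:+.2f}, haircuts {B[1]-C[1]:+.2f}")
# The two effects on pizza (+2.25 and -2.25) cancel, which is why
# Ogden's pizza consumption is unchanged in this example.
```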
Indifference Curves with Labor-Leisure and Intertemporal Choices
The concept of an indifference curve applies to tradeoffs in any household choice, including the labor-leisure choice or the intertemporal choice between present and future consumption. In the labor-leisure choice, each indifference curve shows the combinations of leisure and income that provide a certain level of utility. In an intertemporal choice, each indifference curve shows the combinations of present and future consumption that provide a certain level of utility. The general shapes of the indifference curves—downward sloping, steeper on the left and flatter on the right—also remain the same.
A Labor-Leisure Example
Petunia is working at a job that pays $12 per hour but she gets a raise to $20 per hour. After family responsibilities and sleep, she has 80 hours per week available for work or leisure. As shown in Figure 6.9, the highest level of utility for Petunia, on her original budget constraint, is at choice A, where it is tangent to the lower indifference curve (Ul). Point A has 30 hours of leisure and thus 50 hours per week of work, with income of $600 per week (that is, 50 hours of work at $12 per hour). Petunia then gets a raise to $20 per hour, which shifts her budget constraint to the right. Her new utility-maximizing choice occurs where the new budget constraint is tangent to the higher indifference curve Uh. At B, Petunia has 40 hours of leisure per week and works 40 hours, with income of $800 per week (that is, 40 hours of work at $20 per hour).
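To confirm the arithmetic behind points A and B, here is a quick sketch using only the hours and wages given in the text:

```python
# Petunia has 80 hours per week for work or leisure; income = wage * work.
HOURS = 80

for point, wage, leisure in [("A", 12, 30), ("B", 20, 40)]:
    work = HOURS - leisure
    income = wage * work
    print(f"{point}: {leisure} h leisure, {work} h work at ${wage}/h -> ${income}/week")
# A: 50 hours at $12 = $600; B: 40 hours at $20 = $800, as in the text.
```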
Substitution and income effects provide a vocabulary for discussing how Petunia reacts to a higher hourly wage. The dashed line serves as the tool for separating the two effects on the graph.
The substitution effect tells how Petunia would have changed her hours of work if her wage had risen, so that income was relatively cheaper to earn and leisure was relatively more expensive, but if she had remained at the same level of utility. The slope of the budget constraint in a labor-leisure diagram is determined by the wage rate. Thus, the dashed line is carefully inserted with the slope of the new opportunity set, reflecting the labor-leisure tradeoff of the new wage rate, but tangent to the original indifference curve, showing the same level of utility or “buying power.” The shift from original choice A to point C, which is the point of tangency between the original indifference curve and the dashed line, shows that because of the higher wage, Petunia will want to consume less leisure and more income. The “s” arrows on the horizontal and vertical axes of Figure 6.9 show the substitution effect on leisure and on income.
The income effect is that the higher wage, by shifting the labor-leisure budget constraint to the right, makes it possible for Petunia to reach a higher level of utility. The income effect is the movement from point C to point B; that is, it shows how Petunia’s behavior would change in response to a higher level of utility or “buying power,” with the wage rate remaining the same (as shown by the dashed line being parallel to the new budget constraint). The income effect, encouraging Petunia to consume both more leisure and more income, is drawn with arrows on the horizontal and vertical axis of Figure 6.9.
Putting these effects together, Petunia responds to the higher wage by moving from choice A to choice B. This movement involves choosing more income, both because the substitution effect of higher wages has made income relatively cheaper or easier to earn, and because the income effect of higher wages has made it possible to have more income and more leisure. Her movement from A to B also involves choosing more leisure because, according to Petunia’s preferences, the income effect that encourages choosing more leisure is stronger than the substitution effect that encourages choosing less leisure.
Figure 6.9 represents only Petunia’s preferences. Other people might make other choices. For example, a person whose substitution and income effects on leisure exactly counterbalanced each other might react to a higher wage with a choice like D, exactly above the original choice A, which means taking all of the benefit of the higher wages in the form of income while working the same number of hours. Yet another person, whose substitution effect on leisure outweighed the income effect, might react to a higher wage by making a choice like F, where the response to higher wages is to work more hours and earn much more income. To represent these different preferences, you could easily draw the indifference curve Uh to be tangent to the new budget constraint at D or F, rather than at B.
An Intertemporal Choice Example
Quentin has saved up $10,000. He is thinking about spending some or all of it on a vacation in the present, and saving the rest for another big vacation five years from now. Over those five years, he expects to earn a total 80% rate of return. Figure 6.10 shows Quentin’s budget constraint and his indifference curves between present consumption and future consumption. The highest level of utility that Quentin can achieve at his original intertemporal budget constraint occurs at point A, where he is consuming $6,000, saving $4,000 for the future, and expecting with the accumulated interest to have $7,200 for future consumption (that is, $4,000 in current financial savings plus the 80% rate of return).
However, Quentin has just realized that his expected rate of return was unrealistically high. A more realistic expectation is that over five years he can earn a total return of 30%. In effect, his intertemporal budget constraint has pivoted to the left, so that his original utility-maximizing choice is no longer available. Will Quentin react to the lower rate of return by saving more, or less, or the same amount? Again, the language of substitution and income effects provides a framework for thinking about the motivations behind various choices. The dashed line, which is a graphical tool to separate the substitution and income effect, is carefully inserted with the same slope as the new opportunity set, so that it reflects the changed rate of return, but it is tangent to the original indifference curve, so that it shows no change in utility or “buying power.”
The substitution effect tells how Quentin would have altered his consumption because the lower rate of return makes future consumption relatively more expensive and present consumption relatively cheaper. The movement from the original choice A to point C shows how Quentin substitutes toward more present consumption and less future consumption in response to the lower interest rate, with no change in utility. The substitution arrows on the horizontal and vertical axes of Figure 6.10 show the direction of the substitution effect motivation. The substitution effect suggests that, because of the lower interest rate, Quentin should consume more in the present and less in the future.
Quentin also has an income effect motivation. The lower rate of return shifts the budget constraint to the left, which means that Quentin’s utility or “buying power” is reduced. The income effect (assuming normal goods) encourages less of both present and future consumption. The impact of the income effect on reducing present and future consumption in this example is shown with “i” arrows on the horizontal and vertical axis of Figure 6.10.
Taking both effects together, the substitution effect is encouraging Quentin toward more present and less future consumption, because present consumption is relatively cheaper, while the income effect is encouraging him toward less present and less future consumption, because the lower interest rate pushes him to a lower level of utility. For Quentin’s personal preferences, the substitution effect is stronger, so that, overall, he reacts to the lower rate of return with more present consumption and less savings at choice B. However, other people might have different preferences. They might react to a lower rate of return by choosing the same level of present consumption and savings at choice D, or by choosing less present consumption and more savings at a point like F. For these other sets of preferences, the income effect of a lower rate of return on present consumption would be relatively stronger, while the substitution effect would be relatively weaker.
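The budget arithmetic in this example is easy to verify. A small sketch, using only the dollar figures and rates of return given in the text:

```python
# Quentin's intertemporal budget: $10,000 split between present consumption
# and savings; savings grow by the total return over five years.
WEALTH = 10_000

def future_consumption(present_consumption, total_return):
    savings = WEALTH - present_consumption
    return savings * (1 + total_return)

# Original choice A under the optimistic 80% total return:
print(future_consumption(6_000, 0.80))  # 7200.0, matching point A

# Under the more realistic 30% return, the same $4,000 of savings
# buys far less future consumption -- the constraint pivots left:
print(future_consumption(6_000, 0.30))  # 5200.0
```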
Sketching Substitution and Income Effects
Indifference curves provide an analytical tool for looking at all the choices that provide a single level of utility. They eliminate any need for placing numerical values on utility and help to illuminate the process of making utility-maximizing decisions. They also provide the basis for a more detailed investigation of the complementary motivations that arise in response to a change in a price, wage or rate of return—namely, the substitution and income effects.
If you are finding it a little tricky to sketch diagrams that show substitution and income effects so that the points of tangency all come out correctly, it may be useful to follow this procedure.
Step 1. Begin with a budget constraint showing the choice between two goods, which this example will call “candy” and “movies.” Choose a point A which will be the optimal choice, where the indifference curve will be tangent—but it is often easier not to draw in the indifference curve just yet. See Figure 6.11.
Step 2. Now the price of movies changes: let’s say that it rises. That shifts the budget set inward. You know that the higher price will push the decision-maker down to a lower level of utility, represented by a lower indifference curve. But at this stage, draw only the new budget set. See Figure 6.12.
Step 3. The key tool in distinguishing between substitution and income effects is to insert a dashed line, parallel to the new budget line. This line is a graphical tool that allows you to distinguish between the two changes: (1) the effect on consumption of the two goods of the shift in prices—with the level of utility remaining unchanged—which is the substitution effect; and (2) the effect on consumption of the two goods of shifting from one indifference curve to the other—with relative prices staying unchanged—which is the income effect. The dashed line is inserted in this step. The trick is to have the dashed line travel close to the original choice A, but not directly through point A. See Figure 6.13.
Step 4. Now, draw the original indifference curve, so that it is tangent to both point A on the original budget line and to a point C on the dashed line. Many students find it easiest to first select the tangency point C where the original indifference curve touches the dashed line, and then to draw the original indifference curve through A and C. The substitution effect is illustrated by the movement along the original indifference curve as prices change but the level of utility holds constant, from A to C. As expected, the substitution effect leads to less consumed of the good that is relatively more expensive, as shown by the “s” (substitution) arrow on the vertical axis, and more consumed of the good that is relatively less expensive, as shown by the “s” arrow on the horizontal axis. See Figure 6.14.
Step 5. With the substitution effect in place, now choose utility-maximizing point B on the new opportunity set. When you choose point B, think about whether you wish the substitution or the income effect to have a larger impact on the good (in this case, candy) on the horizontal axis. If you choose point B to be directly in a vertical line with point A (as is illustrated here), then the income effect will exactly offset the substitution effect on the horizontal axis. If you insert point B so that it lies a little to the right of the original point A, then the substitution effect will exceed the income effect. If you insert point B so that it lies a little to the left of point A, then the income effect will exceed the substitution effect. The income effect is the movement from C to B, showing how choices shifted as a result of the decline in buying power and the movement between two levels of utility, with relative prices remaining the same. With normal goods, the negative income effect means less consumed of each good, as shown by the direction of the “i” (income effect) arrows on the vertical and horizontal axes. See Figure 6.15.
In sketching substitution and income effect diagrams, you may wish to practice some of the following variations: (1) the price falls instead of rising; (2) the price change affects the good on either the vertical or the horizontal axis; (3) sketch these diagrams so that the substitution effect exceeds the income effect, so that the income effect exceeds the substitution effect, and so that the two effects are equal.
One final note: The helpful dashed line can be drawn tangent to the new indifference curve, and parallel to the original budget line, rather than tangent to the original indifference curve and parallel to the new budget line. Some students find this approach more intuitively clear. The answers you get about the direction and relative sizes of the substitution and income effects, however, should be the same.
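The five-step sketching procedure can also be checked in code. Below is a minimal plotting sketch in Python with matplotlib; it assumes a Cobb-Douglas utility (so that points A, B, and C have closed forms) and made-up prices and income, which are our own choices rather than anything from the text:

```python
import numpy as np
import matplotlib.pyplot as plt

# Made-up setup: candy (x-axis) costs $1, movies (y-axis) cost $2 and then
# rise to $4, with income $40. An assumed Cobb-Douglas utility
# U = x**a * y**(1 - a) with a = 0.5 gives closed forms for A, B, and C.
a, m, px = 0.5, 40.0, 1.0
py_old, py_new = 2.0, 4.0

A = (a * m / px, (1 - a) * m / py_old)   # original optimum: (20, 10)
B = (a * m / px, (1 - a) * m / py_new)   # new optimum: (20, 5), directly below A
u_A = A[0] ** a * A[1] ** (1 - a)        # utility at A
u_B = B[0] ** a * B[1] ** (1 - a)        # utility at B
C = (u_A * ((a / (1 - a)) * (py_new / px)) ** (1 - a),
     u_A * (((1 - a) / a) * (px / py_new)) ** a)   # compensated (Hicksian) point
m_comp = px * C[0] + py_new * C[1]       # spending needed to afford C

x = np.linspace(1, 45, 400)
plt.plot(x, (m - px * x) / py_old, label="original budget line")
plt.plot(x, (m - px * x) / py_new, label="new budget line")
plt.plot(x, (m_comp - px * x) / py_new, "--", label="dashed line (step 3)")

# Indifference curves: solve U = x**a * y**(1 - a) for y.
for u, label in [(u_A, "original indifference curve"),
                 (u_B, "new indifference curve")]:
    plt.plot(x, (u / x ** a) ** (1 / (1 - a)), ":", label=label)

for point, name in [(A, "A"), (B, "B"), (C, "C")]:
    plt.plot(*point, "ko")
    plt.annotate(name, point, textcoords="offset points", xytext=(6, 6))

plt.xlim(0, 45)
plt.ylim(0, 25)
plt.xlabel("candy")
plt.ylabel("movies")
plt.legend()
plt.show()
```

With a = 0.5 the income and substitution effects on candy exactly offset, so B lands directly below A, matching the case illustrated in Step 5.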
Key Concepts and Summary
An indifference curve is drawn on a budget constraint diagram that shows the tradeoffs between two goods. All points along a single indifference curve provide the same level of utility. Higher indifference curves represent higher levels of utility. Indifference curves slope downward because, if utility is to remain the same at all points along the curve, a reduction in the quantity of the good on the vertical axis must be counterbalanced by an increase in the quantity of the good on the horizontal axis (or vice versa). Indifference curves are steeper on the far left and flatter on the far right, because of diminishing marginal utility.
The utility-maximizing choice along a budget constraint will be the point of tangency where the budget constraint touches an indifference curve at a single point. A change in the price of any good has two effects: a substitution effect and an income effect. The substitution effect motivation encourages a utility-maximizer to buy less of what is relatively more expensive and more of what is relatively cheaper. The income effect motivation encourages a utility-maximizer to buy more of both goods if utility rises or less of both goods if utility falls (if they are both normal goods).
In a labor-leisure choice, every wage change has a substitution and an income effect. The substitution effect of a wage increase is to choose more income, since it is cheaper to earn, and less leisure, since its opportunity cost has increased. The income effect of a wage increase is to choose more of leisure and income, since they are both normal goods. The substitution and income effects of a wage decrease would reverse these directions.
In an intertemporal consumption choice, every interest rate change has a substitution and an income effect. The substitution effect of an interest rate increase is to choose more future consumption, since it is now cheaper to earn future consumption and less present consumption (more savings), since the opportunity cost of present consumption in terms of what is being given up in the future has increased. The income effect of an interest rate increase is to choose more of both present and future consumption, since they are both normal goods. The substitution and income effects of an interest rate decrease would reverse these directions. |
Ovals, or ellipses, look like horizontally elongated circles. Finding the perimeter (circumference) exactly requires some rather complicated calculus; however, a much simpler formula provides a rough estimate that typically falls within about 5 per cent of the exact value. The rough-estimate equation, C ≈ 2π√((a² + b²)/2), begins with finding (a) the semi-major axis (the longer, horizontal radius) and (b) the semi-minor axis (the shorter, vertical radius). The mathematical operations used in this equation include squaring and adding the axes, as well as division, square root and multiplication.
Things you need
- Oval diagram
- Ruler
- Calculator with a square-root function
Find the oval's (a) semi-major axis: use the ruler to measure the longer, horizontal diameter from one side of the perimeter to the other, going through the centre point of the oval, then halve it. For example, a horizontal diameter of 20 feet gives a = 10 feet. Make a note of the semi-major axis length on the oval diagram.
Find the semi-minor axis (b) by measuring from the centre point of the oval to its edge along the shorter, vertical axis (half the vertical diameter). Example: b = 6 feet. Jot down the semi-minor axis length on the oval diagram.
Square both the semi-major and semi-minor axes and then add them together. Example: (a squared + b squared); (10 squared + 6 squared) = (100 + 36) = 136. Write down this number next to the oval diagram or on a separate piece of paper.
Taking the value found, either multiply it by 1/2 or divide it by 2. Example 1/2(a squared + b squared); 1/2 x (100 + 36) = 1/2 x 136 or 136 / 2 = 68. Record this value.
Using a calculator with the square root function, find the square root of the quotient, which will give a decimal value. Example: √1/2(a squared + b squared); √(1/2 × (100 + 36)) = √68 = 8.2462113. Write down the square root.
For this final step, multiply 2π by the square root value. Note that this value will also contain decimals. Example: 2π√1/2(a squared + b squared); 2π × √(1/2 × (100 + 36)) = 2π × √68 = 2 x 3.14 x 8.2462113 = 51.786207. The circumference, or perimeter, of the oval is roughly 51.79 feet. Record the final answer inside or next to the diagram of the oval.
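For anyone who prefers to let a computer do the arithmetic, here is a minimal Python sketch of the same rough-estimate formula (the function name is ours, chosen for illustration):

```python
import math

def oval_perimeter_estimate(a, b):
    """Rough ellipse perimeter: C ≈ 2·pi·sqrt((a² + b²) / 2)."""
    return 2 * math.pi * math.sqrt((a**2 + b**2) / 2)

# a = 10 ft, b = 6 ft (the worked example above) gives roughly 51.8 ft
print(oval_perimeter_estimate(10, 6))
```

Using the full value of π rather than 3.14 gives about 51.81 instead of 51.79 feet; the difference is negligible for a rough estimate.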
Below you can find two calculators that you can use to check answers to chemistry problems. The first one calculates the pH of a strong acid or strong base solution, and the second one calculates the pH of a weak acid or weak base solution. Some theory and an explanation of the calculations with formulas can be found below the calculators.
pH of a solution
pH means 'potential of hydrogen' or 'power of hydrogen'. pH is the negative of the base 10 logarithm of the hydrogen ion activity: pH = −log₁₀ a(H+).
In most chemistry problems, however, we do not use hydrogen ion activity but molar concentration, or molarity. How are these two related? Ion activity depends on ion concentration, and this is described by the equation

a(H+) = γ(H+) · [H+]

where:
a(H+) – hydrogen ion activity
γ(H+) – hydrogen ion activity coefficient
[H+] – hydrogen ion concentration
The activity coefficient is a function of the ion concentration and approaches 1 as the solution becomes increasingly dilute. For dilute (ideal) solutions, the standard state of the solute is 1.00 M, so its molarity equals its activity. That's why for most problems that assume ideal solutions we can use the base 10 logarithm of the molar concentration, not the activity.
Why do we need pH at all? pH is a measure used to specify acidity or basicity of an aqueous solution. Whether an aqueous solution reacts as an acid or a base depends on its hydrogen ion (H+) content.
However, even chemically pure, neutral water contains some hydrogen ions¹ due to the self-ionization (autodissociation) of water: H2O ⇌ H+ + OH−.
It is known that at equilibrium under standard conditions (750 mmHg and 25°C), 1 L of pure water contains 10⁻⁷ mol of H+ ions and 10⁻⁷ mol of OH− ions; hence, water at standard temperature and pressure (STP) has a pH of 7. Acids release hydrogen ions, so their aqueous solutions contain more hydrogen ions than neutral water and are considered acidic, with a pH less than 7. Bases accept hydrogen ions (they bind some of the hydrogen ions formed from the dissociation of the water), so their aqueous solutions contain fewer hydrogen ions than neutral water and are considered basic, with a pH more than 7. Note that the pH scale is logarithmic (a difference of one means a difference of an order of magnitude, or tenfold) and inversely indicates the concentration of hydrogen ions in the solution. A lower pH indicates a higher concentration of hydrogen ions and vice versa.
The calculation of pH using molar concentration is different in the case of a strong acid/base and weak acid/base. More on this below.
Strong acids and bases are compounds that, for practical purposes, completely dissociate into their ions in water. Hence the concentration of hydrogen ions in such solutions can be taken to be equal to the concentration of the acid. The calculation of pH becomes straightforward:

pH = −log₁₀[H+] = −log₁₀ C(acid)
For basic solutions, you have the concentration of the base and thus the concentration of the hydroxide ions OH−. You can calculate pOH:

pOH = −log₁₀[OH−] = −log₁₀ C(base)
Based on the equilibrium concentrations of H+ and OH− in water (above), pH and pOH are related by the following equation, which is true for any aqueous solution at 25°C:

pH + pOH = 14

Hence, in the case of a basic solution,

pH = 14 − pOH
There are only seven common strong acids:
– hydrochloric acid HCl
– nitric acid HNO3
– sulfuric acid H2SO4
– hydrobromic acid HBr
– hydroiodic acid HI
– perchloric acid HClO4
– chloric acid HClO3
There aren't very many strong bases either, and some of them are not very soluble in water. Those that are soluble are
– sodium hydroxide NaOH
– potassium hydroxide KOH
– lithium hydroxide LiOH
– rubidium hydroxide RbOH
– cesium hydroxide CsOH
A solution of a strong acid at a concentration of 1 M (1 mol/L) has a pH of 0. A solution of a strong alkali at a concentration of 1 M (1 mol/L) has a pH of 14. Thus, in most problems the pH values that arise lie in the range 0 to 14, though negative pH values and values above 14 are entirely possible.
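As a quick numerical check of the strong acid/base case, here is a minimal Python sketch (the function names are illustrative; it ignores activity corrections and the contribution of water autoionization, so it is only meaningful for concentrations well above 10⁻⁷ M):

```python
import math

def ph_strong_acid(conc):
    """pH of a fully dissociated monoprotic acid: pH = -log10(C)."""
    return -math.log10(conc)

def ph_strong_base(conc):
    """pH of a fully dissociated base (one OH- per formula unit), via pOH: pH = 14 + log10(C)."""
    return 14 + math.log10(conc)

print(ph_strong_acid(0.01))   # 0.01 M HCl  -> pH = 2.0
print(ph_strong_base(0.001))  # 0.001 M NaOH -> pH = 11.0
```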
Weak acids/bases only partially dissociate in water. Finding the pH of a weak acid is a bit more complicated. The pH equation is still the same, pH = −log₁₀[H+], but you need to use the acid dissociation constant (Ka) to find [H+].
The formula for Ka is:

Ka = [H+][B−] / [HB]

where:
[H+] – concentration of H+ ions
[B−] – concentration of conjugate base ions
[HB] – concentration of undissociated acid molecules

for a reaction HB ⇌ H+ + B−.
This formula describes the equilibrium. In order to deduce the formula for H+ from the formula above, we can use an ICE (initial – change – equilibrium) table. Let x represent the concentration of H+ that dissociates from HB, then we can fill the table like this:
| | [HB] | [H+] | [B−] |
|---|---|---|---|
| Initial concentration | C M | 0 M | 0 M |
| Change in concentration | −x M | +x M | +x M |
| Equilibrium concentration | (C − x) M | x M | x M |
Now, plug these into the Ka formula:

Ka = x² / (C − x)
After re-arrangement, we get a quadratic equation:

x² + Ka·x − Ka·C = 0
To find x we need to solve the quadratic equation and pick the positive root.
Finally, plug x into the pH formula to find the pH value.
The same applies to bases, where you use the base dissociation constant Kb. Ka and Kb are usually given, or can be found in tables.
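The quadratic-equation route described above can also be sketched in a few lines of Python (illustrative only; it assumes a monoprotic weak acid and ideal-solution behaviour):

```python
import math

def ph_weak_acid(conc, ka):
    """pH of a weak monoprotic acid: solve x**2 + Ka*x - Ka*C = 0, take the positive root."""
    x = (-ka + math.sqrt(ka**2 + 4 * ka * conc)) / 2  # equilibrium [H+]
    return -math.log10(x)

# 0.1 M acetic acid with Ka ≈ 1.8e-5 gives a pH of about 2.88
print(ph_weak_acid(0.1, 1.8e-5))
```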
You may notice that tables list some acids with multiple Ka values. This means the acid is polyprotic, i.e. it can give up more than one proton. However, due to molecular forces, the value of the constant for each successive proton becomes smaller by several orders of magnitude. For example, for phosphoric acid the three dissociation constants are roughly Ka1 ~ 10⁻², Ka2 ~ 10⁻⁸ and Ka3 ~ 10⁻¹³, each about five orders of magnitude smaller than the previous one.
So, usually only the first proton is considered, and you use a stoichiometric coefficient equal to one in all calculations.
¹ The hydrogen ion does not remain a free proton for long, as it is quickly hydrated by a surrounding water molecule. The result is the hydronium ion, H3O+.
My Russian-language page is more informative.
In fact, in most cases we measure a velocity that has no name in physics. Let's solve several simple problems.
Problem 1. You have made an ideal meter rod. Suppose it flew past your nose in one second, lightly grazing it. What is the velocity of the rod?
If you divide one meter by one second, you will get an "erroneous velocity". According to special relativity, the moving rod is contracted:

d = d0·sqrt(1 − v²/c²) = d0·(1 − v²/c²)^0.5.
By definition, velocity is the ratio of the distance covered by a material point to the time interval of its movement. The length of the moving rod is smaller than one meter; consequently, the trace made by any point of the rod on the coordinate plane will be smaller than one meter.

v = d0·sqrt(1 − v²/c²) / t.
Velocity enters both sides of this equation. Solving for v, we get:

v = b / sqrt(1 + b²/c²),
where b = d0/t = 1 m/s is the "erroneous velocity" that we obtained at the beginning. Computing, we get:

v = 0.99999999999999999443674972... m/s.
The figure shows how we can obtain the two measurable values: the "erroneous velocity" and the "correct velocity". To measure the "erroneous velocity", you must tear off your nose and put a piece of chalk in its place. Then you very gently slide a rod of known length against your chalky nose, taking the motion time from your wrist watch. To measure the "correct velocity", take a piece of chalk, attach it to the beginning of the moving rod (the blue one in the figure), and let it slide against the resting rod (the red one in the figure). To measure the motion time you must use two synchronized clocks, placed at the start and finish points of the chalk's trajectory.
Problem 2. You are in a uniformly moving car. At 12.00 you see kilometer pole 100. At 13.00 you see kilometer pole 200. What is the velocity of your car?
If you divide the covered distance of 100 km by one hour, you will get the "erroneous velocity" again. The role of the moving rod in this problem is played by the road. The road is contracted, so the velocity will be a little smaller than 100 km/hour. If you don't believe me, open your car's door, bend your head down to the asphalt and touch it with your nose. The road slides past your nose. The road moves relative to your car's reference frame; consequently, the road is contracted. If you don't want to scrape your nose on the asphalt, push your hand out of the window, strike the kilometer poles with your hand and count the number of strikes. Divide this number by the time of your movement, measured by your wrist watch, and you will have this damn "erroneous velocity".
Problem 3. You are in a relativistic rocket with unlimited technical possibilities. Can you cover a distance of 10²⁴ meters in one second?

If your rocket has unlimited technical possibilities, then it is possible. Your "erroneous velocity" will be 10²⁴ meters per second, but the "correct velocity" will be smaller than 3×10⁸ meters per second. In this problem the whole Universe plays the role of the upper contracted rod.
In fact, in our nature there are two types of measurable velocities. They are both real. Measurability must be understood here in a wider sense: the "erroneous velocity" is just as physical as the "correct velocity". As we will see below, the "correct velocity" is the hyperbolic tangent of rapidity, and the "erroneous velocity" is the hyperbolic sine of rapidity. The coefficient of contraction is the hyperbolic cosine of rapidity. Rapidity itself is a third type of velocity, but this value is not measurable. Let's analyze all these types of velocities, but first let's introduce their definitions:
Coordinate time is the time measured by synchronized clocks placed everywhere in the coordinate system: t.
Coordinate velocity is the ratio of the distance passed to the interval of time measured by synchronized clocks placed at the start and finish points: v = dr/dt; v = vx·i + vy·j + vz·k.
Proper time is the time measured by the clock attached to the moving object: τ, with dτ = dt/g.
Proper velocity is the ratio of the distance passed to the time measured by the clock attached to the moving object: b = dr/dτ; b = bx·i + by·j + bz·k. Proper velocity is not limited by the value of the speed of light; the limit of proper velocity is infinity, and the proper velocity of light is infinite.
Speed is the absolute value of the coordinate velocity. For example, the speed of light is c = 299792458 m/s. v = |v|; v = sqrt(vx² + vy² + vz²). In fact, an "ideal speedometer" does not measure the absolute value of the coordinate velocity; it measures the value of the proper velocity.
Gamma is the coefficient of space contraction of the moving system, or of time dilation in the moving system: g = 1/sqrt(1 − v²/c²) = sqrt(1 + b²/c²).
Coordinate and proper velocities can be expressed symmetrically in terms of each other:
b = v·g,
v = b/g = b/sqrt(1 + b²/c²).
Velocities v and b are not additive. The law of velocity addition looks like the hyperbolic tangent of the sum of two angles:
v = (v1 + v2) / (1 + v1·v2/c²),
v/c = (v1/c + v2/c) / (1 + (v1/c)(v2/c)),
th y = th(y1 + y2) = (th y1 + th y2) / (1 + th y1·th y2).
That is why two physical quantities were introduced in Special Relativity: the dimensionless rapidity parameter y, and the rapidity r with units of velocity.
r = y c.
Here are connections between v, b, g, y, r:
v/c = th y = th(r/c);
g = ch y = ch(r/c);
b/c = sh y = sh(r/c).
The figure below shows the graphs b/c = sh y, v/c = th y, and r/c = y.
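These relations are easy to play with numerically. Here is a minimal Python sketch (the variable and function names are ours), which also reproduces the answers to Problems 1 and 3:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def coordinate_from_proper(b):
    """Coordinate velocity from proper velocity: v = b / sqrt(1 + b**2/c**2)."""
    return b / math.sqrt(1.0 + (b / C) ** 2)

def rapidity_parameter(v):
    """Dimensionless rapidity parameter y = Arth(v/c)."""
    return math.atanh(v / C)

# Problem 1: a proper ("erroneous") velocity of 1 m/s. Double precision prints 1.0,
# because the true coordinate velocity differs from 1 m/s only around the 18th decimal place.
print(coordinate_from_proper(1.0))

# Problem 3: a proper velocity of 1e24 m/s still gives a coordinate velocity just below c.
print(coordinate_from_proper(1e24))  # ~299792458 m/s
```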
Here are some other useful formulae:
y = r/c = Arth(v/c) = (1/2)·ln((1 + v/c)/(1 − v/c)),
y = r/c = Arsh(b/c) = ln(b/c + g) = ln(b/c + sqrt(1 + b²/c²)),
y = r/c = Arch(sqrt(1 + b²/c²)) = Arch(g).
The law of addition of n equal coordinate velocities:
v/c = th y = th(n·y0) = th(n·Arth(v0/c)).
For several parallel coordinate velocities v1, v2, v3, ..., not necessarily equal to each other:
v/c = th(Arth(v1/c) + Arth(v2/c) + Arth(v3/c) + ...).
The law of addition of two proper velocities: b = b1·g2 + b2·g1.
Compare: sh y = sh(y1 + y2) = sh y1·ch y2 + sh y2·ch y1.
The law of addition of n equal proper velocities: b/c = sh(n·y0) = sh(n·Arsh(b0/c)).
For several parallel proper velocities b1, b2, b3, ..., not necessarily equal to each other: b/c = sh(Arsh(b1/c) + Arsh(b2/c) + Arsh(b3/c) + ...).
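Because rapidities simply add, these addition laws are one-liners in code. A small Python sketch (ours, for illustration):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def add_coordinate_velocities(velocities):
    """Relativistic sum of parallel coordinate velocities: v/c = th(sum of Arth(v_i/c))."""
    return C * math.tanh(sum(math.atanh(v / C) for v in velocities))

def add_proper_velocities(proper_velocities):
    """Sum of parallel proper velocities: b/c = sh(sum of Arsh(b_i/c))."""
    return C * math.sinh(sum(math.asinh(b / C) for b in proper_velocities))

print(add_coordinate_velocities([0.5 * C, 0.5 * C]) / C)  # 0.8 — the result never exceeds 1
print(add_proper_velocities([C, C]) / C)                  # ≈ 2.83 — proper velocities may exceed c
```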
Velocities can also be expressed through exponentials:
v/c = th y = (e^y − e^(−y)) / (e^y + e^(−y)),
b/c = sh y = (e^y − e^(−y)) / 2,
g = ch y = (e^y + e^(−y)) / 2.
Let's check the formula g = sqrt(1 + b²/c²) using the known identity ch²y − sh²y = 1:
g² − b²/c² = (1 + b²/c²) − b²/c² = 1.
In the next sections we'll find the fourth and most mysterious type of velocity: the quantable velocity. Introducing the quantable velocity explains why the electron is not a point particle but a string embracing the whole Universe within an extremely short period of time, equal to the electron's classical period.
Model of Electron.
In order to understand this figure go ahead and read the next pages.
Forward to part 2: Interval. Four-dimensional velocities. Types of acceleration. Four-accelerations. Relativistic rocket.
Forward to part 3: Different forms of space-time transformations. Quantum velocities in Special Relativity. Model of electron.
Program of moving electron.
To index of Space Genetics.
firstname.lastname@example.org Ivan Gorelik. |
Properties of Addition and Multiplication Why is it important to recognize these properties? It helps us with mental math. It helps us recognize patterns in problems. It helps us solve problems faster!!!
Inverse Operation Property Inverse means opposite. Use the opposite operation to check solutions and find the value of variables. Addition X+4=9 Multiplication 8*x=32
Commutative Property If the order of the addends or factors is changed, the sum or product stays the same. Addition 4+9=9+4 Multiplication 2x8=8x2
Associative Property The way addends are grouped or factors are grouped does not change the sum or product. Addition (8+1)+5=8+(1+5) Multiplication (6x2)x4=6x(2x4)
Identity Property The sum of zero and any number equals that number. The product of one and any number equals that number. Addition 6+0=6 Multiplication 5x1=5
Distributive Property Multiplying a sum by a number is the same as multiplying each addend in the sum by the number and then adding the products. Multiplication 3x(5+10)=(3x5)+(3x10)
Zero Property of Multiplication The product of any number and zero is zero. Multiplication 12x0=0 |
- Phonological choices can be determined by finding minimal pairs.
- Consonants are described using the Voice-Place-Manner system.
- There are about 24 consonant choices in English which can be classified according to their typical articulation.
- Approximants and Nasals have similar acoustics to vowels.
- The source of energy for fricatives is turbulence generated at or near the constriction. Fricative spectra vary according to size of cavity forward of the constriction.
- Plosives have a series of events: closing, hold, burst, opening and optional aspiration. Place cues for plosives are related to the spectrum of the burst and the formant transitions.
- Voice Onset Time is an important voicing cue for plosives.
At the end of this topic the student should be able to:
- describe the procedure by which phonological choices may be identified for a language
- identify the phonological choices for consonants used in English
- describe a typical articulation for each English consonant using the Voice, Place, Manner notational system
- locate vowel articulations and consonantal articulations on spectrograms of simple words
- describe the acoustic differences between English fricatives
- describe the articulatory and acoustic differences between English plosives
- How to identify phonological choices
To a great extent we can formalise the procedure for finding the phonological choices used in a language by searching for minimal pairs of words - that is, words which only differ by the insertion, substitution or deletion of a single phonological unit. The fact that two words are recognisably different to listeners implies a difference in their phonological form.
Here are some sets of words which help define the English consonant set:
- pin, bin, din, tin, kin, gin, chin
- coat, goat
- sum, sun, sung
- whip, rip, lip, yip
- fin, thin, sin, shin
- vat, that, hat
- baize, beige
- pass, parse
The minimal pair method is not foolproof - for example, you can't find a minimal pair which contrasts the /h/ sound in "hat" with the /ŋ/ sound in "sung" (can you see why?). However, few people believe these are the "same" phonological unit, because they are realised as such different sounds.
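To make the search procedure concrete, here is a small Python sketch (the word list and transcriptions are toy examples, not a real lexicon) that flags pairs of words whose phoneme strings differ by a single substitution, insertion or deletion:

```python
from itertools import combinations

def is_minimal_pair(a, b):
    """True if phoneme sequences a and b differ by exactly one substitution,
    insertion or deletion (edit distance 1)."""
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    if abs(len(a) - len(b)) == 1:
        longer, shorter = (a, b) if len(a) > len(b) else (b, a)
        return any(longer[:i] + longer[i + 1:] == shorter for i in range(len(longer)))
    return False

# toy phonemic transcriptions, for illustration only
words = {"pin": ["p", "ɪ", "n"], "bin": ["b", "ɪ", "n"],
         "pit": ["p", "ɪ", "t"], "spin": ["s", "p", "ɪ", "n"]}
pairs = [(w1, w2) for (w1, t1), (w2, t2) in combinations(words.items(), 2)
         if is_minimal_pair(t1, t2)]
print(pairs)  # [('pin', 'bin'), ('pin', 'pit'), ('pin', 'spin')]
```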
- Phonetics of English Consonant Choices
There are about 24 consonant choices used in English words (the number varies according to accent and whether words loaned from other languages are counted). We can use the typical phonetic form of these choices to group the consonants in many ways. Within a typical English accent, the consonant choices can be grouped according to Voice (voiced, voiceless), Place (bilabial, labiodental, dental, alveolar, palatal, velar, glottal) and Manner (plosive, affricate, fricative, nasal, approximant). We can then use symbols from the International Phonetic Association alphabet (IPA chart) to denote these choices and their typical articulation:
| Manner | Symbol | Keyword | Voice | Place |
|---|---|---|---|---|
| Plosive | b | bin | Voiced | Bilabial |
| Plosive | p | pin | Voiceless | Bilabial |
| Plosive | d | din | Voiced | Alveolar |
| Plosive | t | tin | Voiceless | Alveolar |
| Plosive | g | give | Voiced | Velar |
| Plosive | k | kin | Voiceless | Velar |
| Affricate | dʒ | gin | Voiced | Palato-alveolar |
| Affricate | tʃ | chin | Voiceless | Palatal |
| Fricative | v | vim | Voiced | Labiodental |
| Fricative | f | fin | Voiceless | Labiodental |
| Fricative | ð | this | Voiced | Dental |
| Fricative | θ | thin | Voiceless | Dental |
| Fricative | z | zing | Voiced | Alveolar |
| Fricative | s | sin | Voiceless | Alveolar |
| Fricative | ʒ | measure | Voiced | Palato-alveolar |
| Fricative | ʃ | shin | Voiceless | Palatal |
| Fricative | h | hit | Voiceless | Glottal |
| Nasal | m | mock | Voiced | Bilabial |
| Nasal | n | knock | Voiced | Alveolar |
| Nasal | ŋ | thing | Voiced | Velar |
| Approximant | r | wrong | Voiced | Alveolar-retroflex |
| Approximant | l | long | Voiced | Alveolar-lateral |
| Approximant | w | wasp | Voiced | Labial-velar |
| Approximant | j | yacht | Voiced | Palatal |
- Place of Consonant Articulation
In general, the Place of consonant articulation is considered to be the main point of constriction in the vocal tract, where a Primary Active articulator makes contact with or approximates a Passive location.
The names for the main places of articulation are shown in the table below:
| Place | Description |
|---|---|
| Bilabial | both lips |
| Labiodental | lower lip against upper teeth |
| Dental | tongue tip against upper teeth |
| Alveolar | tongue tip against teeth ridge |
| Alveolar-lateral | tongue tip against teeth ridge but with sides lowered |
| Alveolar-retroflex | tongue tip curled back near teeth ridge |
| Palato-alveolar | tongue tip slightly retracted from teeth ridge |
| Palatal | tongue blade against hard palate |
| Velar | back of tongue against soft palate |
| Glottal | vocal fold closure in larynx |
- Manner of Consonant Articulation
In general the Manner of consonant articulation describes the nature of the articulatory gesture occurring at one place; in particular it refers to the degree of stricture, that is how closely does the active articulator approach the passive location.
The main manners of articulation are described in the table below:
| Manner | Description |
|---|---|
| Approximant | Approximant sounds do not cause complete obstructions of the vocal tract; they are just narrowings of the tract in different ways at different positions. They are all voiced. |
| Plosive | Plosive sounds have a complete obstruction at some place. These cause an interruption to the air-flow and to the passage of sound from the larynx. For voiceless plosives there is a simultaneous glottal opening gesture as well as an oral closure. |
| Fricative | Fricative sounds involve a severe narrowing of the air path at some place. The narrowing causes the air flow to become turbulent and to create noise. In voiced fricatives, this turbulent noise is added to the sound of phonation from the larynx. |
| Affricate | Affricate sounds are like a combination of plosive and fricative: a plosive-like oral closure followed by a fricative release. |
| Nasal | Nasal sounds have oral closures like the plosives, but these are made in combination with a lowered soft palate that allows air flow out through the nose. The continuation of flow means that voicing can be maintained. |
- Voicing of Consonant Articulation
In general the Voice of consonants relates to the form of any associated laryngeal configuration or gesture. Approximants and Nasals are always voiced - that is, they are always associated with a closed glottis and phonation. Fricatives fall into two categories. Unvoiced ("voiceless") fricatives are made with an open glottis, so that the only sound being generated is that of the turbulence at the place of constriction. Voiced fricatives are made with a closed glottis - this can lead to two simultaneous sources of sound: both phonation in the larynx and turbulence at the place of constriction. See diagram below.
The situation in Plosives is more complex. In a Voiced plosive made between two vowel sounds, the glottis typically remains closed throughout. However, this does not mean that phonation occurs through the consonant. Because a plosive creates a complete obstruction to the air flow in the oral cavity, it prevents air flowing through the larynx and hence prevents phonation. Thus in a voiced plosive phonation tends to stop soon after the oral closure is made and does not restart until the closure is removed and air can flow again through the larynx. Typically voicing will restart very quickly after release of the plosive - within 30ms. See diagram below.
When Unvoiced plosives are made between two vowel sounds, a laryngeal gesture occurs in which the vocal folds are fully or partially abducted (pulled apart) thereby opening the glottis. The glottis opening gesture is made simultaneously with the oral closure of the plosive. This tends to make phonation stop more rapidly than in the voiced case. When the plosive is released, the glottis may still be open or only partially closed. This means that voicing does not restart at plosive release. Instead, the air flow that builds up through the larynx may cause turbulence - creating a kind of /h/ sound that immediately follows the plosive burst. This glottal turbulence following the release of an unvoiced plosive is called aspiration. Finally, some time after the release, a second glottal gesture takes place in which the folds become fully approximated, the glottis closes and phonation restarts. However this restart occurs some time after the plosive release, typically more than 30ms later. See diagram below.
- Consonant Acoustics
We can apply the source-filter model to consonant acoustics as well as to vowels. In voiced sounds one sound source is larynx vibration. In voiceless sounds the sound source is turbulence created at a constriction in the vocal tract. The filter is often considered to be just the effect of the vocal tract pipe forward of the sound source. For example in fricatives, the filter is the part of the vocal tract tube that extends from the point of constriction to the lips, while in nasals the filter includes the effect of resonances of the nasal cavity.
Since approximants do not have any constriction in the oral cavity, we can analyze their acoustics using the methods developed for vowels and diphthongs, that is in terms of changes in the frequencies of the resonances of the vocal tract as it changes shape. Approximants are thus seen as particularly shaped movements of the formant pattern with time:
Although nasals do have an oral closure, the lowered soft palate ensures a continuation of airflow through the larynx and continued phonation. The resonances of the nasal cavity affect the timbre of larynx buzz in much the same way as the resonances of the oral cavity (aka formants) affect the timbre of larynx buzz in vowels. Although in nasals sound does not pass through the oral cavity, the shape of the oral cavity behind the closure does contribute to the overall frequency response of the vocal tract pipe and hence to the spectral pattern observed on a spectrogram:
Since fricatives have a narrow constriction, the cavity behind the constriction does not play a very large role in shaping the frequency response of the filter (since sound is strongly attenuated passing from behind the constriction to the front). The frequency response of the front cavity is strongly affected by its length and size of opening. This is why energy is concentrated at higher frequencies the more the place of articulation moves to the front of the mouth.
We can explain fricative acoustics using the source-filter model. The source is turbulence generated at the point of constriction which has a generally flat spectrum. The filter is the effect of the anterior cavity, which has a few resonances. In the simulation below we see a noise spectrum being shaped by a single wide resonance at about 2500Hz, which gives a [ʃ]-like sound, and at about 4500Hz, which gives a [s]-like sound:
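The same source-filter idea can be sketched numerically. The following Python fragment (assuming NumPy and SciPy are available; the centre frequencies follow the 2500 Hz and 4500 Hz values quoted above) shapes white noise with a single resonance, a very crude stand-in for the front-cavity filter:

```python
import numpy as np
from scipy import signal

fs = 16000                    # sampling rate, Hz
noise = np.random.randn(fs)   # one second of white noise: a flat-spectrum "turbulence" source

def shaped_fricative(centre_hz, q=2.0):
    """Filter the noise with a single wide resonance centred at centre_hz."""
    b, a = signal.iirpeak(centre_hz, q, fs=fs)
    return signal.lfilter(b, a, noise)

sh_like = shaped_fricative(2500)  # energy concentrated near 2.5 kHz, roughly [ʃ]-like
s_like = shaped_fricative(4500)   # energy concentrated near 4.5 kHz, roughly [s]-like
```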
You can see the effect of changing size of anterior cavity on the spectrograms of fricatives.
To understand the acoustics of plosives, it is necessary to divide their articulation into separate phases: a closing phase, a hold phase, a release phase and an optional aspiration phase. In the closing phase we see the previous vowel cut off as the oral closure is made. In the hold phase we see a silent interval in the sound (the "stop gap"). In the release phase we may see a short interval of turbulence made at the point of constriction as the articulator comes away from the closure (the "burst"). The sound of this turbulence will be shaped by the cavity forward of the closure, just as in fricatives. Finally, in the aspiration phase found in voiceless plosives, turbulence occurs at the glottis, producing a kind of /h/ sound.
- Hewlett and Beck, Introduction to the Science of Phonetics, Chapter 4, Basic principles of consonant description. [available in library].
- Elizabeth Zsiga, "The Sounds of Language", Wiley Blackwell 2013. Chapter 3 - A Tour of the Consonants. [available in library]
- Peter Ladefoged, "Vowels and Consonants", Blackwell 2001. Chapter 6, The sounds of consonants. [available in library]
In this week's lab session you will look at the spectrographic character of consonants in context.
- Studying the essential character of approximants, nasals, fricatives and plosives in some minimal pairs
- Studying how fricatives change with place and voice within a recording of the sentence: "The freezing harsh and heaving seas".
You can improve your learning by reflecting on your understanding. Come to the tutorial prepared to discuss the items below.
- Generally, how do fricative sounds change as the place of articulation is brought forward in the mouth?
- Think of two ways in which losing your two front upper teeth will affect fricative production.
- How and why does lip-rounding affect fricative quality?
- What might cause a voiced fricative to lose its voicing? What might cause a voiced fricative to lose its turbulence?
- Why does the burst centre frequency for /k/ vary with vowel context?
- What differences are there between syllable-initial and syllable-final plosives?
Moons of Saturn
The moons of Saturn are numerous and diverse, ranging from tiny moonlets less than 1 kilometer across to the enormous Titan, which is larger than the planet Mercury. Saturn has 62 moons with confirmed orbits, 53 of which have names and only 13 of which have diameters larger than 50 kilometers. Seven Saturnian moons are large enough to be ellipsoidal in shape, though only two of those, Titan and Rhea, are currently in hydrostatic equilibrium. Saturn also has dense rings, with complex orbital motions of their own. Particularly notable among Saturn's moons are Titan, the second-largest moon in the Solar System, with a nitrogen-rich Earth-like atmosphere and a landscape including hydrocarbon lakes and dry river networks; and Enceladus, which emits jets of gas and dust and may harbor liquid water under its south pole region.
Twenty-four of Saturn's moons are regular satellites; they have prograde orbits not greatly inclined to Saturn's equatorial plane. They include the seven major satellites, four small moons that exist in a trojan orbit with larger moons, two mutually co-orbital moons and two moons that act as shepherds of Saturn's F Ring. Two other known regular satellites orbit within gaps in Saturn's rings. The relatively large Hyperion is locked in a resonance with Titan. The remaining regular moons orbit near the outer edge of the A Ring, within G Ring and between the major moons Mimas and Enceladus. The regular satellites are traditionally named after Titans and Titanesses or other figures associated with the mythological Saturn.
The remaining 38, all small except one, are irregular satellites, whose orbits are much farther from Saturn, have high inclinations, and are mixed between prograde and retrograde. These moons are probably captured minor planets, or debris from the breakup of such bodies after they were captured, creating collisional families. The irregular satellites have been classified by their orbital characteristics into the Inuit, Norse, and Gallic groups, and their names are chosen from the corresponding mythologies. The largest of the irregular moons is Phoebe, the ninth moon of Saturn, discovered at the end of the 19th century.
The rings of Saturn are made up of objects ranging in size from microscopic to moonlets hundreds of meters across, each in its own orbit around Saturn. Thus a precise number of Saturnian moons cannot be given, because there is no objective boundary between the countless small anonymous objects that form Saturn's ring system and the larger objects that have been named as moons. Over 150 moonlets embedded in the rings have been detected by the disturbance they create in the surrounding ring material, though this is thought to be only a small sample of the total population of such objects.
Discovery and naming
Before the advent of telescopic photography, eight moons of Saturn were discovered by direct observation using optical telescopes. Saturn's largest moon, Titan, was discovered in 1655 by Christiaan Huygens using a 57-millimeter (2.2 in) objective lens on a refracting telescope of his own design. Tethys, Dione, Rhea and Iapetus (the "Sidera Lodoicea") were discovered between 1671 and 1684 by Giovanni Domenico Cassini. Mimas and Enceladus were discovered in 1789 by William Herschel. Hyperion was discovered in 1848 by W.C. Bond, G.P. Bond and William Lassell.
The use of long-exposure photographic plates made possible the discovery of additional moons. The first to be discovered in this manner, Phoebe, was found in 1899 by W.H. Pickering. In 1966 the tenth satellite of Saturn was discovered by Audouin Dollfus, when the rings were observed edge-on near an equinox. It was later named Janus. A few years later it was realized that all observations of 1966 could only be explained if another satellite had been present and that it had an orbit similar to that of Janus. This object is now known as Epimetheus, the eleventh moon of Saturn. It shares the same orbit with Janus—the only known example of co-orbitals in the Solar System. In 1980 three additional Saturnian moons were discovered from the ground and later confirmed by the Voyager probes. They are trojan moons of Dione (Helene) and Tethys (Telesto and Calypso).
Observations by spacecraft
The study of the outer planets has since been revolutionized by the use of unmanned space probes. The arrival of the Voyager spacecraft at Saturn in 1980–1981 resulted in the discovery of three additional moons—Atlas, Prometheus and Pandora, bringing the total to 17. In addition, Epimetheus was confirmed as distinct from Janus. In 1990, Pan was discovered in archival Voyager images.
The Cassini mission, which arrived at Saturn in the summer of 2004, initially discovered three small inner moons including Methone and Pallene between Mimas and Enceladus as well as the second Lagrangian moon of Dione—Polydeuces. It also observed three suspected but unconfirmed moons in the F Ring. In November 2004 Cassini scientists announced that the structure of Saturn's rings indicates the presence of several more moons orbiting within the rings, although only one, Daphnis, has been visually confirmed so far (in 2005). In 2007 Anthe was announced. In 2008 it was reported that Cassini observations of a depletion of energetic electrons in Saturn's magnetosphere near Rhea might be the signature of a tenuous ring system around Saturn's second largest moon. In March 2009, Aegaeon, a moonlet within the G Ring, was announced. In July of the same year, S/2009 S 1, the first moonlet within the B Ring, was observed. In April 2014, the possible beginning of a new moon, within the A Ring, was reported.
Study of Saturn's moons has also been aided by advances in telescope instrumentation, primarily the introduction of digital charge-coupled devices which replaced photographic plates. For the entire 20th century, Phoebe stood alone among Saturn's known moons with its highly irregular orbit. Beginning in 2000, however, three dozen additional irregular moons have been discovered using ground-based telescopes. A survey starting in late 2000 and conducted using three medium-size telescopes found thirteen new moons orbiting Saturn at a great distance, in eccentric orbits, which are highly inclined to both the equator of Saturn and the ecliptic. They are probably fragments of larger bodies captured by Saturn's gravitational pull. In 2005, astronomers using the Mauna Kea Observatory announced the discovery of twelve more small outer moons. In 2006, astronomers using the Subaru 8.2 m telescope reported the discovery of a further nine irregular moons. In April 2007, Tarqeq (S/2007 S 1) was announced. In May of the same year S/2007 S 2 and S/2007 S 3 were reported.
The modern names for Saturnian moons were suggested by John Herschel in 1847. He proposed to name them after mythological figures associated with the Roman god of agriculture and harvest, Saturn (equated to the Greek Cronus). In particular, the then-known seven satellites were named after Titans, Titanesses and Giants—brothers and sisters of Cronus. In 1848 Lassell proposed that the eighth satellite of Saturn be named Hyperion after another Titan. When in the 20th century the names of Titans were exhausted, the moons were named after different characters of the Greco-Roman mythology or giants from other mythologies. All the irregular moons (except Phoebe) are named after Inuit and Gallic gods and after Norse ice giants.
Some asteroids share the same names as moons of Saturn: 55 Pandora, 106 Dione, 577 Rhea, 1809 Prometheus, 1810 Epimetheus, and 4450 Pan. In addition, two more asteroids previously shared the names of Saturnian moons until spelling differences were made permanent by the International Astronomical Union (IAU): Calypso and asteroid 53 Kalypso; and Helene and asteroid 101 Helena.
The Saturnian moon system is very lopsided: one moon, Titan, comprises more than 96% of the mass in orbit around the planet. The six other planemo (ellipsoidal) moons constitute roughly 4%, while the remaining 55 small moons, together with the rings, comprise only 0.04%.[a]
Saturn's major satellites, compared to Earth's Moon.
Although the boundaries may be somewhat vague, Saturn's moons can be divided into ten groups according to their orbital characteristics. Many of them, such as Pan and Daphnis, orbit within Saturn's ring system and have orbital periods only slightly longer than the planet's rotation period. The innermost moons and most regular satellites all have mean orbital inclinations ranging from less than a degree to about 1.5 degrees (except Iapetus, which has an inclination of 7.57 degrees) and small orbital eccentricities. On the other hand, irregular satellites in the outermost regions of Saturn's moon system, in particular the Norse group, have orbital radii of millions of kilometers and orbital periods lasting several years. The moons of the Norse group also orbit in the opposite direction to Saturn's rotation.
During late July 2009, a moonlet was discovered in the B Ring, 480 km from the outer edge of the ring, by the shadow it cast. It is estimated to be 300 m in diameter. Unlike the A Ring moonlets (see below), it does not induce a 'propeller' feature, probably due to the density of the B Ring.
In 2006, four tiny moonlets were found in Cassini images of the A Ring. Before this discovery only two larger moons had been known within gaps in the A Ring: Pan and Daphnis. These are large enough to clear continuous gaps in the ring. In contrast, a moonlet is only massive enough to clear two small—about 10 km across—partial gaps in the immediate vicinity of the moonlet itself creating a structure shaped like an airplane propeller. The moonlets themselves are tiny, ranging from about 40 to 500 meters in diameter, and are too small to be seen directly. In 2007, the discovery of 150 more moonlets revealed that they (with the exception of two that have been seen outside the Encke gap) are confined to three narrow bands in the A Ring between 126,750 and 132,000 km from Saturn's center. Each band is about a thousand kilometers wide, which is less than 1% the width of Saturn's rings. This region is relatively free from the disturbances caused by resonances with larger satellites, although other areas of the A Ring without disturbances are apparently free of moonlets. The moonlets were probably formed from the breakup of a larger satellite. It is estimated that the A Ring contains 7,000–8,000 propellers larger than 0.8 km in size and millions larger than 0.25 km.
Similar moonlets may reside in the F Ring. There, "jets" of material may be due to collisions, initiated by perturbations from the nearby small moon Prometheus, of these moonlets with the core of the F Ring. One of the largest F-Ring moonlets may be the as-yet unconfirmed object S/2004 S 6. The F Ring also contains transient "fans" which are thought to result from even smaller moonlets, about 1 km in diameter, orbiting near the F Ring core.
One of the recently discovered moons, Aegaeon, resides within the bright arc of G Ring and is trapped in the 7:6 mean motion resonance with Mimas. This means that it makes exactly seven revolutions around Saturn while Mimas makes exactly six. The moon is the largest among the population of bodies that are sources of dust in this ring.
Shepherd satellites are small moons that orbit within, or just beyond, a planet's ring system. They have the effect of sculpting the rings: giving them sharp edges, and creating gaps between them. Saturn's shepherd moons are Pan (Encke gap), Daphnis (Keeler gap), Atlas (A Ring), Prometheus (F Ring) and Pandora (F Ring). These moons together with co-orbitals (see below) probably formed as a result of accretion of the friable ring material on preexisting denser cores. The cores with sizes from one-third to one-half the present day moons may be themselves collisional shards formed when a parental satellite of the rings disintegrated.
Janus and Epimetheus are called co-orbital moons. They are of roughly equal size, with Janus being slightly larger than Epimetheus. Janus and Epimetheus have orbits with only a few kilometers difference in semi-major axis, close enough that they would collide if they attempted to pass each other. Instead of colliding, however, their gravitational interaction causes them to swap orbits every four years.
Inner large moons
The innermost large moons of Saturn orbit within its tenuous E Ring, along with three smaller moons of the Alkyonides group.
- Mimas is the smallest and least massive of the inner round moons, although its mass is sufficient to alter the orbit of Methone. It is noticeably ovoid-shaped, having been made shorter at the poles and longer at the equator (by about 20 km) by the effects of Saturn's gravity. Mimas has a large impact crater one-third its diameter, Herschel, situated on its leading hemisphere. Mimas has no known past or present geologic activity, and its surface is dominated by impact craters. The only tectonic features known are a few arcuate and linear troughs, which probably formed when Mimas was shattered by the Herschel impact.
- Enceladus is one of the smallest of Saturn's moons that is spherical in shape—only Mimas is smaller—yet is the only small Saturnian moon that is currently endogenously active, and the smallest known body in the Solar System that is geologically active today. Its surface is morphologically diverse; it includes ancient heavily cratered terrain as well as younger smooth areas with few impact craters. Many plains on Enceladus are fractured and intersected by systems of lineaments. The area around its south pole was found by Cassini to be unusually warm and cut by a system of fractures about 130 km long called "tiger stripes", some of which emit jets of water vapor and dust. These jets form a large plume off its south pole, which replenishes Saturn's E ring and serves as the main source of ions in the magnetosphere of Saturn. The gas and dust are released with a rate of more than 100 kg/s. Enceladus may have liquid water underneath the south-polar surface. The source of the energy for this cryovolcanism is thought to be a 2:1 mean-motion resonance with Dione. The pure ice on the surface makes Enceladus one of the brightest known objects in the Solar System—its geometrical albedo is more than 140%.
- Tethys is the third largest of Saturn's inner moons. Its most prominent features are a large (400 km diameter) impact crater named Odysseus on its leading hemisphere and a vast canyon system named Ithaca Chasma extending at least 270° around Tethys. The Ithaca Chasma is concentric with Odysseus, and these two features may be related. Tethys appears to have no current geological activity. A heavily cratered hilly terrain occupies the majority of its surface, while a smaller and smoother plains region lies on the hemisphere opposite to that of Odysseus. The plains contain fewer craters and are apparently younger. A sharp boundary separates them from the cratered terrain. There is also a system of extensional troughs radiating away from Odysseus. The density of Tethys (0.985 g/cm3) is less than that of water, indicating that it is made mainly of water ice with only a small fraction of rock.
- Dione is the second-largest inner moon of Saturn. It has a higher density than the geologically dead Rhea, the largest inner moon, but lower than that of active Enceladus. While the majority of Dione's surface is heavily cratered old terrain, this moon is also covered with an extensive network of troughs and lineaments, indicating that in the past it had global tectonic activity. The troughs and lineaments are especially prominent on the trailing hemisphere, where several intersecting sets of fractures form what is called "wispy terrain". The cratered plains have a few large impact craters reaching 250 km in diameter. Smooth plains with low impact-crater counts are present as well on a small fraction of its surface. They were probably tectonically resurfaced relatively late in the geological history of Dione. At two locations within smooth plains strange landforms (depressions) resembling oblong impact craters have been identified, both of which lie at the centers of radiating networks of cracks and troughs; these features may be cryovolcanic in origin. Dione may be geologically active even now, although on a scale much smaller than the cryovolcanism of Enceladus. This follows from Cassini magnetic measurements that show Dione is a net source of plasma in the magnetosphere of Saturn, much like Enceladus.
Three small moons orbit between Mimas and Enceladus: Methone, Anthe, and Pallene. Named after the Alkyonides of Greek mythology, they are some of the smallest moons in the Saturn system. Anthe and Methone possess very faint ring arcs along their orbits while Pallene possesses a faint complete ring. Of these three moons, only Methone has been photographed at close range, showing it to be egg-shaped with very few or no craters.
Trojan moons are a unique feature only known from the Saturnian system. A trojan body orbits at either the leading L4 or trailing L5 Lagrange point of a much larger object, such as a large moon or planet. Tethys has two trojan moons, Telesto (leading) and Calypso (trailing), and Dione also has two, Helene (leading) and Polydeuces (trailing). Helene is by far the largest trojan moon, while Polydeuces is the smallest and has the most chaotic orbit. These moons are coated with dusty material that has smoothened out their surfaces.
Outer large moons
These moons all orbit beyond the E Ring. They are:
- Rhea is the second-largest of Saturn's moons. In 2005 Cassini detected a depletion of electrons in the plasma wake of Rhea, which forms when the co-rotating plasma of Saturn's magnetosphere is absorbed by the moon. The depletion was hypothesized to be caused by the presence of dust-sized particles concentrated in a few faint equatorial rings. Such a ring system would make Rhea the only moon in the Solar System known to have rings. However, subsequent targeted observations of the putative ring plane from several angles by Cassini's narrow-angle camera turned up no evidence of the expected ring material, leaving the origin of the plasma observations unresolved. Otherwise Rhea has rather a typical heavily cratered surface, with the exceptions of a few large Dione-type fractures (wispy terrain) on the trailing hemisphere and a very faint "line" of material at the equator that may have been deposited by material deorbiting from present or former rings. Rhea also has two very large impact basins on its anti-Saturnian hemisphere, which are about 400 and 500 km across. The first, Tirawa, is roughly comparable to the Odysseus basin on Tethys. There is also a 48 km-diameter impact crater called Inktomi[b] at 112°W that is prominent because of an extended system of bright rays, which may be one of the youngest craters on the inner moons of Saturn. No evidence of any endogenic activity has been discovered on the surface of Rhea.
- Titan, at 5,150 km diameter, is the second largest moon in the Solar System and Saturn's largest. Out of all the large moons, Titan is the only one with a dense (surface pressure of 1.5 atm), cold atmosphere, primarily made of nitrogen with a small fraction of methane. The dense atmosphere frequently produces bright white convective clouds, especially over the south pole region. On June 6, 2013, scientists at the IAA-CSIC reported the detection of polycyclic aromatic hydrocarbons in the upper atmosphere of Titan. On June 23, 2014, NASA claimed to have strong evidence that nitrogen in the atmosphere of Titan came from materials in the Oort cloud, associated with comets, and not from the materials that formed Saturn in earlier times. The surface of Titan, which is difficult to observe due to persistent atmospheric haze, shows only a few impact craters and is probably very young. It contains a pattern of light and dark regions, flow channels and possibly cryovolcanoes. Some dark regions are covered by longitudinal dune fields shaped by tidal winds, where sand is made of frozen water or hydrocarbons. Titan is the only body in the Solar System besides Earth with bodies of liquid on its surface, in the form of methane–ethane lakes in Titan's north and south polar regions. The largest lake, Kraken Mare, is larger than the Caspian Sea. Like Europa and Ganymede, Titan is believed to have a subsurface ocean made of water mixed with ammonia, which can erupt to the surface of the moon and lead to cryovolcanism. On July 2, 2014, NASA reported the ocean inside Titan may be "as salty as the Earth's Dead Sea".
- Hyperion is Titan's nearest neighbor in the Saturn system. The two moons are locked in a 4:3 mean-motion resonance with each other, meaning that while Titan makes four revolutions around Saturn, Hyperion makes exactly three. With an average diameter of about 270 km, Hyperion is smaller and lighter than Mimas. It has an extremely irregular shape, and a very odd, tan-colored icy surface resembling a sponge, though its interior may be partially porous as well. The average density of about 0.55 g/cm3 indicates that the porosity exceeds 40% even assuming it has a purely icy composition. The surface of Hyperion is covered with numerous impact craters—those with diameters 2–10 km are especially abundant. It is the only moon known to have a chaotic rotation, which means Hyperion has no well-defined poles or equator. While on short timescales the satellite approximately rotates around its long axis at a rate of 72–75° per day, on longer timescales its axis of rotation (spin vector) wanders chaotically across the sky. This makes the rotational behavior of Hyperion essentially unpredictable.
- Iapetus is the third-largest of Saturn's moons. Orbiting the planet at 3.5 million km, it is by far the most distant of Saturn's large moons, and also possesses the greatest orbital inclination, at 15.47°. Iapetus has long been known for its unusual two-toned surface; its leading hemisphere is pitch-black and its trailing hemisphere is almost as bright as fresh snow. Cassini images showed that the dark material is confined to a large near equatorial area on the leading hemisphere called Cassini Regio, which extends approximately from 40°N to 40°S. The pole regions of Iapetus are as bright as its trailing hemisphere. Cassini also discovered a 20 km tall equatorial ridge, which spans nearly the moon's entire equator. Otherwise both dark and bright surfaces of Iapetus are old and heavily cratered. The images revealed at least four large impact basins with diameters from 380 to 550 km and numerous smaller impact craters. No evidence of any endogenic activity has been discovered. A clue to the origin of the dark material covering part of Iapetus's starkly dichromatic surface may have been found in 2009, when NASA's Spitzer Space Telescope discovered a vast, nearly invisible disk around Saturn, just inside the orbit of the moon Phoebe—the Phoebe ring. Scientists believe that the disk originates from dust and ice particles kicked up by impacts on Phoebe. Because the disk particles, like Phoebe itself, orbit in the opposite direction to Iapetus, Iapetus collides with them as they drift in the direction of Saturn, darkening its leading hemisphere slightly. Once a difference in albedo, and hence in average temperature, was established between different regions of Iapetus, a thermal runaway process of water ice sublimation from warmer regions and deposition of water vapor onto colder regions ensued. Iapetus's present two-toned appearance results from the contrast between the bright, primarily ice-coated areas and regions of dark lag, the residue left behind after the loss of surface ice.
Irregular moons are small satellites with large-radius, inclined, and frequently retrograde orbits, believed to have been acquired by the parent planet through a capture process. They often occur as collisional families or groups. The precise size as well as albedo of the irregular moons are not known for sure because the moons are too small to be resolved by a telescope, although the latter is usually assumed to be quite low—around 6% (albedo of Phoebe) or less. The irregulars generally have featureless visible and near infrared spectra dominated by water absorption bands. They are neutral or moderately red in color—similar to C-type, P-type, or D-type asteroids, though they are much less red than Kuiper belt objects.[c]
The Inuit group includes five prograde outer moons that are similar enough in their distances from the planet (186–297 radii of Saturn), their orbital inclinations (45–50°) and their colors that they can be considered a group. The moons are Ijiraq, Kiviuq, Paaliaq, Siarnaq, and Tarqeq. The largest among them is Siarnaq with an estimated size of about 40 km.
The Gallic group are four prograde outer moons that are similar enough in their distance from the planet (207–302 radii of Saturn), their orbital inclination (35–40°) and their color that they can be considered a group. They are Albiorix, Bebhionn, Erriapus, and Tarvos. Tarvos, as of 2009, is the most distant of Saturn's moons with a prograde orbit. The largest among these moons is Albiorix with an estimated size of about 32 km.
The Norse (or Phoebe) group consists of 29 retrograde outer moons. They are Aegir, Bergelmir, Bestla, Farbauti, Fenrir, Fornjot, Greip, Hati, Hyrrokkin, Jarnsaxa, Kari, Loge, Mundilfari, Narvi, Phoebe, Skathi, Skoll, Surtur, Suttungr, Thrymr, Ymir, S/2004 S 7, S/2004 S 12, S/2004 S 13, S/2004 S 17, S/2006 S 1, S/2006 S 3, S/2007 S 2, and S/2007 S 3. After Phoebe, Ymir is the largest of the known retrograde irregular moons, with an estimated diameter of only 18 km. The Norse group may itself consist of several smaller subgroups.
- Phoebe, at 214 km in diameter, is by far the largest of Saturn's irregular satellites. It has a retrograde orbit and rotates on its axis every 9.3 hours. Phoebe was the first moon of Saturn to be studied in detail by Cassini, in June 2004; during this encounter Cassini was able to map nearly 90% of the moon's surface. Phoebe has a nearly spherical shape and a relatively high density of about 1.6 g/cm3. Cassini images revealed a dark surface scarred by numerous impacts—there are about 130 craters with diameters exceeding 10 km. Spectroscopic measurement showed that the surface is made of water ice, carbon dioxide, phyllosilicates, organics and possibly iron bearing minerals. Phoebe is believed to be a captured centaur that originated in the Kuiper belt. It also serves as a source of material for the largest known ring of Saturn, which darkens the leading hemisphere of Iapetus (see above).
Tables of moons
Major icy moons
| # | Numeral | Name | Pronunciation (key) | Diameter (km)[e] | Mass (×10^15 kg)[f] | Semi-major axis (km)[g] | Orbital period (d)[g][h] | Inclination[g][i] | Eccentricity | Position | Discovery year | Discoverer |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | — | S/2009 S 1 | — | ≈ 0.3 | < 0.0001 | ≈ 117000 | ≈ 0.47 | ≈ 0° | ≈ 0 | outer B Ring | 2009 | Cassini–Huygens |
| — | — | (moonlets) | — | 0.04 to 0.4 (Earhart) | < 0.0001 | ≈ 130000 | ≈ 0.55 | ≈ 0° | ≈ 0 | three 1000 km bands within A Ring | 2006 | Cassini–Huygens |
| 2 | XVIII | Pan | — | 34 × 31 × 20 | 4.95±0.75 | 133584 | +0.57505 | 0.001° | 0.000035 | in Encke Division | 1990 | M. Showalter |
| 3 | XXXV | Daphnis | — | 9 × 8 × 6 | 0.084±0.012 | 136505 | +0.59408 | ≈ 0° | ≈ 0 | in Keeler Gap | 2005 | Cassini–Huygens |
| 4 | XV | Atlas | — | 41 × 35 × 19 | 6.6±0.045 | 137670 | +0.60169 | 0.003° | 0.0012 | outer A Ring shepherd | 1980 | Voyager 2 |
| 5 | XVI | Prometheus | — | 136 × 79 × 59 | 159.5±1.5 | 139380 | +0.61299 | 0.008° | 0.0022 | inner F Ring shepherd | 1980 | Voyager 2 |
| 6 | XVII | Pandora | — | 104 × 81 × 64 | 137.1±1.9 | 141720 | +0.62850 | 0.050° | 0.0042 | outer F Ring shepherd | 1980 | Voyager 2 |
| 7 | XI | Epimetheus | — | 130 × 114 × 106 | 526.6±0.6 | 151422 | +0.69433 | 0.335° | 0.0098 | co-orbital with Janus | 1977 | J. Fountain and S. Larson |
| 8 | X | Janus | — | 203 × 185 × 153 | 1897.5±0.6 | 151472 | +0.69466 | 0.165° | 0.0068 | co-orbital with Epimetheus | 1966 | A. Dollfus |
| 9 | LIII | Aegaeon | iːˈdʒiːən | ≈ 0.5 | ≈ 0.0001 | 167500 | +0.80812 | 0.001° | 0.0002 | G Ring moonlet | 2008 | Cassini–Huygens |
| 10 | I | Mimas | — | 416 × 393 × 381 | — | — | — | — | — | — | — | — |
| 11 | XXXII | Methone | mɨˈθoʊniː | 3.2±1.2 | ≈ 0.02 | 194440 | +1.00957 | 0.007° | 0.0001 | Alkyonides | 2004 | Cassini–Huygens |
| 12 | XLIX | Anthe | ˈænθiː | ≈ 1 | ≈ 0.007 | 197700 | +1.03650 | 0.1° | 0.001 | Alkyonides | 2007 | Cassini–Huygens |
| 13 | XXXIII | Pallene | — | 6 × 6 × 4 | — | — | — | — | — | Alkyonides | — | — |
| 14 | II | Enceladus | — | 513 × 503 × 497 | 108022±101 | 237950 | +1.370218 | 0.010° | 0.0047 | Generates the E ring | 1789 | W. Herschel |
| 15 | III | Tethys | — | 1077 × 1057 × 1053 | — | — | — | — | — | — | — | — |
| 16 | XIII | Telesto | — | 33 × 24 × 20 | ≈ 9.41 | 294619 | +1.887802 | 1.158° | 0.000 | leading Tethys trojan | 1980 | B. Smith, H. Reitsema, S. Larson, and J. Fountain |
| 17 | XIV | Calypso | — | 30 × 23 × 14 | ≈ 6.3 | 294619 | +1.887802 | 1.473° | 0.000 | trailing Tethys trojan | 1980 | D. Pascu, P. Seidelmann, W. Baum, and D. Currie |
| 18 | IV | Dione | — | 1128 × 1123 × 1119 | — | — | — | — | — | — | — | — |
| 19 | XII | Helene | — | 43 × 38 × 26 | ≈ 24.46 | 377396 | +2.736915 | 0.212° | 0.0022 | leading Dione trojan | 1980 | P. Laques and J. Lecacheux |
| 20 | XXXIV | Polydeuces | — | 3 × 2 × 1 | ≈ 0.03 | 377396 | +2.736915 | 0.177° | 0.0192 | trailing Dione trojan | 2004 | Cassini–Huygens |
| 21 | V | Rhea | — | 1530 × 1526 × 1525 | — | — | — | — | — | — | — | — |
| 22 | VI | Titan ♠ | ˈtaɪtən | 5151 | 134520000±20000 | 1221930 | +15.94542 | 0.3485° | 0.0288 | — | 1655 | C. Huygens |
| 23 | VII | Hyperion | — | 360 × 266 × 205 | 5620±50 | 1481010 | +21.27661 | 0.568° | 0.123006 | in 4:3 resonance with Titan | 1848 | W. Bond |
| 24 | VIII | Iapetus | — | 1491 × 1491 × 1424 | — | — | — | — | — | — | — | — |
| 25 | XXIV | Kiviuq ‡ | ˈkɪviək | ≈ 16 | ≈ 2.79 | 11294800 | +448.16 | 49.087° | 0.3288 | Inuit group | 2000 | B. Gladman, J. Kavelaars, et al. |
| 26 | XXII | Ijiraq ‡ | ˈiː.ɨrɒk | ≈ 12 | ≈ 1.18 | 11355316 | +451.77 | 50.212° | 0.3161 | Inuit group | 2000 | B. Gladman, J. Kavelaars, et al. |
| 27 | IX | Phoebe ♣ | — | 219 × 217 × 204 | 8292±10 | 12869700 | −545.09 | 173.047° | 0.156242 | Norse group | 1899 | W. Pickering |
| 28 | XX | Paaliaq ‡ | ˈpɑːliɒk | ≈ 22 | ≈ 7.25 | 15103400 | +692.98 | 46.151° | 0.3631 | Inuit group | 2000 | B. Gladman, J. Kavelaars, et al. |
| 29 | XXVII | Skathi ♣ | ˈskɒði | ≈ 8 | ≈ 0.35 | 15672500 | −732.52 | 149.084° | 0.246 | Norse (Skathi) group | 2000 | B. Gladman, J. Kavelaars, et al. |
| 30 | XXVI | Albiorix ♦ | ˌælbiˈɒrɪks | ≈ 32 | ≈ 22.3 | 16266700 | +774.58 | 38.042° | 0.477 | Gallic group | 2000 | M. Holman |
| 31 | — | S/2007 S 2 ♣ | — | ≈ 6 | ≈ 0.15 | 16560000 | −792.96 | 176.68° | 0.2418 | Norse group | 2007 | S. Sheppard, D. Jewitt, J. Kleyna, B. Marsden |
| 32 | XXXVII | Bebhionn ♦ | bɛˈviːn, ˈvɪvi.ɒn | ≈ 6 | ≈ 0.15 | 17153520 | +838.77 | 40.484° | 0.333 | Gallic group | 2004 | S. Sheppard, D. Jewitt, J. Kleyna |
| 33 | XXVIII | Erriapus ♦ | ˌɛriˈæpəs | ≈ 10 | ≈ 0.68 | 17236900 | +844.89 | 38.109° | 0.4724 | Gallic group | 2000 | B. Gladman, J. Kavelaars, et al. |
| 34 | XLVII | Skoll ♣ | ˈskɒl, ˈskɜːl | ≈ 6 | ≈ 0.15 | 17473800 | −862.37 | 155.624° | 0.418 | Norse (Skathi) group | 2006 | S. Sheppard, D. Jewitt, J. Kleyna |
| 35 | XXIX | Siarnaq ‡ | ˈsiːɑrnək | ≈ 40 | ≈ 43.5 | 17776600 | +884.88 | 45.798° | 0.24961 | Inuit group | 2000 | B. Gladman, J. Kavelaars, et al. |
| 36 | LII | Tarqeq ‡ | ˈtɑrkeɪk | ≈ 7 | ≈ 0.23 | 17910600 | +894.86 | 49.904° | 0.1081 | Inuit group | 2007 | S. Sheppard, D. Jewitt, J. Kleyna |
| 37 | — | S/2004 S 13 ♣ | — | ≈ 6 | ≈ 0.15 | 18056300 | −905.85 | 167.379° | 0.261 | Norse group | 2004 | S. Sheppard, D. Jewitt, J. Kleyna |
| 38 | LI | Greip ♣ | ˈɡreɪp | ≈ 6 | ≈ 0.15 | 18065700 | −906.56 | 172.666° | 0.3735 | Norse group | 2006 | S. Sheppard, D. Jewitt, J. Kleyna |
| 39 | XLIV | Hyrrokkin ♣ | hɪˈrɒkɨn | ≈ 8 | ≈ 0.35 | 18168300 | −914.29 | 153.272° | 0.3604 | Norse (Skathi) group | 2006 | S. Sheppard, D. Jewitt, J. Kleyna |
| 40 | L | Jarnsaxa ♣ | jɑrnˈsæksə | ≈ 6 | ≈ 0.15 | 18556900 | −943.78 | 162.861° | 0.1918 | Norse group | 2006 | S. Sheppard, D. Jewitt, J. Kleyna |
| 41 | XXI | Tarvos ♦ | ˈtɑrvɵs | ≈ 15 | ≈ 2.3 | 18562800 | +944.23 | 34.679° | 0.5305 | Gallic group | 2000 | B. Gladman, J. Kavelaars, et al. |
| 42 | XXV | Mundilfari ♣ | ˌmʊndəlˈvɛri | ≈ 7 | ≈ 0.23 | 18725800 | −956.70 | 169.378° | 0.198 | Norse group | 2000 | B. Gladman, J. Kavelaars, et al. |
| 43 | — | S/2006 S 1 ♣ | — | ≈ 6 | ≈ 0.15 | 18930200 | −972.41 | 154.232° | 0.1303 | Norse (Skathi) group | 2006 | S. Sheppard, D.C. Jewitt, J. Kleyna |
| 44 | — | S/2004 S 17 ♣ | — | ≈ 4 | ≈ 0.05 | 19099200 | −985.45 | 166.881° | 0.226 | Norse group | 2004 | S. Sheppard, D. Jewitt, J. Kleyna |
| 45 | XXXVIII | Bergelmir ♣ | bɛərˈjɛlmɪər | ≈ 6 | ≈ 0.15 | 19104000 | −985.83 | 157.384° | 0.152 | Norse (Skathi) group | 2004 | S. Sheppard, D. Jewitt, J. Kleyna |
| 46 | XXXI | Narvi ♣ | ˈnɑrvi | ≈ 7 | ≈ 0.23 | 19395200 | −1008.45 | 137.292° | 0.320 | Norse (Narvi) group | 2003 | S. Sheppard, D. Jewitt, J. Kleyna |
| 47 | XXIII | Suttungr ♣ | ˈsʊtʊŋɡər | ≈ 7 | ≈ 0.23 | 19579000 | −1022.82 | 174.321° | 0.131 | Norse group | 2000 | B. Gladman, J. Kavelaars, et al. |
| 48 | XLIII | Hati ♣ | ˈhɑːti | ≈ 6 | ≈ 0.15 | 19709300 | −1033.05 | 163.131° | 0.291 | Norse group | 2004 | S. Sheppard, D. Jewitt, J. Kleyna |
| 49 | — | S/2004 S 12 ♣ | — | ≈ 5 | ≈ 0.09 | 19905900 | −1048.54 | 164.042° | 0.396 | Norse group | 2004 | S. Sheppard, D. Jewitt, J. Kleyna |
| 50 | XL | Farbauti ♣ | fɑrˈbaʊti | ≈ 5 | ≈ 0.09 | 19984800 | −1054.78 | 158.361° | 0.209 | Norse (Skathi) group | 2004 | S. Sheppard, D. Jewitt, J. Kleyna |
| 51 | XXX | Thrymr ♣ | ˈθrɪmər | ≈ 7 | ≈ 0.23 | 20278100 | −1078.09 | 174.524° | 0.453 | Norse group | 2000 | B. Gladman, J. Kavelaars, et al. |
| 52 | XXXVI | Aegir ♣ | ˈaɪ.ɪər | ≈ 6 | ≈ 0.15 | 20482900 | −1094.46 | 167.425° | 0.237 | Norse group | 2004 | S. Sheppard, D. Jewitt, J. Kleyna |
| 53 | — | S/2007 S 3 ♣ | — | ≈ 5 | ≈ 0.09 | 20518500 | ≈ −1100 | 177.22° | 0.130 | Norse group | 2007 | S. Sheppard, D. Jewitt, J. Kleyna |
| 54 | XXXIX | Bestla ♣ | ˈbɛstlə | ≈ 7 | ≈ 0.23 | 20570000 | −1101.45 | 147.395° | 0.5145 | Norse (Narvi) group | 2004 | S. Sheppard, D. Jewitt, J. Kleyna |
| 55 | — | S/2004 S 7 ♣ | — | ≈ 6 | ≈ 0.15 | 20576700 | −1101.99 | 165.596° | 0.5299 | Norse group | 2004 | S. Sheppard, D. Jewitt, J. Kleyna |
| 56 | — | S/2006 S 3 ♣ | — | ≈ 6 | ≈ 0.15 | 21076300 | −1142.37 | 150.817° | 0.4710 | Norse (Skathi) group | 2006 | S. Sheppard, D. Jewitt, J. Kleyna |
| 57 | XLI | Fenrir ♣ | ˈfɛnrɪər | ≈ 4 | ≈ 0.05 | 21930644 | −1212.53 | 162.832° | 0.131 | Norse group | 2004 | S. Sheppard, D. Jewitt, J. Kleyna |
| 58 | XLVIII | Surtur ♣ | ˈsɜrtər | ≈ 6 | ≈ 0.15 | 22288916 | −1242.36 | 166.918° | 0.3680 | Norse group | 2006 | S. Sheppard, D. Jewitt, J. Kleyna |
| 59 | XLV | Kari ♣ | ˈkɑri | ≈ 7 | ≈ 0.23 | 22321200 | −1245.06 | 148.384° | 0.3405 | Norse (Skathi) group | 2006 | S. Sheppard, D. Jewitt, J. Kleyna |
| 60 | XIX | Ymir ♣ | ˈɪmɪər | ≈ 18 | ≈ 3.97 | 22429673 | −1254.15 | 172.143° | 0.3349 | Norse group | 2000 | B. Gladman, J. Kavelaars, et al. |
| 61 | XLVI | Loge ♣ | ˈlɔɪ.eɪ | ≈ 6 | ≈ 0.15 | 22984322 | −1300.95 | 166.539° | 0.1390 | Norse group | 2006 | S. Sheppard, D. Jewitt, J. Kleyna |
| 62 | XLII | Fornjot ♣ | ˈfɔrnjɒt | ≈ 6 | ≈ 0.15 | 24504879 | −1432.16 | 167.886° | 0.186 | Norse group | 2004 | S. Sheppard, D. Jewitt, J. Kleyna |
| — | — | S/2004 S 6 | — | ≈ 3–5 | — | ≈ 140130 | +0.61801 | — | — | uncertain object around the F Ring | 2004 | — |
| — | — | S/2004 S 3/S 4[j] | — | ≈ 3–5 | — | ≈ 140300 | ≈ +0.619 | — | — | uncertain object around the F Ring | 2004 | — |
- Chiron, which was supposedly sighted by Hermann Goldschmidt in 1861 but never observed by anyone else.
- Themis, which was allegedly discovered in 1905 by astronomer William Pickering but never seen again. Nevertheless, it was included in numerous almanacs and astronomy books until the 1960s.
It is thought that the Saturnian system of Titan, mid-sized moons, and rings developed from a set-up closer to the Galilean moons of Jupiter, though the details are unclear. It has been proposed either that a second Titan-sized moon broke up, producing the rings and inner mid-sized moons, or that two large moons fused to form Titan, with the collision scattering icy debris that formed the mid-sized moons. On June 23, 2014, NASA claimed to have strong evidence that nitrogen in the atmosphere of Titan came from materials in the Oort cloud, associated with comets, and not from the materials that formed Saturn in earlier times.
- The mass of the rings is about the mass of Mimas, while the combined mass of Janus, Hyperion and Phoebe—the most massive of the remaining moons—is about one-third of that. The total mass of the rings and small moons is around 5.5×1019 kg.
- Inktomi was once known as "The Splat".
- The photometric color may be used as a proxy for the chemical composition of satellites' surfaces.
- A confirmed moon is given a permanent designation by the IAU consisting of a name and a Roman numeral. The nine moons that were known before 1900 (of which Phoebe is the only irregular) are numbered in order of their distance from Saturn; the rest are numbered in the order by which they received their permanent designations. Nine small moons of the Norse group and S/2009 S 1 have not yet received a permanent designation.
- The diameters and dimensions of the inner moons from Pan through Janus, Methone, Pallene, Telesto, Calypso, Helene, Hyperion and Phoebe were taken from Thomas 2010, Table 3. Diameters and dimensions of Mimas, Enceladus, Tethys, Dione, Rhea and Iapetus are from Thomas 2010, Table 1. The approximate sizes of other satellites are from the website of Scott Sheppard.
- Masses of the large moons were taken from Jacobson, 2006. Masses of Pan, Daphnis, Atlas, Prometheus, Pandora, Epimetheus, Janus, Hyperion and Phoebe were taken from Thomas, 2010, Table 3. Masses of other small moons were calculated assuming a density of 1.3 g/cm3.
- The orbital parameters were taken from Spitale, et al. 2006, IAU-MPC Natural Satellites Ephemeris Service, and NASA/NSSDC.
- Negative orbital periods indicate a retrograde orbit around Saturn (opposite to the planet's rotation).
- To Saturn's equator for the regular satellites, and to the ecliptic for the irregular satellites
- S/2004 S 4 was most likely a transient clump—it has not been recovered since the first sighting.
- "Solar System Exploration Planets Saturn: Moons: S/2009 S1". NASA. Retrieved January 17, 2010.
- Sheppard, Scott S. "The Giant Planet Satellite and Moon Page". Department of Terrestrial Magnetism at Carnegie Institution for Science. Retrieved 2008-08-28.
- Porco, C. and the Cassini Imaging Team (November 2, 2009). "S/2009 S1". IAU Circular 9091.
- Esposito, L. W. (2002). "Planetary rings". Reports On Progress In Physics 65 (12): 1741–1783. Bibcode:2002RPPh...65.1741E. doi:10.1088/0034-4885/65/12/201.
- Tiscareno, Matthew S.; Burns, J.A; Hedman, M.M; Porco, C.C (2008). "The population of propellers in Saturn's A Ring". Astronomical Journal 135 (3): 1083–1091. arXiv:0710.4547. Bibcode:2008AJ....135.1083T. doi:10.1088/0004-6256/135/3/1083.
- Nemiroff, Robert and Bonnell, Jerry (March 25, 2005). "Huygens Discovers Luna Saturni". Astronomy Picture of the Day. Retrieved March 4, 2010.
- Baalke, Ron. "Historical Background of Saturn's Rings (1655)". NASA/JPL. Retrieved March 4, 2010.
- Van Helden, Albert (1994). "Naming the satellites of Jupiter and Saturn" (PDF). The Newsletter of the Historical Astronomy Division of the American Astronomical Society (32): 1–2.
- Bond, W.C (1848). "Discovery of a new satellite of Saturn". Monthly Notices of the Royal Astronomical Society 9: 1–2. Bibcode:1848MNRAS...9....1B.
- Lassell, William (1848). "Discovery of new satellite of Saturn". Monthly Notices of the Royal Astronomical Society 8: 195–197. Bibcode:1848MNRAS...8..195L. doi:10.1093/mnras/8.9.195a.
- Pickering, Edward C (1899). "A New Satellite of Saturn". Astrophysical Journal 9: 274–276. Bibcode:1899ApJ.....9..274P. doi:10.1086/140590.
- Fountain, John W; Larson, Stephen M (1977). "A New Satellite of Saturn?". Science 197 (4306): 915–917. Bibcode:1977Sci...197..915F. doi:10.1126/science.197.4306.915. PMID 17730174.
- Uralskaya, V.S (1998). "Discovery of new satellites of Saturn". Astronomical and Astrophysical Transactions 15: 249–253. Bibcode:1998A&AT...15..249U. doi:10.1080/10556799808201777.
- Porco, C. C.; Baker, E.; Barbara, J.; et al. (2005). "Cassini Imaging Science: Initial Results on Saturn's Rings and Small Satellites" (PDF). Science 307 (5713): 1226–36. Bibcode:2005Sci...307.1226P. doi:10.1126/science.1108056. PMID 15731439.
- Robert Roy Britt (2004). "Hints of Unseen Moons in Saturn's Rings". Archived from the original on February 12, 2006. Retrieved January 15, 2011.
- Porco, C.; The Cassini Imaging Team (July 18, 2007). "S/2007 S4". IAU Circular 8857.
- Jones, G.H.; Roussos, E.; Krupp, N. et al. (2008). "The Dust Halo of Saturn's Largest Icy Moon, Rhea". Science 319 (1): 1380–84. Bibcode:2008Sci...319.1380J. doi:10.1126/science.1151524. PMID 18323452.
- Porco, C.; The Cassini Imaging Team (March 3, 2009). "S/2008 S1 (Aegaeon)". IAU Circular 9023.
- Platt, Jane; Brown, Dwayne (14 April 2014). "NASA Cassini Images May Reveal Birth of a Saturn Moon". NASA. Retrieved 14 April 2014.
- Jewitt, David; Haghighipour, Nader (2007). "Irregular Satellites of the Planets: Products of Capture in the Early Solar System" (PDF). Annual Review of Astronomy and Astrophysics 45: 261–95. arXiv:astro-ph/0703059. Bibcode:2007ARA&A..45..261J. doi:10.1146/annurev.astro.44.051905.092459. Archived from the original on 2010-02-07.
- Gladman, Brett; Kavelaars, J. J.; Holman, Matthew et al. (2001). "Discovery of 12 satellites of Saturn exhibiting orbital clustering". Nature 412 (6843): 163–166. doi:10.1038/35084032. PMID 11449267.
- David Jewitt (May 3, 2005). "12 New Moons For Saturn". University of Hawaii. Retrieved April 27, 2010.
- Emily Lakdawalla (May 3, 2005). "Twelve New Moons For Saturn". Retrieved March 4, 2010.
- Sheppard, S. S.; Jewitt, D. C.; and Kleyna, J. (June 30, 2006). "Satellites of Saturn". IAU Circular No 8727. Retrieved January 2, 2010.
- Sheppard, S. S.; Jewitt, D. C.; and Kleyna, J. (May 11, 2007). "S/2007 S 1, S/2007 S 2, AND S/2007 S 3". IAU Circular No 8836. Retrieved January 2, 2010.
- "Planet and Satellite Names and Discoverers". Gazetteer of Planetary Nomenclature. USGS Astrogeology. July 21, 2006. Retrieved 2006-08-06.
- Grav, Tommy; Bauer, James (2007). "A deeper look at the colors of the Saturnian irregular satellites". Icarus 191 (1): 267–285. arXiv:astro-ph/0611590. Bibcode:2007Icar..191..267G. doi:10.1016/j.icarus.2007.04.020.
- Thomas, P. C. (July 2010). "Sizes, shapes, and derived properties of the saturnian satellites after the Cassini nominal mission". Icarus 208 (1): 395–401. Bibcode:2010Icar..208..395T. doi:10.1016/j.icarus.2010.01.025.
- Jacobson, R. A.; Antreasian, P. G.; Bordi, J. J.; Criddle, K. E.; Ionasescu, R.; Jones, J. B.; Mackenzie, R. A.; Meek, M. C.; Parcher, D.; Pelletier, F. J.; Owen, Jr., W. M.; Roth, D. C.; Roundhill, I. M.; Stauch, J. R. (December 2006). "The Gravity Field of the Saturnian System from Satellite Observations and Spacecraft Tracking Data". The Astronomical Journal 132 (6): 2520–2526. Bibcode:2006AJ....132.2520J. doi:10.1086/508812.
- Williams, David R. (August 21, 2008). "Saturnian Satellite Fact Sheet". NASA (National Space Science Data Center). Retrieved April 27, 2010.
- Porco, C. C.; Thomas, P. C.; Weiss, J. W.; Richardson, D. C. (2007). "Saturn's Small Inner Satellites:Clues to Their Origins" (pdf). Science 318 (5856): 1602–1607. Bibcode:2007Sci...318.1602P. doi:10.1126/science.1143977. PMID 18063794.
- Sheppard, Scott S. "Saturn's Known Satellites". Retrieved January 7, 2010.
- "A Small Find Near Equinox". NASA/JPL. August 7, 2009. Retrieved January 2, 2010.
- Tiscareno, Matthew S.; Burns, Joseph A.; Hedman, Matthew M.; Porco, Carolyn C.; Weiss, John W.; Dones, Luke; Richardson, Derek C.; Murray, Carl D. (2006). "100-metre-diameter moonlets in Saturn's A ring from observations of 'propeller' structures". Nature 440 (7084): 648–650. Bibcode:2006Natur.440..648T. doi:10.1038/nature04581. PMID 16572165.
- Sremčević, Miodrag; Schmidt, Jürgen; Salo, Heikki; Seiß, Martin; Spahn, Frank; Albers, Nicole (2007). "A belt of moonlets in Saturn's A ring". Nature 449 (7165): 1019–21. Bibcode:2007Natur.449.1019S. doi:10.1038/nature06224. PMID 17960236.
- Murray, Carl D.; Beurle, Kevin; Cooper, Nicholas J.; et al. (2008). "The determination of the structure of Saturn's F ring by nearby moonlets". Nature 453 (7196): 739–744. Bibcode:2008Natur.453..739M. doi:10.1038/nature06999. PMID 18528389.
- Hedman, M. M.; J. A. Burns; M. S. Tiscareno; C. C. Porco; G. H. Jones; E. Roussos; N. Krupp; C. Paranicas; S. Kempf (2007). "The Source of Saturn's G Ring" (PDF). Science 317 (5838): 653–656. Bibcode:2007Sci...317..653H. doi:10.1126/science.1143964. PMID 17673659.
- Spitale, J. N.; Jacobson, R. A.; Porco, C. C.; Owen, W. M., Jr. (2006). "The orbits of Saturn's small satellites derived from combined historic and Cassini imaging observations". The Astronomical Journal 132 (2): 692–710. Bibcode:2006AJ....132..692S. doi:10.1086/505206.
- Thomas, P.C; Burns, J.A.; Helfenstein, P.; et al. (2007). "Shapes of the saturnian icy satellites and their significance" (PDF). Icarus 190 (2): 573–584. Bibcode:2007Icar..190..573T. doi:10.1016/j.icarus.2007.03.012.
- Moore, Jeffrey M.; Schenk, Paul M.; Bruesch, Lindsey S.; Asphaug, Erik; McKinnon, William B. (October 2004). "Large impact features on middle-sized icy satellites" (PDF). Icarus 171 (2): 421–443. Bibcode:2004Icar..171..421M. doi:10.1016/j.icarus.2004.05.009.
- Porco, C. C.; Helfenstein, P.; Thomas, P. C.; Ingersoll, A. P.; Wisdom, J.; West, R.; Neukum, G.; Denk, T.; Wagner, R. (10 March 2006). "Cassini Observes the Active South Pole of Enceladus". Science 311 (5766): 1393–1401. Bibcode:2006Sci...311.1393P. doi:10.1126/science.1123013. PMID 16527964.
- Pontius, D.H.; Hill, T.W. (2006). "Enceladus: A significant plasma source for Saturn's magnetosphere" (PDF). Journal of Geophysical Research 111 (A9): A09214. Bibcode:2006JGRA..11109214P. doi:10.1029/2006JA011674.
- Wagner, R. J.; Neukum, G.; Stephan, K.; Roatsch; Wolf; Porco (2009). "Stratigraphy of Tectonic Features on Saturn's Satellite Dione Derived from Cassini ISS Camera Data". Lunar and Planetary Science XL: 2142. Bibcode:2009LPI....40.2142W.
- Schenk, P. M.; Moore, J. M. (2009). "Eruptive Volcanism on Saturn's Icy Moon Dione". Lunar and Planetary Science XL: 2465. Bibcode:2009LPI....40.2465S.
- "Cassini Images Ring Arcs Among Saturn's Moons (Cassini Press Release)". Ciclops.org. September 5, 2008. Retrieved January 1, 2010.
- Lakdawalla, E. 2012.
- Matthew S. Tiscareno, Joseph A. Burns, Jeffrey N. Cuzzi, Matthew M. Hedman (2010). "Cassini imaging search rules out rings around Rhea". Geophysical Research Letters 37 (14): L14205. arXiv:1008.1764. Bibcode:2010GeoRL..3714205T. doi:10.1029/2010GL043663.
- Wagner, R. J.; Neukum, G.; Giese, B.; Roatsch; Denk; Wolf; Porco (2008). "Geology of Saturn's Satellite Rhea on the Basis of the High-Resolution Images from the Targeted Flyby 049 on Aug. 30, 2007". Lunar and Planetary Science. XXXIX: 1930. Bibcode:2008LPI....39.1930W.
- Schenk, Paul M.; McKinnon, W. B. (2009). "Global Color Variations on Saturn's Icy Satellites, and New Evidence for Rhea's Ring". American Astronomical Society (American Astronomical Society, DPS meeting #41, #3.03) 41. Bibcode:2009DPS....41.0303S.
- "Rhea:Inktomi". USGS—Gazetteer of Planetary Nomenclature. Retrieved April 28, 2010.
- "Rhea's Bright Splat". CICLOPS. June 5, 2005. Retrieved April 28, 2010.
- Porco, Carolyn C.; Baker, Emily; Barbara, John et al. (2005). "Imaging of Titan from the Cassini spacecraft" (PDF). Nature 434 (7030): 159–168. Bibcode:2005Natur.434..159P. doi:10.1038/nature03436. PMID 15758990.
- López-Puertas, Manuel (June 6, 2013). "PAH's in Titan's Upper Atmosphere". CSIC. Retrieved June 6, 2013.
- Dyches, Preston; Clavin, Whitney (June 23, 2014). "Titan's Building Blocks Might Pre-date Saturn" (Press release). Jet Propulsion Laboratory. Retrieved June 28, 2014.
- Lopes, R.M.C.; Mitchell, K.L.; Stofan, E.R. et al. (2007). "Cryovolcanic features on Titan's surface as revealed by the Cassini Titan Radar Mapper" (PDF). Icarus 186 (2): 395–412. Bibcode:2007Icar..186..395L. doi:10.1016/j.icarus.2006.09.006.
- Lorenz, R.D.; Wall, S.; Radebaugh, J. et al. (2006). "The Sand Seas of Titan: Cassini RADAR Observations of Longitudinal Dunes". Science 312 (5774): 724–27. Bibcode:2006Sci...312..724L. doi:10.1126/science.1123257. PMID 16675695.
- Stofan, E.R.; Elachi, C.; Lunine, J.I. et al. (2007). "The lakes of Titan" (PDF). Nature 445 (7123): 61–64. Bibcode:2007Natur.445...61S. doi:10.1038/nature05438. PMID 17203056.
- "Titan:Kraken Mare". USGS—Gazetteer of Planetary Nomenclature. Retrieved January 5, 2010.
- Dyches, Preston; Brown, Dwayne (July 2, 2014). "Ocean on Saturn Moon Could be as Salty as the Dead Sea". NASA. Retrieved July 2, 2014.
- Mitri, Giuseppe; Meriggiola, Rachele; Hayes, Alex; Lefevre, Axel; Tobie, Gabriel; Genova, Antonio; Lunine, Jonathan I.; Zebker, Howard (July 1, 2014). "Shape, topography, gravity anomalies and tidal deformation of Titan". Icarus 236: 169–177. Bibcode:2014Icar..236..169M. doi:10.1016/j.icarus.2014.03.018. Retrieved July 2, 2014.
- Thomas, P. C.; Armstrong, J. W.; Asmar, S. W.; et al. (2007). "Hyperion's sponge-like appearance". Nature 448 (7149): 50–53. Bibcode:2007Natur.448...50T. doi:10.1038/nature05779. PMID 17611535.
- Thomas, P.C; Black, G. J.; Nicholson, P. D. (1995). "Hyperion: Rotation, Shape, and Geology from Voyager Images". Icarus 117 (1): 128–148. Bibcode:1995Icar..117..128T. doi:10.1006/icar.1995.1147.
- Porco, C.C.; Baker, E.; Barbarae, J. et al. (2005). "Cassini Imaging Science: Initial Results on Phoebe and Iapetus". Science 307 (5713): 1237–42. Bibcode:2005Sci...307.1237P. doi:10.1126/science.1107981. PMID 15731440.
- Verbiscer, Anne J.; Skrutskie, Michael F.; Hamilton, Douglas P.; et al. (2009). "Saturn's largest ring". Nature 461 (7267): 1098–1100. Bibcode:2009Natur.461.1098V. doi:10.1038/nature08515. PMID 19812546.
- Denk, T.; et al. (2009-12-10). "Iapetus: Unique Surface Properties and a Global Color Dichotomy from Cassini Imaging". Science (AAAS) 326 (5964): 435–9. Bibcode:2010Sci...327..435D. doi:10.1126/science.1177088. PMID 20007863. Retrieved 2009-12-19.
- Spencer, J. R.; Denk, T. (2009-12-10). "Formation of Iapetus' Extreme Albedo Dichotomy by Exogenically Triggered Thermal Ice Migration". Science (AAAS) 326 (5964): 432–5. Bibcode:2010Sci...327..432S. doi:10.1126/science.1177132. PMID 20007862. Retrieved 2009-12-19.
- Giese, Bernd; Neukum, Gerhard; Roatsch, Thomas et al. (2006). "Topographic modeling of Phoebe using Cassini images" (PDF). Planetary and Space Science 54 (12): 1156–66. Bibcode:2006P&SS...54.1156G. doi:10.1016/j.pss.2006.05.027.
- "Natural Satellites Ephemeris Service". IAU: Minor Planet Center. Retrieved 2011-01-08.
- Schlyter, Paul (2009). "Saturn's Ninth and Tenth Moons". Views of the Solar System (Calvin J. Hamilton). Retrieved January 5, 2010.
- Canup, R. (December 2010). "Origin of Saturn's rings and inner moons by mass removal from a lost Titan-sized satellite". Nature 468 (7326): 943–6. doi:10.1038/nature09661. PMID 21151108.
- E. Asphaug and A. Reufer. Middle sized moons as a consequence of Titan’s accretion. Icarus.
- "Simulation showing the position of Saturn's Moon". Retrieved 26 May 2010.
- "Saturn's Rings". NASA's Solar System Exploration. Retrieved 26 May 2010.
- "Saturn's Moons". Astronomy Cast episode No. 61, includes full transcript. Retrieved 26 May 2010.
- Carolyn Porco. Fly me to the moons of Saturn. Retrieved 26 May 2010.
- "The Top 10 Largest Planetary Moons". |
Credit: NASA's Goddard Space Flight Center
Imagine a piece of ice 1,000 miles long, 400 miles wide, and 2 miles thick in the center. That's the Greenland ice sheet. But that island-sized piece of ice is melting, so NASA researchers are flying to the Arctic this week to learn more about the nature of those changes.
Researchers led by William Krabill of NASA's Wallops Flight Facility in Wallops Island, Va., embark this week on a month-long airborne campaign to measure ice sheet and glacier thickness. They are using NASA's P-3B aircraft - designed for heavy lifting and low-altitude flying - outfitted with an array of science instruments. The plane is scheduled to transit March 30 from Virginia to Thule Air Base, Greenland. Weather permitting, the P-3B will make near-daily 8-hour flights over Greenland while pointing laser and radar instruments at targets until the mission's end on May 7.
Nearly every spring since 1991, Krabill has flown NASA research planes about 2,000 feet over Greenland to collect measurements of ice thickness. Now, as Krabill and colleagues return to update their measurements, their mission has become more extensive and more urgent because of global interest in the Arctic and the aging of a key ice-observing NASA satellite.
Measurements recorded by the radars and lasers will be compared and calibrated with measurements from the Ice, Cloud and land Elevation Satellite (ICESat), which makes regular, large-scale surface elevation measurements of polar ice sheets. Launched in January 2003, ICESat is already three years beyond its primary mission lifetime, so NASA scientists and engineers are making plans to bridge the anticipated gap until the launch of ICESat-II several years from now.
"It's research like this on sea ice and the Greenland ice sheet that we use to understand how the polar regions are connected to global climate change and discover what changes are going on in atmospheric and ocean circulations," said Tom Wagner, cryosphere program manager at NASA Headquarters in Washington, D.C.
Krabill pioneered observing techniques that have created a continuous record of ice sheet changes. He first came to Wallops as a summer student in 1967, and eventually worked with a group of engineers on early radar and laser systems and on research uses for the Global Positioning System (GPS). Krabill is credited with being the first to combine the two technologies and put them on an airplane to measure changes in ice thickness.
"I realized the capability of the instruments and saw a research need we could fulfill," Krabill said.
So far, flights led by Krabill have found evidence that, in general, ice along Greenland's coast is thinning while some areas inland are thickening. Still, the net change points to an overall loss. There's enough ice and snow in Greenland to raise sea level by about 7 meters (23 feet) if it were to all melt.
To determine long-term trends in the ice, scientists need sustained, highly accurate and well-calibrated measurements of thickness. Past and present observations combined with climate models are critical to understanding the future behavior of the Greenland ice sheet.
To achieve the thickness measurements, researchers use a combination of laser and radar instruments. The Airborne Topographic Mapper pulses laser light at the ground in circular scans; the reflections return to the aircraft and are converted into elevation maps of the ice surface. Meanwhile, the Pathfinder Airborne Radar Ice Sounder instrument, to be flown by researchers from the Johns Hopkins University Applied Physics Laboratory, emits radio signals that penetrate and "see" all the way through the ice, measuring the elevation of the land surface below. By combining elevation data for the top and base of the ice, and taking into account the aircraft's position using precise GPS data, researchers can determine ice thickness at any given location.
A similar technique will be used to measure the thickness of a different target - sea ice floating around Greenland and across the Arctic Ocean during a flight to Fairbanks, Alaska. Combining elevation data for the top of sea ice with sea level, researchers can use the known density difference between the sea and ice to estimate sea ice thickness.
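The arithmetic behind both measurements can be sketched in a few lines. The following Python sketch uses invented illustrative numbers, not mission data, and simplifies the sea ice case by ignoring snow load:

RHO_WATER = 1025.0  # typical density of seawater, kg/m^3
RHO_ICE = 917.0     # typical density of sea ice, kg/m^3

def ice_sheet_thickness(surface_elevation_m, bed_elevation_m):
    # Laser altimetry gives the surface; ice-penetrating radar gives the bed.
    return surface_elevation_m - bed_elevation_m

def sea_ice_thickness(freeboard_m):
    # Buoyancy: floating ice with freeboard f has thickness f * rho_w / (rho_w - rho_i).
    return freeboard_m * RHO_WATER / (RHO_WATER - RHO_ICE)

print(ice_sheet_thickness(2450.0, 310.0))  # 2140.0 m of ice
print(round(sea_ice_thickness(0.3), 1))    # about 2.8 m of sea ice from 0.3 m of freeboard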
"The big fear is that a lot of the multiyear ice is gone," Wagner said. "We hear stories about sea ice growing back. Well, it has grown back every winter, but it's really thin and may not last during the summer."
Krabill and colleagues will be joined on the flights by researchers from the University of Kansas, who are flying a "snow radar" that measures how snow builds up over time on ice, how that layer becomes compacted, and how it is changing.
The P-3B will fly routes that take it directly under the path of ICESat, allowing the satellite and plane to measure the same features. Each has its benefits: the satellite provides regular, continental-scale coverage of Greenland and hard-to-reach regions like Antarctica, while the aircraft can make more detailed surveys of areas where scientists expect to see rapid change.
"We need to do both because they both work together," Wagner said. Comparing the data collected simultaneously by aircraft and satellite also will help researchers use future aircraft flights to bridge the anticipated gap in satellite coverage should the ICESat mission end before ICESat-II is launched.
> NASA Mission Checks Health of Greenland's Ice Sheet and Glaciers
> NASA's Cryospheric Sciences Branch
> NASA P-3
> NASA Investigator Receives Prestigious Remote Sensing Award |
Python is a very versatile language for dealing with numerical data. It also supports working on both real and imaginary numbers. In this tutorial, you’ll learn more about imaginary numbers and how to work with them in Python.
Initialize a Complex Number in Python
Complex numbers are comprised of a real part and an imaginary part. In Python, the imaginary part is expressed by adding the suffix j (or J) after a number.
A complex number can be created easily by directly assigning a value with real and imaginary parts to a variable. The example code below demonstrates how you can create a complex number in Python:
a = 8 + 5j
print(type(a))
We can also use the built-in complex() function to convert two given real numbers into a complex number:
a = 8
b = 5
c = complex(a, b)
print(type(c))
Now, the other half of the article will focus more on working with imaginary numbers in Python.
Use the Complex Number Attributes and Functions in Python
Complex numbers have a few built-in accessors that can be used for general information.
For example, to access the real part of a complex number, we can use the built-in real attribute, and similarly the imag attribute to access the imaginary part. Additionally, we can find the conjugate of a complex number using the conjugate() method.
a = 8 + 5j
print('Real Part = ', a.real)
print('Imaginary Part = ', a.imag)
print('Conjugate = ', a.conjugate())
Real Part = 8.0
Imaginary Part = 5.0
Conjugate = (8-5j)
Use the Regular Mathematical Operations on a Complex Number in Python
You can do basic mathematical operations like addition and multiplication on complex numbers in Python. The following code implements simple mathematical procedures on two given complex numbers.
a = 8 + 5j
b = 10 + 2j

# Adding imaginary part of both numbers
c = (a.imag + b.imag)
print(c)

# Simple multiplication of both complex numbers
print('after multiplication = ', a*b)
7.0
after multiplication = (70+66j)
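Division and the built-in abs() function also work on complex numbers; abs() returns the magnitude. This short example is an addition to the original tutorial:

a = 8 + 5j
b = 10 + 2j

# Division is full complex division, computed via the conjugate of the denominator
print(a / b)   # (0.8653846153846154+0.3269230769230769j)
print(abs(a))  # magnitude sqrt(8**2 + 5**2), approximately 9.434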
cmath Module Functions on Complex Numbers
The cmath module is a special module that provides access to several functions meant for use with complex numbers. This module consists of a wide variety of functions. Some notable ones are the phase of a complex number, power and log functions, trigonometric functions, and hyperbolic functions.
The cmath module also includes a few constants, such as pi, e, positive infinity, and NaN, which are used in calculations.
The following code implements some of the cmath module functions on a complex number in Python:
import cmath

a = 8 + 5j

ph = cmath.phase(a)
print('Phase:', ph)
print('e^a is:', cmath.exp(a))
print('sine value of complex no.:\n', cmath.sin(a))
print('Hyperbolic sine is: \n', cmath.sinh(a))
Phase: 0.5585993153435624
e^a is: (845.5850573783163-2858.5129755252788j)
sine value of complex no.:
(73.42022455449552-10.796569647775932j)
Hyperbolic sine is:
(422.7924811101271-1429.2566486042679j)
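The cmath module can also convert between rectangular and polar coordinates with cmath.polar() and cmath.rect(). This round-trip example is an addition to the tutorial:

import cmath

a = 8 + 5j

r, phi = cmath.polar(a)    # magnitude and phase
print(r, phi)              # 9.433981132056603 0.5585993153435624
print(cmath.rect(r, phi))  # approximately (8+5j), up to floating-point rounding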
numpy.array() Function to Store Imaginary Numbers in Arrays in Python
NumPy is an abbreviation for Numerical Python. It's a library that deals with arrays and provides functions for operating on them. As its name suggests, the numpy.array() function is used to create an array. The program below demonstrates how you can create an array of complex numbers in Python:
import numpy as np

arr = np.array([8+5j, 10+2j, 4+3j])
print(arr)
[8.+5.j 10.+2.j 4.+3.j]
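NumPy also provides vectorized equivalents of the accessors shown earlier. This sketch, an addition using standard NumPy functions, applies them to the whole array at once:

import numpy as np

arr = np.array([8+5j, 10+2j, 4+3j])

print(arr.real)      # element-wise real parts
print(arr.imag)      # element-wise imaginary parts
print(np.conj(arr))  # element-wise conjugates
print(np.abs(arr))   # element-wise magnitudes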
Complex numbers are one of the three built-in numeric types that Python provides for storing numerical data, alongside integers and floats, and they are an essential part of Python programming. You can perform a wide variety of operations on complex numbers with the Python programming language.
A minimum wage is the lowest remuneration that employers may legally pay to workers. Equivalently, it is the price floor below which workers may not sell their labor. Although minimum wage laws are in effect in many jurisdictions, differences of opinion exist about the benefits and drawbacks of a minimum wage. Supporters of the minimum wage say it increases the standard of living of workers, reduces poverty, reduces inequality, boosts morale and forces businesses to be more "efficient". In contrast, opponents of the minimum wage say it increases poverty, increases unemployment (particularly among unskilled or inexperienced workers) and is damaging to businesses, because excessively high minimum wages require businesses to raise the prices of their product or service to accommodate the extra expense of paying a higher wage.
- 1 History
- 2 Minimum wage laws
- 3 Economics models
- 4 Empirical studies
- 5 Debate over consequences
- 6 Surveys of economists
- 7 Alternatives
- 8 US movement
- 9 See also
- 10 Notes
- 11 Further reading
- 12 External links
Modern minimum wage laws trace their origin to the Ordinance of Labourers (1349), which was a decree by King Edward III that set a maximum wage for laborers in medieval England. King Edward III, who was a wealthy landowner, was dependent, like his lords, on serfs to work the land. In the autumn of 1348, the Black Plague reached England and decimated the population. The severe shortage of labor caused wages to soar and encouraged King Edward III to set a wage ceiling. Subsequent amendments to the ordinance, such as the Statute of Labourers (1351), increased the penalties for paying a wage above the set rates.
While the laws governing wages initially set a ceiling on compensation, they were eventually used to set a living wage. An amendment to the Statute of Labourers in 1389 effectively fixed wages to the price of food. As time passed, the Justice of the Peace, who was charged with setting the maximum wage, also began to set formal minimum wages. The practice was eventually formalized with the passage of the Act Fixing a Minimum Wage in 1604 by King James I for workers in the textile industry.
By the early 19th century, the Statute of Labourers was repealed as an increasingly capitalistic England embraced laissez-faire policies which disfavored regulation of wages (whether upper or lower limits). The 19th century subsequently saw significant labor unrest affect many industrial nations. As trade unions were decriminalized during the century, attempts to control wages through collective agreement were made. However, this meant that a uniform minimum wage was not possible. In Principles of Political Economy in 1848, John Stuart Mill argued that because of the collective action problems that workers faced in organisation, it was a justified departure from laissez-faire policies (or freedom of contract) to regulate people's wages and hours by law.
It was not until the 1890s that the first modern legislative attempts to regulate minimum wages were seen in New Zealand and Australia. The movement for a minimum wage was initially focused on stopping sweatshop labor and controlling the proliferation of sweatshops in manufacturing industries. The sweatshops employed large numbers of women and young workers, paying them what were considered to be substandard wages. The sweatshop owners were thought to have unfair bargaining power over their employees, and a minimum wage was proposed as a means to make them pay fairly. Over time, the focus changed to helping people, especially families, become more self-sufficient.
Minimum wage laws
The first modern national minimum wage law was enacted by the government of New Zealand in 1894, followed by Australia in 1896 and the United Kingdom in 1909. In the United States, statutory minimum wages were first introduced nationally in 1938, and they were reintroduced and expanded in the United Kingdom in 1998. There is now legislation or binding collective bargaining regarding minimum wage in more than 90 percent of all countries. In the European Union, 22 member states out of 28 currently have national minimum wages. Other countries, such as Sweden, Finland, Denmark, Switzerland, Austria, and Italy, have no minimum wage laws, but rely on employer groups and trade unions to set minimum earnings through collective bargaining.
Minimum wage rates vary greatly across many different jurisdictions, not only in setting a particular amount of money—for example $7.25 per hour ($14,500 per year) under certain US state laws (or $2.13 for employees who receive tips, which is known as the tipped minimum wage), $9.47 in the US state of Washington, or £6.50 (for those aged 21+) in the United Kingdom—but also in terms of which pay period (for example Russia and China set monthly minimum wages) or the scope of coverage. The American federal minimum wage currently stands at $7.25 per hour. However, some states, such as Louisiana and Tennessee, have no state minimum wage law of their own, and others, such as Georgia and Wyoming, set state minimums below the federal rate. Some jurisdictions even allow employers to count tips given to their workers as credit towards minimum wage levels. India was one of the first developing countries to introduce a minimum wage policy; it also has one of the most complicated systems, with more than 1,200 minimum wage rates.
Informal minimum wages
Customs and extra-legal pressures from governments or labor unions can produce a de facto minimum wage. So can international public opinion, by pressuring multinational companies to pay Third World workers wages usually found in more industrialized countries. The latter situation in Southeast Asia and Latin America was publicized in the 2000s, but it existed with companies in West Africa in the middle of the twentieth century.
Setting minimum wage
Among the indicators that might be used to establish an initial minimum wage rate are ones that minimize the loss of jobs while preserving international competitiveness. Among these are general economic conditions as measured by real and nominal gross domestic product; inflation; labor supply and demand; wage levels, distribution and differentials; employment terms; productivity growth; labor costs; business operating costs; the number and trend of bankruptcies; economic freedom rankings; standards of living and the prevailing average wage rate.
In the business sector, concerns include the expected increased cost of doing business, threats to profitability, rising levels of unemployment (and subsequent higher government expenditure on welfare benefits raising tax rates), and the possible knock-on effects to the wages of more experienced workers who might already be earning the new statutory minimum wage, or slightly more. Among workers and their representatives, political considerations weigh in as labor leaders seek to win support by demanding the highest possible rate. Other concerns include purchasing power, inflation indexing and standardized working hours.
In the United States, the minimum wage was promulgated by the Fair Labor Standards Act of 1938. According to the Economic Policy Institute, the minimum wage in the United States would have been $18.28 in 2013 if it had kept pace with labor productivity. To adjust for increased rates of worker productivity, proposals have been made to raise the minimum wage to $22 (or more) an hour.
Supply and demand
An analysis of supply and demand of the type shown in many mainstream economics textbooks implies that by mandating a price floor above the equilibrium wage, minimum wage laws should cause unemployment. This is because a greater number of people are willing to work at the higher wage while a smaller number of jobs will be available at the higher wage. Companies can be more selective in those whom they employ thus the least skilled and least experienced will typically be excluded. An imposition or increase of a minimum wage will generally only affect employment in the low-skill labor market, as the equilibrium wage is already at or below the minimum wage, whereas in higher skill labor markets the equilibrium wage is too high for a change in minimum wage to affect employment.
According to the supply and demand model shown in many textbooks on economics, increasing the minimum wage decreases the employment of minimum-wage workers. One such textbook says:
If a higher minimum wage increases the wage rates of unskilled workers above the level that would be established by market forces, the quantity of unskilled workers employed will fall. The minimum wage will price the services of the least productive (and therefore lowest-wage) workers out of the market. …The direct results of minimum wage legislation are clearly mixed. Some workers, most likely those whose previous wages were closest to the minimum, will enjoy higher wages. This is known as the "ripple effect". The ripple effect shows that when you increase the minimum wage, the wages of all others will consequently increase due to the need for relativity. Others, particularly those with the lowest prelegislation wage rates, will be unable to find work. They will be pushed into the ranks of the unemployed or out of the labor force. Some argue that by increasing the federal minimum wage, however, the economy will be adversely affected due to small businesses not being able to keep up with the need to subsequently increase all workers' wages.
The textbook illustrates the point with a supply and demand diagram similar to the one above. In the diagram it is assumed that workers are willing to labor for more hours if paid a higher wage. Economists graph this relationship with the wage on the vertical axis and the quantity (hours) of labor supplied on the horizontal axis. Since higher wages increase the quantity supplied, the supply of labor curve is upward sloping, and is shown as a line moving up and to the right.
A firm's cost is a function of the wage rate. It is assumed that the higher the wage, the fewer hours an employer will demand of an employee. This is because, as the wage rate rises, it becomes more expensive for firms to hire workers and so firms hire fewer workers (or hire them for fewer hours). The demand of labor curve is therefore shown as a line moving down and to the right.
Combining the demand and supply curves for labor allows us to examine the effect of the minimum wage. We will start by assuming that the supply and demand curves for labor will not change as a result of raising the minimum wage. This assumption has been questioned. If no minimum wage is in place, workers and employers will continue to adjust the quantity of labor supplied according to price until the quantity of labor demanded is equal to the quantity of labor supplied, reaching equilibrium price, where the supply and demand curves intersect. Minimum wage behaves as a classical price floor on labor. Standard theory says that, if set above the equilibrium price, more labor will be willing to be provided by workers than will be demanded by employers, creating a surplus of labor, i.e. unemployment.
In other words, the simplest and most basic economics says this about commodities like labor (and wheat, for example): Artificially raising the price of the commodity tends to cause the supply of it to increase and the demand for it to lessen. The result is a surplus of the commodity. When there is a wheat surplus, the government buys it. Since the government does not hire surplus labor, the labor surplus takes the form of unemployment, which tends to be higher with minimum wage laws than without them.
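To make the textbook mechanics concrete, here is a minimal numerical sketch with made-up linear curves (not estimates from any study) showing how a floor set above equilibrium produces a surplus of labor:

# Hypothetical linear labor market, hours as a function of the hourly wage
def labor_supplied(wage):
    return 100 * wage           # upward-sloping supply

def labor_demanded(wage):
    return 2000 - 100 * wage    # downward-sloping demand

equilibrium_wage = 10.0         # solves 100*w == 2000 - 100*w
minimum_wage = 12.0             # price floor set above equilibrium

surplus = labor_supplied(minimum_wage) - labor_demanded(minimum_wage)
print(surplus)  # 400: hours offered (1200) exceed hours hired (800)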
So the basic theory says that raising the minimum wage helps workers whose wages are raised, and hurts people who are not hired (or lose their jobs) because companies cut back on employment. But proponents of the minimum wage hold that the situation is much more complicated than the basic theory can account for. One complicating factor is possible monopsony in the labor market, whereby the individual employer has some market power in determining wages paid. Thus it is at least theoretically possible that the minimum wage may boost employment. Though single employer market power is unlikely to exist in most labor markets in the sense of the traditional 'company town,' asymmetric information, imperfect mobility, and the personal element of the labor transaction give some degree of wage-setting power to most firms.
Criticism of the neoclassical model
The argument that a minimum wage decreases employment is based on a simple supply and demand model of the labor market. A number of economists (for example Pierangelo Garegnani, Robert L. Vienneau, and Arrigo Opocher & Ian Steedman), building on the work of Piero Sraffa, argue that that model, even given all its assumptions, is logically incoherent. Michael Anyadike-Danes and Wynne Godley argue, based on simulation results, that little of the empirical work done with the textbook model constitutes a potentially falsifiable theory, and consequently empirical evidence hardly exists for that model. Graham White argues, partially on the basis of Sraffianism, that the policy of increased labor market flexibility, including the reduction of minimum wages, does not have an "intellectually coherent" argument in economic theory.
Gary Fields, Professor of Labor Economics and Economics at Cornell University, argues that the standard textbook model for the minimum wage is ambiguous, and that the standard theoretical arguments incorrectly measure only a one-sector market. Fields says a two-sector market, where "the self-employed, service workers, and farm workers are typically excluded from minimum-wage coverage... [and with] one sector with minimum-wage coverage and the other without it [and possible mobility between the two]," is the basis for better analysis. Through this model, Fields shows the typical theoretical argument to be ambiguous and says "the predictions derived from the textbook model definitely do not carry over to the two-sector case. Therefore, since a non-covered sector exists nearly everywhere, the predictions of the textbook model simply cannot be relied on."
An alternate view of the labor market has low-wage labor markets characterized as monopsonistic competition, wherein buyers (employers) have significantly more market power than do sellers (workers). This monopsony could be a result of intentional collusion between employers, or of naturalistic factors such as segmented markets, search costs, information costs, imperfect mobility and the personal element of labor markets. In such a case a simple supply and demand graph would not yield the market-clearing quantity of labor and wage rate. This is because, while the upward-sloping aggregate labor supply would remain unchanged, monopsonistic employers would use, instead of the labor supply curve shown in a supply and demand diagram, a steeper upward-sloping curve corresponding to marginal expenditures to yield the intersection with the supply curve, resulting in a wage rate lower than would be the case under competition. The amount of labor sold would also be lower than the competitive optimal allocation.
Such a case is a type of market failure and results in workers being paid less than their marginal value. Under the monopsonistic assumption, an appropriately set minimum wage could increase both wages and employment, with the optimal level being equal to the marginal product of labor. This view emphasizes the role of minimum wages as a market regulation policy akin to antitrust policies, as opposed to an illusory "free lunch" for low-wage workers.
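A small numerical sketch of the monopsony case, again with invented numbers: with a linear inverse labor supply w(L) = a + b*L, the monopsonist's marginal expenditure is a + 2*b*L, so it hires fewer workers at a lower wage than a competitive market would; a minimum wage set between the monopsony wage and the competitive wage can raise both the wage and employment:

MRP = 10.0        # marginal revenue product of labor, assumed constant
a, b = 2.0, 0.01  # inverse labor supply: w(L) = a + b*L

# Monopsonist equates marginal expenditure (a + 2*b*L) with MRP
L_monopsony = (MRP - a) / (2 * b)   # 400 workers
w_monopsony = a + b * L_monopsony   # paid 6.0, below their MRP of 10.0

# Competitive benchmark equates the supply wage with MRP
L_competitive = (MRP - a) / b       # 800 workers at a wage of 10.0

# A floor at 8.0 makes marginal expenditure flat up to the supply curve,
# so the monopsonist hires everyone willing to work at that wage
w_floor = 8.0
L_with_floor = (w_floor - a) / b    # 600 workers: wage and employment both rise
print(L_monopsony, w_monopsony, L_with_floor)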
Another reason minimum wage may not affect employment in certain industries is that the demand for the product the employees produce is highly inelastic. For example, if management is forced to increase wages, management can pass on the increase in wage to consumers in the form of higher prices. Since demand for the product is highly inelastic, consumers continue to buy the product at the higher price and so the manager is not forced to lay off workers. Economist Paul Krugman argues this explanation neglects to explain why the firm was not charging this higher price absent the minimum wage.
Three other possible reasons minimum wages do not affect employment were suggested by Alan Blinder: higher wages may reduce turnover, and hence training costs; raising the minimum wage may "render moot" the potential problem of recruiting workers at a higher wage than current workers; and minimum wage workers might represent such a small proportion of a business's cost that the increase is too small to matter. He admits that he does not know if these are correct, but argues that "the list demonstrates that one can accept the new empirical findings and still be a card-carrying economist."
Economists disagree as to the measurable impact of minimum wages in practice. This disagreement usually takes the form of competing empirical tests of the elasticities of supply and demand in labor markets and the degree to which markets differ from the efficiency that models of perfect competition predict.
Economists have done empirical studies on different aspects of the minimum wage, including:
- Employment effects, the most frequently studied aspect
- Effects on the distribution of wages and earnings among low-paid and higher-paid workers
- Effects on the distribution of incomes among low-income and higher-income families
- Effects on the skills of workers through job training and the deferring of work to acquire education
- Effects on prices and profits
- Effects on on-the-job training
Until the mid-1990s, a general consensus existed among economists, both conservative and liberal, that the minimum wage reduced employment, especially among younger and low-skill workers. In addition to the basic supply-demand intuition, there were a number of empirical studies that supported this view. For example, Gramlich (1976) found that many of the benefits went to higher income families, and that teenagers were made worse off by the unemployment associated with the minimum wage.
Brown et al. (1983) noted that time series studies to that point had found that for a 10 percent increase in the minimum wage, there was a decrease in teenage employment of 1–3 percent. However, the studies found wider variation, from 0 to over 3 percent, in their estimates for the effect on teenage unemployment (teenagers without a job and looking for one). In contrast to the simple supply and demand diagram, it was commonly found that teenagers withdrew from the labor force in response to the minimum wage, which produced the possibility of equal reductions in the supply as well as the demand for labor at a higher minimum wage and hence no impact on the unemployment rate. Using a variety of specifications of the employment and unemployment equations (using ordinary least squares vs. generalized least squares regression procedures, and linear vs. logarithmic specifications), they found that a 10 percent increase in the minimum wage caused a 1 percent decrease in teenage employment, and no change in the teenage unemployment rate. The study also found a small, but statistically significant, increase in unemployment for adults aged 20–24.
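Results like these are often summarized as an employment elasticity, the ratio of the percentage change in employment to the percentage change in the minimum wage. A quick illustrative calculation (not taken from any particular study):

# Elasticity implied by a 10% wage increase and a 1% employment decline
pct_change_wage = 0.10
pct_change_employment = -0.01
elasticity = pct_change_employment / pct_change_wage
print(elasticity)  # -0.1: each 1% rise in the minimum wage cuts teen employment by 0.1%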
Wellington (1991) updated Brown et al.'s research with data through 1986 to provide new estimates encompassing a period when the real (i.e., inflation-adjusted) value of the minimum wage was declining, because it had not increased since 1981. She found that a 10% increase in the minimum wage decreased the absolute teenage employment by 0.6%, with no effect on the teen or young adult unemployment rates.
Some research suggests that the unemployment effects of small minimum wage increases are dominated by other factors. In Florida, where voters approved an increase in 2004, a follow-up comprehensive study after the increase confirmed a strong economy with increased employment above previous years in Florida and better than in the US as a whole. When it comes to on-the-job training, some believe the increase in wages is taken out of training expenses. A 2001 empirical study found that there is "no evidence that minimum wages reduce training, and little evidence that they tend to increase training."
Some empirical studies have tried to ascertain the benefits of a minimum wage beyond employment effects. In an analysis of census data, Joseph Sabia and Robert Nielson found no statistically significant evidence that minimum wage increases helped reduce financial, housing, health, or food insecurity. This study was undertaken by the Employment Policies Institute, a think tank funded by the food, beverage and hospitality industries. In 2012, Michael Reich published an economic analysis that suggested that a proposed minimum wage hike in San Diego might stimulate the city's economy by about $190 million.
The Economist wrote in December 2013: "A minimum wage, providing it is not set too high, could thus boost pay with no ill effects on jobs....America's federal minimum wage, at 38% of median income, is one of the rich world's lowest. Some studies find no harm to employment from federal or state minimum wages, others see a small one, but none finds any serious damage. ... High minimum wages, however, particularly in rigid labour markets, do appear to hit employment. France has the rich world’s highest wage floor, at more than 60% of the median for adults and a far bigger fraction of the typical wage for the young. This helps explain why France also has shockingly high rates of youth unemployment: 26% for 15- to 24-year-olds."
Card and Krueger
In 1992, the minimum wage in New Jersey increased from $4.25 to $5.05 per hour (an 18.8% increase), while in the adjacent state of Pennsylvania it remained at $4.25. David Card and Alan Krueger gathered information on fast food restaurants in New Jersey and eastern Pennsylvania in an attempt to see what effect this increase had on employment within New Jersey. Basic economic theory would have implied that relative employment should have decreased in New Jersey. Card and Krueger surveyed employers before the April 1992 New Jersey increase, and again in November–December 1992, asking managers for data on the full-time equivalent staff level of their restaurants both times. Based on data from the employers' responses, the authors concluded that the increase in the minimum wage slightly increased employment in the New Jersey restaurants.
One possible explanation that the current minimum wage laws may not affect unemployment in the United States is that the minimum wage is set close to the equilibrium point for low and unskilled workers. Thus, according to this explanation, in the absence of the minimum wage law unskilled workers would be paid approximately the same amount and an increase above this equilibrium point could likely bring about increased unemployment for the low and unskilled workers.
Card and Krueger expanded on this initial article in their 1995 book Myth and Measurement: The New Economics of the Minimum Wage. They argued that the negative employment effects of minimum wage laws are minimal if not non-existent. For example, they look at the 1992 increase in New Jersey's minimum wage, the 1988 rise in California's minimum wage, and the 1990–91 increases in the federal minimum wage. In addition to their own findings, they reanalyzed earlier studies with updated data, generally finding that the older results of a negative employment effect did not hold up in the larger datasets.
Research subsequent to Card and Krueger's work
In subsequent research, David Neumark and William Wascher attempted to verify Card and Krueger's results by using administrative payroll records from a sample of large fast food restaurant chains in order to verify employment. They found that the minimum wage increases were followed by decreases in employment. On the other hand, an assessment of data collected and analyzed by Neumark and Wascher did not initially contradict the Card and Krueger results, but in a later edited version they found a four percent decrease in employment, and reported that "the estimated disemployment effects in the payroll data are often statistically significant at the 5- or 10-percent level although there are some estimators and subsamples that yield insignificant—although almost always negative" employment effects. However, this paper's conclusions were rebutted in a 2000 paper by Card and Krueger. A 2011 paper has reconciled the difference between Card and Krueger's survey data and Neumark and Wascher's payroll-based data. The paper shows that both datasets evidence conditional employment effects that are positive for small restaurants, but are negative for large fast-food restaurants.
In 1996 and 1997, the federal minimum wage was increased from $4.25 to $5.15, thereby increasing the minimum wage by $0.90 in Pennsylvania but by just $0.10 in New Jersey; this allowed for an examination of the effects of minimum wage increases in the same area, subsequent to the 1992 change studied by Card and Krueger. A study by Hoffman and Trace found the result anticipated by traditional theory: a detrimental effect on employment.
Further application of the methodology used by Card and Krueger by other researchers yielded results similar to their original findings, across additional data sets. A 2010 study by three economists (Arindrajit Dube of the University of Massachusetts Amherst, William Lester of the University of North Carolina at Chapel Hill, and Michael Reich of the University of California, Berkeley), compared adjacent counties in different states where the minimum wage had been raised in one of the states. They analyzed employment trends for several categories of low-wage workers from 1990 to 2006 and found that increases in minimum wages had no negative effects on low-wage employment and successfully increased the income of workers in food services and retail employment, as well as the narrower category of workers in restaurants.
However, a 2011 study by Baskaya and Rubinstein of Brown University found that at the federal level, "a rise in minimum wage have [sic] an instantaneous impact on wage rates and a corresponding negative impact on employment", stating, "Minimum wage increases boost teenage wage rates and reduce teenage employment." Another 2011 study by Sen, Rybczynski, and Van De Waal found that "a 10% increase in the minimum wage is significantly correlated with a 3−5% drop in teen employment." A 2012 study by Sabia, Hansen, and Burkhauser found that "minimum wage increases can have substantial adverse labor demand effects for low-skilled individuals", with the largest effects on those aged 16 to 24.
A 2013 study by Meer and West concluded that "the minimum wage reduces net job growth, primarily through its effect on job creation by expanding establishments ... most pronounced for younger workers and in industries with a higher proportion of low-wage workers." The study was later critiqued for the trend assumptions it made for narrowly defined low-wage groups. The authors replied to the critiques and released additional data addressing the criticism of their methodology, but did not resolve whether their data showed a causal relationship. Another 2013 study, by Suzana Laporšek of the University of Primorska, claimed there was "a negative, statistically significant impact of minimum wage on youth employment" in Europe. A 2013 study by labor economists Tony Fang and Carl Lin, which studied minimum wages and employment in China, found that "minimum wage changes have significant adverse effects on employment in the Eastern and Central regions of China, and result in disemployment for females, young adults, and low-skilled workers".
Several researchers have conducted statistical meta-analyses of the employment effects of the minimum wage. In 1995, Card and Krueger analyzed 14 earlier time-series studies on minimum wages and concluded that there was clear evidence of publication bias (in favor of studies that found a statistically significant negative employment effect). They point out that later studies, which had more data and lower standard errors, did not show the expected increase in t-statistic (almost all the studies had a t-statistic of about two, just above the level of statistical significance at the .05 level). Though this was a serious methodological indictment, opponents of the minimum wage largely ignored the issue; as Thomas Leonard noted, "The silence is fairly deafening."
In 2005, T.D. Stanley showed that Card and Krueger's results could signify either publication bias or the absence of a minimum wage effect. However, using a different methodology, Stanley concluded that there is evidence of publication bias and that correction of this bias shows no relationship between the minimum wage and unemployment. In 2008, Hristos Doucouliagos and T.D. Stanley conducted a similar meta-analysis of 64 U.S. studies on disemployment effects and concluded that Card and Krueger's initial claim of publication bias is still correct. Moreover, they concluded, "Once this publication selection is corrected, little or no evidence of a negative association between minimum wages and employment remains."
Consistent with the results from Doucouliagos and Stanley, and Card and Krueger, Baskaya and Rubinstein's 2011 study, which analyzed 24 papers on the minimum wage, found "mild positive, yet statistically insignificant association between the change in the employment of teenagers" at state minimum wage levels. However, when minimum wage is set at the federal level, they found "notable wage impacts and large corresponding disemployment effects".
Debate over consequences
Minimum wage laws affect workers in most low-paid fields of employment and have usually been judged against the criterion of reducing poverty. Minimum wage laws receive less support from economists than from the general public. Despite decades of experience and economic research, debates about the costs and benefits of minimum wages continue today.
Various groups have great ideological, political, financial, and emotional investments in issues surrounding minimum wage laws. For example, agencies that administer the laws have a vested interest in showing that "their" laws do not create unemployment, as do labor unions whose members' finances are protected by minimum wage laws. On the other side of the issue, low-wage employers such as restaurants finance the Employment Policies Institute, which has released numerous studies opposing the minimum wage. The presence of these powerful groups and factors means that the debate on the issue is not always based on dispassionate analysis. Additionally, it is extraordinarily difficult to separate the effects of minimum wage from all the other variables that affect employment.
A two-column table appeared here in the original, pairing the effects that supporters of minimum wage laws claim against the effects that opponents claim; its detailed entries are not preserved in this copy.
A widely circulated argument that the minimum wage was ineffective at reducing poverty was provided by George Stigler in 1949:
- Employment may fall more than in proportion to the wage increase, thereby reducing overall earnings;
- As uncovered sectors of the economy absorb workers released from the covered sectors, the decrease in wages in the uncovered sectors may exceed the increase in wages in the covered ones;
- The impact of the minimum wage on family income distribution may be negative unless the fewer but better jobs are allocated to members of needy families rather than to, for example, teenagers from families not in poverty;
- Forbidding employers to pay less than a legal minimum is equivalent to forbidding workers to sell their labor for less than the minimum wage. The legal restriction that employers cannot pay less than a legislated wage is equivalent to the legal restriction that workers cannot work at all in the protected sector unless they can find employers willing to hire them at that wage.
In 2006, the International Labour Organization (ILO) argued that the minimum wage could not be directly linked to unemployment in countries that have suffered job losses. In April 2010, the Organisation for Economic Co-operation and Development (OECD) released a report arguing that countries could alleviate teen unemployment by "lowering the cost of employing low-skilled youth" through a sub-minimum training wage. A study of U.S. states showed that businesses' annual and average payrolls grew faster, and employment grew at a faster rate, in states with a minimum wage. The study showed a correlation, but did not claim to prove causation.
Although the UK's National Minimum Wage was strongly opposed by both the business community and the Conservative Party when introduced in 1999, the Conservatives reversed their opposition in 2000. Accounts differ as to its effects. The Centre for Economic Performance found no discernible impact on employment levels from the wage increases, while the Low Pay Commission found that employers had reduced their rate of hiring and the hours worked by employees, and had found ways to make current workers more productive (especially service companies). The Institute for the Study of Labor found that prices in the minimum wage sector rose significantly faster than prices in non-minimum-wage sectors in the four years following implementation. Neither trade unions nor employer organizations now contest the minimum wage, although the latter had opposed it heavily until 1999.
In 2014, supporters of the minimum wage cited a study finding that job creation within the United States was faster in states that raised their minimum wages, as well as news reports that the state with the highest minimum wage garnered more job creation than the rest of the country.
In 2014, in Seattle, Washington, liberal and progressive business owners who had supported the city's new $15 minimum wage said they might hold off on expanding their businesses and thus creating new jobs, due to the uncertain timescale of the wage increase implementation. However, subsequently at least two of the business owners quoted did expand.
The dollar value of the minimum wage loses purchasing power over time due to inflation. Proposals to index the minimum wage, for instance to average wages, have the potential to keep its real value relevant and predictable.
With regard to the economic effects of introducing minimum wage legislation in Germany in January 2015, recent developments have shown that the feared increase in unemployment did not materialize; however, in some economic sectors and regions of the country job opportunities declined, particularly for temporary and part-time workers, and some low-wage jobs disappeared entirely. Because of this overall positive development, the Deutsche Bundesbank revised its opinion, concluding that "the impact of the introduction of the minimum wage on the total volume of work appears to be very limited in the present business cycle".
Surveys of economists
According to a 1978 article in the American Economic Review, 90% of the economists surveyed agreed that the minimum wage increases unemployment among low-skilled workers. By 1992 the survey found 79% of economists in agreement with that statement, and by 2000, 45.6% were in full agreement with the statement and 27.9% agreed with provisos (73.5% total). The authors of the 2000 study also reweighted data from a 1990 sample to show that at that time 62.4% of academic economists agreed with the statement above, while 19.5% agreed with provisos and 17.5% disagreed. They state that the reduction in consensus on this question is "likely" due to the Card and Krueger research and subsequent debate.
A similar survey in 2006 by Robert Whaples polled PhD members of the American Economic Association (AEA). Whaples found that 46.8% of respondents wanted the minimum wage eliminated, 37.7% supported an increase, 14.3% wanted it kept at the current level, and 1.3% wanted it decreased. Another survey, conducted in 2007 by the University of New Hampshire Survey Center, found that 73% of labor economists surveyed in the United States believed that a minimum wage set at 150% of the then-current level would result in employment losses, and 68% believed a mandated minimum wage would cause an increase in hiring of workers with greater skills; 31% felt that no hiring changes would result.
Surveys of labor economists have found a sharp split on the minimum wage. Fuchs et al. (1998) polled labor economists at the top 40 research universities in the United States on a variety of questions in the summer of 1996. Their 65 respondents were nearly evenly divided when asked if the minimum wage should be increased. They argued that the different policy views were not related to views on whether raising the minimum wage would reduce teen employment (the median economist said there would be a reduction of 1%), but on value differences such as income redistribution. Daniel B. Klein and Stewart Dompe conclude, on the basis of previous surveys, "the average level of support for the minimum wage is somewhat higher among labor economists than among AEA members."
In 2007, Klein and Dompe conducted a non-anonymous survey of supporters of the minimum wage who had signed the "Raise the Minimum Wage" statement published by the Economic Policy Institute. 95 of the 605 signatories responded. They found that a majority signed on the grounds that it transferred income from employers to workers, or equalized bargaining power between them in the labor market. In addition, a majority considered disemployment to be a moderate potential drawback to the increase they supported.
In 2013, a diverse group of 37 economics professors was surveyed on their view of the minimum wage's impact on employment. 34% of respondents agreed with the statement, "Raising the federal minimum wage to $9 per hour would make it noticeably harder for low-skilled workers to find employment." 32% disagreed and the remaining respondents were uncertain or had no opinion on the question. 47% agreed with the statement, "The distortionary costs of raising the federal minimum wage to $9 per hour and indexing it to inflation are sufficiently small compared with the benefits to low-skilled workers who can find employment that this would be a desirable policy", while 11% disagreed.
Economists and other political commentators have proposed alternatives to the minimum wage. They argue that these alternatives may address poverty better than a minimum wage, as they would benefit a broader population of low-wage earners, would not cause unemployment, and would distribute the costs widely rather than concentrating them on employers of low-wage workers.
A basic income (or negative income tax) is a system of social security that periodically provides each citizen with a sum of money that is sufficient to live on frugally. It is argued that recipients of the basic income would have considerably more bargaining power when negotiating a wage with an employer, as there would be no risk of destitution from not taking the employment. As a result, a jobseeker could spend more time looking for a more appropriate or satisfying job, or wait until a higher-paying job appeared. Alternately, they could spend more time increasing their skills at university, which would make them more suitable for higher-paying jobs, as well as provide numerous other benefits. Experiments on basic income and NIT in Canada and the United States showed that people spent more time studying while the program was running.
Proponents argue that a basic income that is based on a broad tax base would be more economically efficient, as the minimum wage effectively imposes a high marginal tax on employers, causing losses in efficiency.
Guaranteed minimum income
A guaranteed minimum income is another proposed system of social welfare provision. It is similar to a basic income or negative income tax system, except that it is normally conditional and subject to a means test. Some proposals also stipulate a willingness to participate in the labor market, or a willingness to perform community services.
Refundable tax credit
A refundable tax credit is a mechanism whereby the tax system can reduce the tax owed by a household to below zero, and result in a net payment to the taxpayer beyond their own payments into the tax system. Examples of refundable tax credits include the earned income tax credit and the additional child tax credit in the US, and working tax credits and child tax credits in the UK. Such a system is slightly different from a negative income tax, in that the refundable tax credit is usually only paid to households that have earned at least some income. This policy is more targeted against poverty than the minimum wage, because it avoids subsidizing low-income workers who are supported by high-income households (for example, teenagers still living with their parents).
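To make the mechanism concrete, consider a hypothetical household (the figures here are illustrative, not drawn from any actual credit schedule): suppose the household owes $400 in income tax for the year and qualifies for a $1,500 refundable credit. A non-refundable credit could only reduce its bill to zero, but a refundable credit pays out the remainder:

net payment to the household = $1,500 − $400 = $1,100

Under a negative income tax, by contrast, such a payment could flow even to a household with no earned income at all.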
In the United States, earned income tax credit rates, also known as EITC or EIC, vary by state; some are refundable while other states do not allow a refundable tax credit. The federal EITC program has been expanded by a number of presidents, including Jimmy Carter, Ronald Reagan, George H.W. Bush, and Bill Clinton. In 1986, President Reagan described the EITC as "the best anti-poverty, the best pro-family, the best job creation measure to come out of Congress." The ability of the earned income tax credit to deliver larger monetary benefits to poor workers than an increase in the minimum wage, and at a lower cost to society, was documented in a 2007 report by the Congressional Budget Office.
Italy, Sweden, Norway, Finland, and Denmark are examples of developed nations where there is no minimum wage that is required by legislation. Such nations, particularly the Nordics, have very high union participation rates. Instead, minimum wage standards in different sectors are set by collective bargaining.
In January 2014, seven Nobel economists—Kenneth Arrow, Peter Diamond, Eric Maskin, Thomas Schelling, Robert Solow, Michael Spence, and Joseph Stiglitz—and 600 other economists wrote a letter to the US Congress and the US President urging that, by 2016, the US government should raise the minimum wage to $10.10. They endorsed the Minimum Wage Fairness Act which was introduced by US Senator Tom Harkin in 2013. U.S. Senator Bernie Sanders introduced a bill in 2015 that would raise the minimum wage to $15, and in his 2016 campaign for president ran on a platform of increasing it. Although Sanders did not become the nominee, the Democratic National Committee adopted his $15 minimum wage push in their 2016 party platform.
Former McDonald's USA CEO Ed Rensi, speaking on the FOX Business Network's Mornings with Maria, reacted to the push for a $15 minimum wage by saying that at that wage employers would look to push humans out of the labor picture entirely, since replacing workers with machines would be more cost-effective than keeping ineffective employees. He also believes that an increase to $15 an hour would cause job loss at an extraordinary level. Nor would the effects be confined to the fast food industry, he argued: franchising, which he sees as the best business model in the United States, depends on people with low job skills who need room to grow, and if employers cannot pay them a reasonable wage, those workers will be replaced with machines.
In contrast, the relatively high minimum wage in Puerto Rico has been blamed by various politicians and commentators as a highly significant factor in the Puerto Rican government-debt crisis. One study concluded that 'Employers are disinclined to hire workers because the US federal minimum wage is very high relative to the local average'.
As of December 2014, unions were exempt from recent minimum wage increases in Chicago, Illinois, SeaTac, Washington, and Milwaukee County, Wisconsin, as well as the California cities of Los Angeles, San Francisco, Long Beach, San Jose, Richmond, and Oakland.
- Average worker's wage
- Basic income
- Economic inequality
- Employee benefits
- Family wage
- Garcia v. San Antonio Metropolitan Transit Authority
- Labor law
- List of minimum wages by country
- List of sovereign states in Europe by minimum wage
- Living wage
- Maximum wage
- Minimum Wage Fixing Convention, 1970
- Minimum wage in Canada
- Minimum wage in China
- Minimum wage in Taiwan
- Minimum wage in the United States
- Minimum wage law
- Moonlight clan
- Nickel and Dimed
- Positive rights
- Price controls
- Scratch Beginnings
- Wage slavery
- Working poor
- "Minimum Wages. by David Neumark and William L. Wascher".
- "The Young and the Jobless". The Wall Street Journal. 2009-10-03. Retrieved 2014-01-11.
- Black, John (September 18, 2003). Oxford Dictionary of Economics. Oxford University Press. p. 300. ISBN 978-0-19-860767-0.
- Mihm, Stephen (5 September 2013). "How the Black Death Spawned the Minimum Wage". Bloomberg View. Retrieved 17 April 2014.
- Thorpe, Vanessa (29 March 2014). "Black death was not spread by rat fleas, say researchers". theguardian.com. Retrieved 29 March 2014.
- Starr, Gerald (1993). Minimum wage fixing : an international review of practices and problems (2nd impression (with corrections) ed.). Geneva: International Labour Office. p. 1. ISBN 9789221025115.
- Nordlund, Willis J. (1997). The quest for a living wage : the history of the federal minimum wage program. Westport, Conn.: Greenwood Press. p. xv. ISBN 9780313264122.
- Neumark, David; William L. Wascher (2008). Minimum Wages. Cambridge, Massachusetts: The MIT Press. ISBN 978-0-262-14102-4.
- "OECD Statistics (GDP, unemployment, income, population, labour, education, trade, finance, prices...)". Stats.oecd.org. Retrieved 2013-03-29.
- Grossman, Jonathan. "Fair Labor Standards Act of 1938: Maximum Struggle for a Minimum Wage". Department of Labor. Retrieved 17 April 2014.
- Stone, Jon (1 October 2010). "History of the UK's minimum wage". Total Politics. Retrieved 17 April 2014.
- Williams, Walter E. (June 2009). "The Best Anti-Poverty Program We Have?". Regulation. 32 (2): 62.
- "ILO 2006: Minimum wages policy (PDF)" (PDF). Ilo.org. Retrieved March 1, 2012.
- "Minimum wage statistics - Statistics Explained". ec.europa.eu. Retrieved 2016-02-12.
- Ehrenberg, Ronald G. Labor Markets and Integrating National Economies, Brookings Institution Press (1994), p. 41
- Alderman, Liz; Greenhouse, Steven (October 27, 2014). "Fast Food in Denmark Serves Something Atypical: Living Wages". New York Times. Retrieved October 27, 2014.
- "Minimum Wage". Washington State Dept. of Labor & Industries. Retrieved 2015-01-18.
- "British Government website". Retrieved 2013-09-05.
- "Wage and Hour Division" United States Department of Labor. January 2016. Website. 13 July 2016.<https://www.dol/gov/whd/minwage/america.htm>
- "Most Asked Questions about Minimum Wages in India". PayCheck.in. 2013-02-22. Retrieved 2013-03-29.
- Sowell, Thomas (2004). "Minimum Wage Laws". Basic Economics: A Citizen's Guide to the Economy. New York: Basic Books. pp. 163–9. ISBN 978-0-465-08145-5.
- Provisional Minimum Wage Commission: Preliminary Views on a Basket of Indicators, Other Relevant Considerations and Impact Assessment, Provisional Minimum Wage Commission, Hong Kong Special Administrative Region Government.
- Setting the Initial Statutory Minimum Wage Rate, submission to government by the Hong Kong General Chamber of Commerce.
- Li, Joseph, "Minimum wage legislation for all sectors," China Daily October 16, 2008 , "Hong Kong sets Minimum Wage – What one Singaporean thinks," Speaker's Corner, SG Forums, November 5, 2010
- Editorial Board (February 9, 2014). "The Case for a Higher Minimum Wage". New York Times. Retrieved February 9, 2014.
- Chumley, Cheryl K. (March 18, 2013). "Take it to the bank: Sen. Elizabeth Warren wants to raise minimum wage to $22 per hour". Washington Times. Retrieved January 22, 2014.
- Wing, Nick (March 18, 2013). "Elizabeth Warren: Minimum Wage Would Be $22 An Hour If It Had Kept Up With Productivity". Huffington Post. Retrieved January 22, 2014.
- Hart-Landsberg, Ph.D., Martin (December 19, 2013). "$22.62/HR: The Minimum Wage If It Had Risen Like The Incomes Of The 1%". thesocietypages.org. Retrieved January 22, 2014.
- Rmusemore (December 3, 2013). "Stop Complaining Republicans, the Minimum Wage Should be $22.62 an Hour". policususa.com. Retrieved January 22, 2014.
- McConnell, C. R.; Brue, S. L. (1999). Economics (14th ed.). Irwin-McGraw Hill. p. 594.
- Gwartney, J. D.; Stroup, R. L.; Sobel, R. S.; Macpherson, D. A. (2003). Economics: Private and Public Choice (10th ed.). Thomson South-Western. p. 97.
- Mankiw, N. Gregory (2011). Principles of Macroeconomics (6th ed.). South-Western Pub. p. 311.
- Card, David; Krueger, Alan B. (1995). Myth and Measurement: The New Economics of the Minimum Wage. Princeton University Press. pp. 1; 6–7.
- Formby, J. P.; Bishop, J. A.; Kim, H. (2010). "The Redistributive Effects and Cost-Effectiveness of Increasing the Federal Minimum Wage". Public Finance Review. 38 (5): 585–618. doi:10.1177/1091142110373481.
- Belman, Dale L.; Wolfson, Paul (2010). "The Effect of Legislated Minimum Wage Increases on Employment and Hours: A Dynamic Analysis". Labour. 24 (1): 1–25. doi:10.1111/j.1467-9914.2010.00468.x.
- Gwartney, James David; Stroup, Richard L.; Studenmund, A. H. (1987). Economics: Private and Public Choice. New York: Harcourt Brace Jovanovich. pp. 559–62. ISBN 978-0-15-518880-8.
- e.g. DE Card and AB Krueger, Myth and Measurement: The New Economics of the Minimum Wage (1995) and S Machin and A Manning, ‘Minimum wages and economic outcomes in Europe’ (1997) 41 European Economic Review 733
- Rittenberg, Timothy Tregarthen, Libby (1999). Economics (2nd ed.). New York: Worth Publishers. p. 290. ISBN 9781572594180. Retrieved 21 June 2014.
- Ehrenberg, R. and Smith, R. "Modern labor economics: theory and public policy", HarperCollins, 1994, 5th ed.[page needed]
- By Jim Stanford, Debate: Boost the wage, help the worker, National Post, February 22, 2011
- Boal, William M.; Ransom, Michael R (March 1997). "Monopsony in the Labor Market". Journal of Economic Literature. 35 (1): 86–112. JSTOR 2729694.
- OECD. "Minimum relative to average wages of full-time workers".
- Garegnani, P. (July 1970). "Heterogeneous Capital, the Production Function and the Theory of Distribution". The Review of Economic Studies. 37 (3): 407–36. doi:10.2307/2296729. JSTOR 2296729.
- Vienneau, Robert L. (2005). "On Labour Demand and Equilibria of the Firm". The Manchester School. 73 (5): 612–9. doi:10.1111/j.1467-9957.2005.00467.x.
- Opocher, A.; Steedman, I. (2009). "Input price-input quantity relations and the numeraire". Cambridge Journal of Economics. 33 (5): 937–48. doi:10.1093/cje/bep005.
- Anyadike-Danes, Michael; Godley, Wynne (1989). "Real Wages and Employment: A Sceptical View of Some Recent Empirical Work". The Manchester School. 57 (2): 172–87. doi:10.1111/j.1467-9957.1989.tb00809.x.
- White, Graham (November 2001). "The Poverty of Conventional Economic Wisdom and the Search for Alternative Economic and Social Policies". The Drawing Board: an Australian Review of Public Affairs. 2 (2): 67–87.
- Fields, Gary S. (1994). "The Unemployment Effects of Minimum Wages". International Journal of Manpower. 15 (2): 74–81. doi:10.1108/01437729410059323.
- Manning, Alan (2003). Monopsony in motion: Imperfect Competition in Labor Markets. Princeton, NJ: Princeton University Press. ISBN 0-691-11312-2.[page needed]
- Gillespie, Andrew (2007). Foundations of Economics. Oxford University Press. p. 240.
- Krugman, Paul (2013). Economics. Worth Publishers. p. 385.
- Blinder, Alan S. (May 23, 1996). "The $5.15 Question". The New York Times. p. A29.
- Schmitt, John (February 2013). "Why Does the Minimum Wage Have No Discernible Effect on Employment?" (PDF). Center for Economic and Policy Research. Retrieved December 5, 2013. Lay summary – The Washington Post (February 14, 2013).
- Gramlich, Edward M.; Flanagan, Robert J.; Wachter, Michael L. (1976). "Impact of Minimum Wages on Other Wages, Employment, and Family Incomes". Brookings Papers on Economic Activity. 1976 (2): 409–61. doi:10.2307/2534380.
- Brown, Charles; Gilroy, Curtis; Kohen, Andrew (Winter 1983). "Time-Series Evidence of the Effect of the Minimum Wage on Youth Employment and Unemployment". The Journal of Human Resources. 18 (1): 3–31. doi:10.2307/145654. JSTOR 145654.
- Wellington, Alison J. (Winter 1991). "Effects of the Minimum Wage on the Employment Status of Youths: An Update". The Journal of Human Resources. 26 (1): 27–46. doi:10.2307/145715. JSTOR 145715.
- Fox, Liana (October 24, 2006). "Minimum wage trends: Understanding past and contemporary research". Economic Policy Institute. Retrieved December 6, 2013.
- "The Florida Minimum Wage: Good for Workers, Good for the Economy" (PDF). Retrieved 3 November 2013.
- Acemoglu, Daron; Pischke, Jörn-Steffen (November 2001). "Minimum Wages and On-the-Job Training" (PDF). Institute for the Study of Labor. SSRN . Retrieved December 6, 2013. Also published as Acemoglu, Daron; Pischke, Jörn-Steffen (2003). "Minimum Wages and On-the-job Training". In Polachek, Solomon W. Worker Well-Being and Public Policy. Research in Labor Economics. 22. pp. 159–202. doi:10.1016/S0147-9121(03)22005-7. ISBN 978-0-76231-026-5.
- Sabia, Joseph J.; Nielsen, Robert B. (April 2012). Can Raising the Minimum Wage Reduce Poverty and Hardship? (PDF). Employment Policies Institute.[page needed]
- Michael Reich. "Increasing the Minimum Wage in San Jose: Benefits and Costs- White Paper" (PDF). Retrieved 2013-03-29.
- "The logical floor". 14 December 2013 – via The Economist.
- Card, David; Krueger, Alan B. (September 1994). "Minimum Wages and Employment: A Case Study of the Fast-Food Industry in New Jersey and Pennsylvania". The American Economic Review. 84 (4): 772–93. JSTOR 2118030.
- ISBN 0-691-04823-1[full citation needed][page needed]
- Card; Krueger (2000). "Minimum Wages and Employment: A Case Study of the Fast-Food Industry in New Jersey and Pennsylvania: Reply". American Economic Review. 90 (5): 1397–1420. doi:10.1257/aer.90.5.1397.
- Dube, Arindrajit; Lester, T. William; Reich, Michael (November 2010). "Minimum Wage Effects Across State Borders: Estimates Using Contiguous Counties". Review of Economics and Statistics. 92 (4): 945–964. doi:10.1162/REST_a_00039. Retrieved 10 March 2014.
- Schmitt, John (January 1, 1996). "The Minimum Wage and Job Loss". Economic Policy Institute. Retrieved December 7, 2013.
- Neumark, David; Wascher, William (December 2000). "Minimum Wages and Employment: A Case Study of the Fast-Food Industry in New Jersey and Pennsylvania: Comment". The American Economic Review. 90 (5): 1362–96. doi:10.1257/aer.90.5.1362. JSTOR 2677855.
- http://www.davidson.edu/academic/economics/foley/eco324_s06/Neumark_Wascher%20AER%20(2000).pdf[full citation needed][dead link]
- Card and Krueger (2000) "Minimum Wages and Employment: A Case Study of the Fast-Food Industry in New Jersey and Pennsylvania: Reply" American Economic Review, Volume 90 No. 5. pg 1397-1420
- Ropponen, Olli (2011). "Reconciling the evidence of Card and Krueger (1994) and Neumark and Wascher (2000)". Journal of Applied Econometrics. 26 (6): 1051–7. doi:10.1002/jae.1258.
- Hoffman, Saul D; Trace, Diane M (2009). "NJ and PA Once Again: What Happened to Employment when the PA–NJ Minimum Wage Differential Disappeared?". Eastern Economic Journal. 35 (1): 115–28. doi:10.1057/eej.2008.1.
- Dube, Arindrajit; Lester, T. William; Reich, Michael (November 2010). "Minimum Wage Effects Across State Borders: Estimates Using Contiguous Counties" (PDF). The Review of Economics and Statistics. 92 (4): 945–64. doi:10.1162/REST_a_00039.
- FOLBRE, NANCY (November 1, 2010). "Along the Minimum-Wage Battle Front". New York Times. Retrieved 4 December 2013.
- "Using Federal Minimum Wages to Identify the Impact of Minimum Wages on Employment and Earnings Across the U.S. States" (PDF). 1 Oct 2011.
- "Teen employment, poverty, and the minimum wage: Evidence from Canada". 1 Jan 2011.
- "Are the Effects of Minimum Wage Increases Always Small? New Evidence from a Case Study of New York State". 2 Apr 2012.
- Meer, Jonathan; West, Jeremy (2013). "Effects of the Minimum Wage on Employment Dynamics". NBER Working Paper No. 19262.
- Schmitt, John. "More on Meer and West's Minimum Wage Study".
- "Minimum wage effects on youth employment in the European Union". 14 Sep 2013.
- "Minimum Wages and Employment in China". 14 Dec 2013.
- Fang, Tony; Lin, Carl (2015-11-27). "Minimum wages and employment in China". IZA Journal of Labor Policy. 4 (1): 22. doi:10.1186/s40173-015-0050-9. ISSN 2193-9004.
- Card, David; Krueger, Alan B. (May 1995). "Time-Series Minimum-Wage Studies: A Meta-analysis". The American Economic Review. 85 (2): 238–43. JSTOR 2117925.
- Leonard, T. C. (2000). "The Very Idea of Applying Economics: The Modern Minimum-Wage Controversy and Its Antecedents". History of Political Economy. 32: 117. doi:10.1215/00182702-32-Suppl_1-117.
- Stanley, T. D. (2005). "Beyond Publication Bias". Journal of Economic Surveys. 19 (3): 309. doi:10.1111/j.0950-0804.2005.00250.x.
- Doucouliagos, Hristos; Stanley, T. D. (2009). "Publication Selection Bias in Minimum-Wage Research? A Meta-Regression Analysis". British Journal of Industrial Relations. 47 (2): 406–28. doi:10.1111/j.1467-8543.2009.00723.x.
- Eatwell, John, Ed.; Murray Milgate; Peter Newman (1987). The New Palgrave: A Dictionary of Economics. London: The Macmillan Press Limited. pp. 476–478. ISBN 0-333-37235-2.
- Bernstein, Harry (September 15, 1992). "Troubling Facts on Employment". Los Angeles Times. p. D3. Retrieved December 6, 2013.
- Engquist, Erik (May 2006). "Health bill fight nears showdown". Crain's New York Business. 22 (20): 1.
- "St. Louis Post Dispatch: Holly Sklar, Small Businesses Want Minimum Wage Increase - Business For a Fair Minimum Wage".
- "Raising the minimum wage could improve public health".
- Sutch, Richard (1 September 2010). "The Unexpected Long-Run Impact of the Minimum Wage: An Educational Cascade" – via National Bureau of Economic Research.
- Wolcott, Ben. "2014 Job Creation Faster in States that Raised the Minimum Wage".
- Stilwell, Victoria (March 8, 2014). "Highest Minimum-Wage State Washington Beats U.S. in Job Creation". Bloomberg.
- "Real Value of the Minimum Wage". Epi.org. Retrieved 2013-03-29.
- Freeman, Richard B. (1994). "Minimum Wages – Again!". International Journal of Manpower. 15 (2): 8–25. doi:10.1108/01437729410059305.
- Bernard Semmel, Imperialism and Social Reform: English Social-Imperial Thought 1895–1914 (London: Allen and Unwin, 1960), p. 63.
- "ITIF Report Shows Self-service Technology a New Force in Economic Life". The Information Technology & Innovation Foundation. April 14, 2010. Retrieved October 5, 2011.
- Alesina, Alberto F.; Zeira, Joseph (2006). "Technology and Labor Regulations". doi:10.2139/ssrn.936346.
- "Minimum Wages in canada : theory, evidence and policy". Hrsdc.gc.ca. March 7, 2008. Retrieved October 5, 2011.
- Kallem, Andrew (2004). "Youth Crime and the Minimum Wage". doi:10.2139/ssrn.545382.
- "Crime and work: What we can learn from the low-wage labor market | Economic Policy Institute". Epi.org. July 1, 2000. Retrieved October 5, 2011.
- Kosteas, Vasilios D. "Minimum Wage." Encyclopedia of World Poverty. Ed. M. Odekon. Thousand Oaks, CA: Sage Publications, Inc., 2006. 719-21. SAGE knowledge. Web.
- Abbott, Lewis F. Statutory Minimum Wage Controls: A Critical Review of their Effects on Labour Markets, Employment, and Incomes. ISR Publications, Manchester UK, 2nd. edn. 2000. ISBN 978-0-906321-22-5. [page needed]
- Llewellyn H. Rockwell Jr. (October 28, 2005). "Wal-Mart Warms to the State - Mises Institute". Mises.org. Retrieved October 5, 2011.
- Tupy, Marian L. Minimum Interference, National Review Online, May 14, 2004
- "The Wages of Politics". Wall Street Journal. November 11, 2006. Retrieved December 6, 2013.
- Messmore, Ryan. "Increasing the Mandated Minimum Wage: Who Pays the Price?". Heritage.org. Retrieved October 5, 2011.
- Art Carden. "Why Wal-Mart Matters - Mises Institute". Mises.org. Retrieved October 5, 2011.
- "Will have only negative effects on the distribution of economic justice. Minimum-wage legislation, by its very nature, benefits some at the expense of the least experienced, least productive, and poorest workers." (Cato)
- Belvedere, Matthew (20 May 2016). "Worker pay vs automation tipping point may be coming, says this fast-food CEO". CNBC. Retrieved 22 December 2016.
- The Minimum Wage Law and Youth Crimes: Time-Series Evidence, Hashimoto - Chicago 1987
- Williams, Walter (1989). South Africa's War Against Capitalism. New York: Praeger. ISBN 0-275-93179-X.
- A blunt instrument, The Economist, October 26, 2006 (English)
- Partridge, M. D.; Partridge, J. S. (1999). "Do minimum wage hikes reduce employment? State-level evidence from the low-wage retail sector". Journal of Labor Research. 20 (3): 393. doi:10.1007/s12122-999-1007-9.
- "The Effects of a Minimum-Wage Increase on Employment and Family Income". February 18, 2014. Retrieved July 26, 2014.
- Covert, Bryce (21 February 2014). "A $10.10 Minimum Wage Would Make A DVD At Walmart Cost One Cent More".
- Hoium, Travis (19 October 2016). "What Will a Minimum Wage Increase Cost You at McDonald's? -- The Motley Fool".
- Scarpetta, Stefano, Anne Sonnet and Thomas Manfredi, Rising Youth Unemployment During The Crisis: How To Prevent Negative Long-Term Consequences on a Generation?, April 14, 2010 (read-only PDF)
- Fiscal Policy Institute, "States with Minimum Wages Above the Federal Level have had Faster Small Business and Retail Job Growth," March 30, 2006.
- "National Minimum Wage". politics.co.uk. Archived from the original on December 1, 2007. Retrieved December 29, 2007.
- Metcalf, David (April 2007). "Why Has the British National Minimum Wage Had Little or No Impact on Employment?".
- Low Pay Commission (2005). National Minimum Wage - Low Pay Commission Report 2005
- Wadsworth, Jonathan (September 2009). "Did the National Minimum Wage Affect UK Prices" (PDF).
- "States That Raised Minimum Wage See Faster Job Growth, Report Says".
- Rugaber, Christopher S. (July 19, 2014). "States with higher minimum wage gain more jobs". USA Today.
- Lobosco, Katie (May 14, 2014). "Washington state defies minimum wage logic". CNN.
- "Did Washington State's Minimum Wage Bet Pay Off?". 5 March 2014.
- Meyerson, Harold (May 21, 2014). "Harold Meyerson: A higher minimum wage may actually boost job creation". The Washington Post.
- Covert, Bryce (3 July 2014). "States That Raised Their Minimum Wages Are Experiencing Faster Job Growth".
- Nellis, Mike. "Minimum Wage Question and Answer".
- Minimum Wage Limbo Keeps Small Business Owners Up At Night, kuow.org, May 22, 2014
- Seattle Magazine, March 23, 2015
- $15 minimum wage a surprising success for Seattle restaurant, KOMO News, July 31, 2015
- C. Eisenring (Dec 2015). Gefährliche Mindestlohn-Euphorie (in German). Neue Zürcher Zeitung. Retrieved 30 December 2015.
- R. Janssen (Sept 2015). The German Minimum Wage Is Not A Job Killer. Social Europe. Retrieved 30 December 2015.
- Kearl, J. R.; Pope, Clayne L.; Whiting, Gordon C.; Wimmer, Larry T. (May 1979). "A Confusion of Economists?". The American Economic Review. 69 (2): 28–37. JSTOR 1801612.
- Alston, Richard M.; Kearl, J. R.; Vaughan, Michael B. (May 1992). "Is There a Consensus Among Economists in 1990s?". The American Economic Review. 82 (2): 203–9. JSTOR 2117401.
- survey by Dan Fuller and Doris Geide-Stevenson using a sample of 308 economists surveyed by the American Economic Association
- Hall, Robert Ernest. Economics: Principles and Applications. Cengage Learning. ISBN 1111798206.
- Fuller, Dan; Geide-Stevenson, Doris (2003). "Consensus Among Economists: Revisited". Journal of Economic Education. 34 (4): 369–87. doi:10.1080/00220480309595230.
- Whaples, Robert (2006). "Do Economists Agree on Anything? Yes!". The Economists' Voice. 3 (9): 1–6. doi:10.2202/1553-3832.1156.
- http://epionline.org/studies/epi_minimumwage_07-2007.pdf[full citation needed]
- Fuchs, Victor R.; Krueger, Alan B.; Poterba, James M. (September 1998). "Economists' Views about Parameters, Values, and Policies: Survey Results in Labor and Public Economics". Journal of Economic Literature. 36 (3): 1387–425. JSTOR 2564804.
- Klein, Daniel; Dompe, Stewart (January 2007). "Reasons for Supporting the Minimum Wage: Asking Signatories of the 'Raise the Minimum Wage' Statement". Economics in Practice. 4 (1): 125–67.
- "Minimum Wage". IGM Forum. February 26, 2013. Retrieved December 6, 2013.
- "EconoMonitor".
- "Suggestion: Raise welfare children in institutions". Star-News. Jan 28, 1972. Retrieved November 19, 2013.
- David Scharfenberg (April 28, 2014). "What The Research Says In The Minimum Wage Debate". WBUR.
- "50 State Resources Map on State EITCs". The Hatcher Group. Retrieved June 16, 2010.
- "New Research Findings on the Effects of the Earned Income Tax Credit". Center on Budget and Policy Priorities. Retrieved June 30, 2010.
- Furman, Jason (April 10, 2006). "Tax Reform and Poverty". Center on Budget and Policy Priorities. Retrieved December 7, 2013.
- "Response to a Request by Senator Grassley About the Effects of Increasing the Federal Minimum Wage Versus Expanding the Earned Income Tax Credit" (PDF). Congressional Budget Office. January 9, 2007. Retrieved July 25, 2008.
- Olson, Parmy (9/01/2009). The Best Minimum Wages In Europe. Forbes. Retrieved 21 February 2014.
- "Labor Criticizes". Lewiston Morning Tribune. Associated Press. March 2, 1933. pp. 1, 6.
- 75 economists back minimum wage hike CNN Money, January 14, 2014
- Over 600 Economists Sign Letter In Support of $10.10 Minimum Wage Economist Statement on the Federal Minimum Wage, Economic Policy Institute
- "Sanders Introduces Bill for $15-an-Hour Minimum Wage". Sen. Bernie Sanders. Retrieved 2015-09-15.
- The rapid success of Fight for $15: 'This is a trend that cannot be stopped' S. Greenhouse, The Guardian, US-News, 24 Jul 2015
- Alex Seitz-Wald, Democrats Advance Most Progressive Platform in Party History, NBC News (July 10, 2016).
- Limitone, Julia (24 May 2016). "Fmr. McDonald's USA CEO: $35K Robots Cheaper Than Hiring at $15 Per Hour".
- California Reaches Deal For $15 Minimum Wage S. Bernstein, The Huffington Post, 28 Mar 2016
- When the Minimum Wage Really Bites: The Effect of the U.S.-Level Minimum on Puerto Rico Alida Castillo-Freeman, Richard B. Freeman (1992) in Immigration and the Workforce: Economic Consequences for the United States and Source Areas
- Puerto Rico’s crisis illustrates the risks of minimum wage hikes C. Lane, Washington Post, July 8, 2015
- Memo To The Fight For $15: Puerto Rico Happens With A Too High Minimum Wage Tim Worstall, Forbes.com, July 3, 2015.
- Puerto Rico - A Way Forward by A. Krueger, R. Teja and A. Wolfe, June 29, 2015
- Minimum wage loophole written to help labor unions, Washington Examiner, December 24, 2014
- Burkhauser, R. V. (2014). Why minimum wage increases are a poor way to help the working poor (No. 86). IZA Policy Paper, Institute for the Study of Labor (IZA).
- Minimum wage at DMOZ
- Resource Guide on Minimum Wages from the International Labour Organization (a UN agency)
- Minimum Wage Rates in All States of India from Paycheck India
- The National Minimum Wage (U.K.) from official UK government website
- Find It! By Topic: Wages: Minimum Wage U.S. Department of Labor
- Characteristics of Minimum Wage Workers: 2009 U.S. Department of Labor, Bureau of Labor Statistics
- History of Changes to the Minimum Wage Law U.S. Department of Labor, Wage and Hour Division
- The Effects of a Minimum-wage Increase on Employment and Family Income Congressional Budget Office
- Inflation and the Real Minimum Wage: A Fact Sheet Congressional Research Service
- Minimum Wages in Central and Eastern Europe Database Central Europe
- Prices and Wages - research guide at the University of Missouri libraries
- Increasing national minimum wage - from the official Aaron and Partners site
- Issues about Minimum Wage from the AFL-CIO (U.S. labor federation favoring the minimum wage)
- Issue Guide on the Minimum Wage from the Economic Policy Institute
- A $15 U.S. Minimum Wage: How the Fast-Food Industry Could Adjust Without Shedding Jobs from the Political Economy Research Institute, January 2015.
- Reporting the Minimum Wage from The Cato Institute (U.S. libertarian organization opposed to the minimum wage)
- The Economic Effects of Minimum Wages from Show-Me Institute (U.S. libertarian organization opposed to the minimum wage)
- Economics in One Lesson: The Lesson Applied, Chapter 19: Minimum Wage Laws by Henry Hazlitt |
In the series: Pathfinder Series
Small Basic is a simple programming language that is easy to learn. It is designed to make programming uncomplicated and fun, especially for beginners. Before we start programming our Pathfinder game, we need to understand the programming environment and the components of Small Basic.
This module is about understanding how to “talk” to the computer, to make the computer perform tasks and functions. The basics covered here form the basis of understanding any computer programming language, and the simplicity of Microsoft Small Basic is an ideal platform for teaching these concepts.
This module provides examples that will be used to create a full, text-based, computer adventure game based on the Pathfinder (Part 1) module. At the end of this module students will understand the computer concepts of:
- Variables
- Conditional statements (also known as IF-THEN-ELSE)
- Loops (sometimes known as FOR-NEXT statements)
- Subroutines
In addition to these four basic concepts, the examples in this module also illustrate:
- how to write information to the screen,
- how to slow down the processing of a program, and
- how to generate random numbers.
This module exposes students to basic computer programming in a simplified and easy-to-navigate environment. It helps students construct program routines, and allows freedom to explore within the examples to see what happens. And finally, it helps students understand how computers understand sequential instructions and logic.
- Programming language: a set of commands, instructions, and other syntax used to create a software program.
- How do we communicate with computers?
- How do computers “do” the things we ask them to?
- Do you want to write a computer program?
- Do you think that computer programming is hard?
This module directly links to the Computer Studies curriculum as students will be learning about coding and how humans and computers interact. Students will learn variables, conditional statements (also known as IF-THEN-ELSE), loops (sometimes known as FOR-NEXT statements) and subroutines.
- A plain coloured, medium-sized ball (like a small rubber dodgeball or volleyball)
- A receptacle (hat / bag / bucket)
- A large die (or a regular die, but big foamy ones are more fun)
- A computer with the Microsoft Small Basic application installed OR Internet access to the online interface
- A pen and paper for taking notes
The Human If-Loop
This activity is similar to “Musical Chairs”. It is a human simulation of the For-Next loop and If statement concepts found in the computer tutorial; a Small Basic sketch of the same game logic appears after the list. Instructions:
- On a slip of paper, each student writes one colour they are wearing (must be clearly visible).
- The papers are put in a hat.
- The students then sit evenly spaced in a circle with about 6 inches between them.
- The teacher selects a slip from the hat and reads it out by saying “This ball (hold up the ball) is now ____”, and fills the blank with the colour noted on the paper.
- The teacher then hands the ball to any student wearing that colour, and puts the slip back in the hat.
- That student rolls the die.
- The ball is then passed clockwise, one student at a time, the number of times rolled on the die.
- The student with the ball now picks a colour from the hat and asks “Is the colour of this ball ____?”
- If the current “colour of the ball” matches the colour selected from the hat then that person says “True” and rolls the die and the sequence continues.
- But if the current “colour of the ball” is NOT the colour chosen, then that person says “False, this ball is now ____,” filling in the blank with the colour selected, and they must roll the ball to a person wearing that colour.
- Once received, that person states “this ball is ____” using its current “colour” and then rolls the die and the game continues.
- You can add rules such as: if a one is rolled, you have to change the direction the ball is passed.
- If you want to introduce a counting activity: for each change in colour add 2 to a scoreboard and for each time 1 is rolled on the die subtract 1 from the scoreboard.
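For classes that want to see the same logic in code, here is a rough Small Basic sketch of the game (the colour list, the number of rounds, and the screen positions are illustrative assumptions; the sketch compresses the physical passing and hat-drawing into random choices):

colours[1] = "red"
colours[2] = "blue"
colours[3] = "green"
ballColour = colours[Math.GetRandomNumber(3)]  'the teacher draws the first slip
For round = 1 To 10                            'each round is one roll of the die
  roll = Math.GetRandomNumber(6)               'pass the ball this many students clockwise
  picked = colours[Math.GetRandomNumber(3)]    'draw a slip from the hat
  If picked = ballColour Then
    GraphicsWindow.DrawText(10, round * 15, "Rolled " + roll + ", True: still " + ballColour)
  Else
    ballColour = picked                        '"False, this ball is now ____"
    GraphicsWindow.DrawText(10, round * 15, "Rolled " + roll + ", False: now " + ballColour)
  EndIf
EndFor

Every element of the activity has a code counterpart: the For loop is the repeated passing, the If statement is the “True/False” colour check, and the hat is the random number generator.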
The Small Basic Environment
The Small Basic screen has three main components: The Editor, the Toolbar, and the Help Panel. When you first launch Small Basic, you will see a window that looks like the following image (Figure 1).
The Intellisense Helper
When you start typing in the Editor, a window pops up with suggestions that may help you. Figure 2 shows that after typing “gra”, the IntelliSense window shows a series of suggestions with “GraphicsWindow” highlighted. We can click on this or simply hit “Enter” to accept it. Or you can scroll through the alternate suggestions using the arrow keys, the mouse wheel, or the scroll bar on the right. As you get familiar with the editor, this feature will make it easier and faster to write your programs.
Note: The online version of IntelliSense is not as fancy as this one, but it functions the same way.
As we will be adding graphics to our game, we will be using the GraphicsWindow component for our output. One of the advantages of this is the ability to use Unicode characters and our heritage language for programming AND display. This is exciting, because even though the programming keywords are in English, we can use any desired language for programming and interacting with the player. Though the code samples in this guide are in English, you will find alternative examples to highlight how code can be written using a First Nation language. The alternative examples in this module are presented in Plains Cree SRO as well as Cree syllabics.
Getting Started: “Welcome to Turtle Island” Program
Our first program is a basic “Welcome to Turtle Island” program that runs in a graphics window. In the editor type the following line of code:
GraphicsWindow.DrawText(10,10,"Welcome to Turtle Island")
That’s it! It is that simple! Press the Run button or hit the shortcut key (F5) and you should see a window similar to Figure 3.
It is good practice to save your programs frequently: if the power goes out or the computer locks up, you don’t want to lose all your hard work.
To keep this program, close the running program window or hit the “End Program” button in your Editor window. Then click the Save or Save As button. Choose a file name and save it.
Now that we have written our first program, let’s have a look at all the individual tools we will need to make our Pathfinder text game. Regardless of the programming language you choose, there are some common terms and concepts that nearly all programming languages have. The ones we will be exploring in this module are:
- variables,
- conditional statements (also known as IF-THEN-ELSE),
- loops (sometimes known as FOR-NEXT statements), and
- subroutines.
A variable is a “token” we can use to store data that we want to use later, and can be changed as the needs of our program changes. Here are a couple examples:
basket = "Raspberries" NumberOfRaspberries = 5
In this example you will see that each variable name is a single word and uses both upper and lower case characters. Variable names must be a single word, cannot start with a number, and though mixing upper and lower case is not required, it is good practice because it makes your code easier for others to read.
greeting = "Welcome to Turtle Island " name = "[your name]" GraphicsWindow.DrawText(10,10, "Hello " + name + welcome)
If all goes well, you should get the same result as the first program, so let’s jazz it up a bit. If you add the number addition section (lines 6 – 9 in the example; a reconstruction is shown below), you will see how the “+” character behaves when used to add numbers or to put words together. You can also take this time to play around with the position of your DrawText lines by changing the numbers!
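The “number addition section” did not survive in this copy of the module; the following is a minimal reconstruction of what lines 6 – 9 might have looked like (the variable names and screen positions are illustrative):

NumberOfRaspberries = 5
MoreRaspberries = 3
TotalRaspberries = NumberOfRaspberries + MoreRaspberries
GraphicsWindow.DrawText(10, 40, "Raspberries picked: " + TotalRaspberries)

With numbers on both sides, “+” does arithmetic (5 + 3 gives 8); the moment a piece of text is involved, as in the DrawText line, “+” instead joins the pieces together into one string.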
Tip: Variables are a great opportunity for us to use our heritage languages:
in Cree SRO, mawiswâkan = “ayoskanak”
in Cree Syllabics, ᒪᐃᐧᓵᐧᑲᐣ = “ᐊᔪᐢᑲᓇᐠ”
2. Conditional Statements
The If-Then-Else statement is easily understood by its literal English meaning: it asks a question, and if a certain condition is met, then something occurs. We can add an additional component to this called Else. So, in plain English: if the answer to our If question is true, then do something; otherwise (else), do something else. Try these:
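The code for this exercise is missing from this copy of the module, so the sketch below reconstructs it from the discussion that follows (the starting value of time is an arbitrary choice; change it and re-run to see each branch):

time = 19                    'the hour of the day on a 24-hour clock
greeting = "Good Morning"
If time > 12 Then
  greeting = "Good Afternoon"
EndIf
If time > 18 Then
  greeting = "Good Evening"
Else
  GraphicsWindow.DrawText(10, 30, "It is not evening yet.")
EndIf
GraphicsWindow.DrawText(10, 10, greeting)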
Notice the order of the IF statements. If we checked if time > 18 first, our greeting variable would say “Good Afternoon” at 6pm instead of “Good Evening” because the second If would check if it is after 12 and would change the greeting again. And there you go. That is how to use the IF THEN ELSE statement. Now let’s try some loops.
3. Loops
A loop is exactly what it sounds like – it loops, or repeats, a set of instructions forever or until a certain condition is met. We will often use a loop with an If statement. Let’s try this first:
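Both pieces of loop code are missing from this copy of the module; the sketch below reconstructs them to match the commands discussed next (the counts and the half-second delay are arbitrary choices). The first piece counts to ten and stops; the second shows how an If statement can send the program back to a label so it keeps looping until we stop it:

'First piece: a For-Next loop that repeats ten times
For i = 1 To 10
  GraphicsWindow.Clear()
  GraphicsWindow.DrawText(10, 10, "Count: " + i)
  Program.Delay(500)         'pause half a second so we can watch
EndFor

'Second piece: loop "forever" with an If statement and a label
count = 0
again:
count = count + 1
GraphicsWindow.Clear()
GraphicsWindow.DrawText(10, 10, "Count: " + count)
Program.Delay(500)
If count < 1000000 Then
  Goto again
EndIf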
Here we have a couple of new commands we can use: Clear() and Delay(). These will be very useful when we get to programming our game. We can make it loop forever (or until we stop it) by adding the If statement in the second piece of code. And that’s how easy it is to make a loop!
4. Subroutines (Sub)
Quite often you will find that as you write a program you will encounter an instance where you are rewriting or reusing the same code over and over again. This is a perfect case for introducing a Subroutine.
A subroutine is a small section of code that does a series of common or routine steps that we can call from somewhere else in the program. Subroutines are instructions that fit between lines that start with Sub and end with EndSub.
In the following example, add a Random() as the first line after your For loop start line, and then add lines 12 to 14 and see what happens.
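The numbered listing this paragraph refers to is not preserved here; the sketch below shows the same idea, with Random() called as the first line inside the For loop and the three-line Sub Random block (roughly the “lines 12 to 14” mentioned) at the end:

For i = 1 To 10
  Random()                   'jump to Sub Random below, then come back here
  GraphicsWindow.Clear()
  GraphicsWindow.DrawText(10, 10, "Roll " + i + ": " + roll)
  Program.Delay(500)
EndFor

Sub Random
  roll = Math.GetRandomNumber(6)   'a random whole number from 1 to 6
EndSub

Because all variables in Small Basic are global, the value that Sub Random puts in roll is still visible back inside the loop.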
What is happening is that when you use Random() in your code, the computer jumps to the subroutine called Random. It processes all the instructions in Sub Random, then returns to where it was called from and continues. You can think of subroutines as mini-programs.
The other new command here is Math.GetRandomNumber, which is a very handy command. We will encounter more basic programming concepts as we put together our Pathfinder video game, but we can tackle those as they arise. For now, make sure you’re comfortable with the four ideas covered in this module:
- Variables
- Conditional statements (also known as IF-THEN-ELSE)
- Loops (sometimes known as FOR-NEXT statements)
- Subroutines
Almost all programming languages utilize these concepts, and knowing them will allow you to learn other languages more quickly.
As a fun follow-up to the Human If-Loop game, split the class into two groups of equal size; each group plays its own passing game, as defined in the regular game. There must be two hats and two balls, and each hat must have a copy of everyone’s colours (each student writes their colour twice, once for each hat).
The games run simultaneously, but this time whenever a colour change happens, the person has to give the ball (and die) to a member of the other circle who is wearing that new colour. If no one in the other group is wearing that colour, they continue in their own circle. The two balls will likely change circles relatively frequently; sometimes one group will have both balls, and sometimes none. It may be helpful to have a volunteer or two to assist in transporting the balls between the groups.
This modified version represents two subroutines – sometimes one is running and sometimes both are running. Each circle is running its own routine, but they are able to interact with one another when needed, and share information back and forth in the form of a “coloured” ball.
- Small Basic Getting Started Guide
- Introducing Small Basic Full Guide
- Online Tutorials
- Program Gallery – includes Small Basic examples for things specific to games, math and science applications, and graphics |
A practical superlens, or super lens, is a lens which uses metamaterials to go beyond the diffraction limit. The diffraction limit is a feature of conventional lenses and microscopes that limits the fineness of their resolution. Many lens designs have been proposed that go beyond the diffraction limit in some way, but there are constraints and obstacles involved in realizing each of them.
Development of the concept
As Ernst Abbe reported in 1873, the lens of a camera or microscope is incapable of capturing some very fine details of any given image. The super lens, on the other hand, is intended to capture such details. Consequently, the limitation of conventional lenses has inhibited progress in certain areas of the biological sciences: a virus or DNA molecule is out of visual range with the highest-powered conventional microscopes, and the minute processes of cellular proteins moving alongside microtubules of a living cell cannot be watched in their natural environment. Additionally, computer chips and their interrelated microelectronics are manufactured at smaller and smaller scales, which requires specialized optical equipment that is likewise limited by the conventional lens. Hence, the principles governing a super lens show that it has potential for imaging DNA molecules and cellular protein processes, or for aiding in the manufacture of even smaller computer chips and microelectronics.
Furthermore, conventional lenses capture only the propagating light waves. These are waves that travel from a light source or an object to a lens, or the human eye. This can alternatively be studied as the far field. In contrast, a superlens captures propagating light waves and waves that stay on top of the surface of an object, which, alternatively, can be studied as both the far field and the near field.
An image of an object can be defined as a tangible or visible representation of the features of that object. A requirement for image formation is interaction with fields of electromagnetic radiation. Furthermore, the level of feature detail, or image resolution, is limited by the wavelength of the radiation. For example, with optical microscopy, image production and resolution depend on the wavelength of visible light. However, with a superlens, this limitation may be removed, and a new class of image generated.
Electron beam lithography can overcome this resolution limit. Optical microscopy, on the other hand, cannot, being limited to some value just above 200 nanometers. However, new technologies combined with optical microscopy are beginning to allow for increased feature resolution (see sections below).
One definition of the resolution barrier is a resolution cut-off at half the wavelength of light. The visible spectrum extends from 390 nanometers to 750 nanometers; green light, roughly halfway in between, is around 500 nanometers. Microscopy also takes into account parameters such as the lens aperture, the distance from the object to the lens, and the refractive index of the observed material. This combination defines the resolution cutoff, or optical limit, of microscopy, which works out to about 200 nanometers. Therefore, conventional lenses, which literally construct an image of an object by using "ordinary" propagating light waves, discard the information carried in evanescent waves, the waves that encode the very fine, minuscule details of the object at dimensions below 200 nanometers. For this reason, conventional optical systems, such as microscopes, have been unable to accurately image very small, nanometer-sized structures or nanometer-sized organisms in vivo, such as individual viruses or DNA molecules.
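The roughly 200 nanometer figure follows from Abbe's classic formula; as a quick check (assuming green light at λ ≈ 500 nm and a good oil-immersion objective with numerical aperture NA ≈ 1.25, both illustrative values):

$$ d = \frac{\lambda}{2\,\mathrm{NA}}, \qquad \mathrm{NA} = n\sin\theta, \qquad d \approx \frac{500\ \text{nm}}{2 \times 1.25} = 200\ \text{nm}. $$

Here n is the refractive index of the medium between the object and the lens, and θ is the half-angle of the light cone the lens collects, the same parameters the paragraph above mentions.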
The limitations of standard optical microscopy (bright field microscopy) lie in three areas:
- The technique can only image dark or strongly refracting objects effectively.
- Diffraction limits the resolution of the object, or cell, to approximately 200 nanometers.
- Out of focus light from points outside the focal plane reduces image clarity.
Live biological cells in particular generally lack sufficient contrast to be studied successfully, because the internal structures of the cell are mostly colorless and transparent. The most common way to increase contrast is to stain the different structures with selective dyes, but often this involves killing and fixing the sample. Staining may also introduce artifacts, apparent structural details that are caused by the processing of the specimen and are thus not a legitimate feature of the specimen.
The conventional glass lens is pervasive throughout our society and in the sciences. It is one of the fundamental tools of optics simply because it interacts with various wavelengths of light. At the same time, the wavelength of light can be analogous to the width of a pencil used to draw the ordinary images. The limit becomes noticeable, for example, when the laser used in a digital video system can only detect and deliver details from a DVD based on the wavelength of light. The image cannot be rendered any sharper beyond this limitation.
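As a rough number (the 650 nm wavelength and NA ≈ 0.6 of a standard DVD pickup are assumed here for illustration; they are not given in the text above):

$$ d \approx \frac{\lambda}{2\,\mathrm{NA}} \approx \frac{650\ \text{nm}}{2 \times 0.6} \approx 540\ \text{nm}, $$

so pit details much finer than about half a micrometre simply cannot be resolved by the read-out optics, however the disc is mastered.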
Thus, when an object emits or reflects light there are two types of electromagnetic radiation associated with this phenomenon. These are the near field radiation and the far field radiation. As implied by its description, the far field escapes beyond the object. It is then easily captured and manipulated by a conventional glass lens. However, useful (nanometer-sized) resolution details are not observed, because they are hidden in the near field. They remain localized, staying much closer to the light emitting object, unable to travel, and unable to be captured by the conventional lens. Controlling the near field radiation, for high resolution, can be accomplished with a new class of materials not easily obtained in nature. These are unlike familiar solids, such as crystals, which derive their properties from atomic and molecular units. The new material class, termed metamaterials, obtains its properties from its artificially larger structure. This has resulted in novel properties, and novel responses, which allow for details of images that surpass the limitations imposed by the wavelength of light.
This has led to the desire to view live biological cell interactions in real time, in a natural environment, and to the need for subwavelength imaging. Subwavelength imaging can be defined as optical microscopy with the ability to see details of an object or organism below the wavelength of visible light (see discussion in the sections above). In other words, it is the capability to observe, in real time, below 200 nanometers. Optical microscopy is a non-invasive technique and technology because everyday light is the transmission medium. Imaging below the optical limit in optical microscopy (subwavelength) can be engineered for the cellular level, and in principle for the nanometer level.
For example, in 2007 a technique was demonstrated in which a metamaterials-based lens coupled with a conventional optical lens could manipulate visible light to see (nanoscale) patterns that were too small to be observed with an ordinary optical microscope. This has potential applications for observing a whole living cell, and cellular processes such as how proteins and fats move in and out of cells. In the technology domain, it could be used to improve the first steps of photolithography and nanolithography, essential for manufacturing ever smaller computer chips.
Focusing at subwavelength has become a unique imaging technique which allows visualization of features on the viewed object which are smaller than the wavelength of the photons in use. A photon is the minimum unit of light. While previously thought to be physically impossible, subwavelength imaging has been made possible through the development of metamaterials. This is generally accomplished using a layer of metal such as gold or silver a few atoms thick, which acts as a superlens, or by means of 1D and 2D photonic crystals. There is a subtle interplay between propagating waves, evanescent waves, near field imaging and far field imaging discussed in the sections below.
Early subwavelength imaging
Metamaterial lenses (superlenses) are able to reconstruct nanometer-sized images by producing a negative refractive index in each instance, which compensates for the swiftly decaying evanescent waves. Prior to metamaterials, numerous other techniques had been proposed, and even demonstrated, for creating super-resolution microscopy. As far back as 1928, Edward Hutchinson Synge is credited with conceiving and developing the idea for what would ultimately become near-field scanning optical microscopy.
In 1974 proposals for two-dimensional fabrication techniques were presented. These proposals included contact imaging to create a pattern in relief, photolithography, electron lithography, X-ray lithography, or ion bombardment, on an appropriate planar substrate. The metamaterial lens and these varieties of lithography share the goal of optically resolving features with dimensions much smaller than the vacuum wavelength of the exposing light. In 1981 two different techniques of contact imaging of planar (flat) submicroscopic metal patterns with blue light (400 nm) were demonstrated. One demonstration produced an image resolution of 100 nm and the other a resolution of 50 to 70 nm.
Since at least 1998, near-field optical lithography has been designed to create nanometer-scale features. Research on this technology continued as the first experimentally demonstrated negative-index metamaterial came into existence in 2000–2001. The effectiveness of electron-beam lithography was also being researched at the beginning of the new millennium for nanometer-scale applications. Imprint lithography was shown to have desirable advantages for nanometer-scaled research and technology.
Advanced deep UV photolithography can now offer sub-100 nm resolution, yet the minimum feature size and spacing between patterns are determined by the diffraction limit of light. Its derivative technologies such as evanescent near-field lithography, near-field interference lithography, and phase-shifting mask lithography were developed to overcome the diffraction limit.
Analysis of the diffraction limit
The original problem of the perfect lens: The general expansion of an EM field emanating from a source consists of both propagating waves and near-field or evanescent waves. An example of a 2-D line source with an electric field which has S-polarization will have plane waves consisting of propagating and evanescent components, which advance parallel to the interface. As both the propagating and the smaller evanescent waves advance in a direction parallel to the medium interface, evanescent waves decay in the direction of propagation. Ordinary (positive index) optical elements can refocus the propagating components, but the exponentially decaying inhomogeneous components are always lost, leading to the diffraction limit for focusing to an image.
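A minimal way to see the split, for a single S-polarized plane-wave component with transverse wavenumber $k_x$ (the notation is assumed here, matching the formulas under the next heading):

$$ E(x,z) \propto e^{ik_x x}\, e^{ik_z z}, \qquad k_z = \sqrt{\omega^2/c^2 - k_x^2}. $$

For $k_x \le \omega/c$ the component propagates; for $k_x > \omega/c$, $k_z = i\kappa$ with $\kappa = \sqrt{k_x^2 - \omega^2/c^2}$, and the amplitude falls off as $e^{-\kappa z}$. These exponentially decaying components are exactly the ones that carry the sub-wavelength detail and that an ordinary lens cannot recover.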
A superlens is a lens which is capable of subwavelength imaging, allowing for magnification of near field rays. Conventional lenses have a resolution on the order of one wavelength due to the so-called diffraction limit. This limit hinders imaging very small objects, such as individual atoms, which are much smaller than the wavelength of visible light. A superlens is able to beat the diffraction limit. An example is the initial lens described by Pendry, which uses a slab of material with a negative index of refraction as a flat lens. In theory, a perfect lens would be capable of perfect focus — meaning that it could perfectly reproduce the electromagnetic field of the source plane at the image plane.
The diffraction limit as restriction on resolution
The performance limitation of conventional lenses is due to the diffraction limit. Following Pendry (2000), the diffraction limit can be understood as follows. Consider an object and a lens placed along the z-axis so the rays from the object are traveling in the +z direction. The field emanating from the object can be written in terms of its angular spectrum, as a superposition of plane waves:

$$ E(x,y,z,t) = \iint dk_x\, dk_y\, \hat{E}(k_x,k_y)\, e^{i\left(k_x x + k_y y + k_z z - \omega t\right)} $$

where $k_z$ is a function of $k_x, k_y$ as:

$$ k_z = +\sqrt{\frac{\omega^2}{c^2} - k_x^2 - k_y^2} $$

Only the positive square root is taken, as the energy is going in the +z direction. All of the components of the angular spectrum of the image for which $k_z$ is real are transmitted and re-focused by an ordinary lens. However, if

$$ k_x^2 + k_y^2 > \frac{\omega^2}{c^2}, $$

then $k_z$ becomes imaginary, and the wave is an evanescent wave whose amplitude decays as the wave propagates along the z-axis. This results in the loss of the high angular-frequency components of the wave, which contain information about the high-frequency (small-scale) features of the object being imaged. The highest resolution that can be obtained can be expressed in terms of the wavelength:

$$ \Delta \approx \frac{2\pi c}{\omega} = \lambda $$

A superlens overcomes the limit. A Pendry-type superlens has an index of n = −1 (ε = −1, µ = −1), and in such a material, transport of energy in the +z direction requires the z-component of the wave vector to have the opposite sign:

$$ k_z' = -\sqrt{\frac{\omega^2}{c^2} - k_x^2 - k_y^2} $$
For large angular frequencies, the evanescent wave now grows, so with proper lens thickness, all components of the angular spectrum can be transmitted through the lens undistorted. There are no problems with conservation of energy, as evanescent waves carry none in the direction of growth: the Poynting vector is oriented perpendicularly to the direction of growth. For traveling waves inside a perfect lens, the Poynting vector points in the direction opposite to the phase velocity.
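The compensation can be stated compactly (a sketch: take the source plane a distance $d_1$ in front of a lossless $n = -1$ slab of thickness $d$ and the image plane a distance $d_2$ behind it, with $d = d_1 + d_2$; these labels are assumptions, not from the text). An evanescent component decays in the free-space gaps but grows inside the slab, and the two effects cancel exactly at the image plane:

$$ e^{-\kappa d_1}\, e^{+\kappa d}\, e^{-\kappa d_2} = 1, \qquad \kappa = \sqrt{k_x^2 + k_y^2 - \omega^2/c^2}. $$

Since this holds for every $\kappa$, every evanescent component arrives with its original amplitude, which is precisely what "undistorted" means above.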
Effects of negative index of refraction
Normally when a wave passes through the interface of two materials, the wave appears on the opposite side of the normal. However, if the interface is between a material with a positive index of refraction and another material with a negative index of refraction, the wave will appear on the same side of the normal. Pendry's idea of a perfect lens is a flat material where n = −1. Such a lens allows for near field rays—which normally decay due to the diffraction limit—to focus once within the lens and once outside the lens, allowing for subwavelength imaging.
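In Snell's-law terms (a standard textbook statement, added here for concreteness):

$$ n_1 \sin\theta_1 = n_2 \sin\theta_2 \quad\Longrightarrow\quad \theta_2 = -\theta_1 \ \ \text{when } n_2 = -n_1, $$

so the refracted ray bends back to the same side of the normal. For a flat slab of thickness $d$ with $n = -1$ in air, a point source a distance $a < d$ in front of the slab is consequently refocused once inside the slab (at depth $a$) and once beyond it (at a distance $d - a$ past the rear face), giving a total source-to-image distance of $2d$.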
Superlens development and construction
Superlens construction was at one time thought to be impossible. In 2000, Pendry claimed that a simple slab of left-handed material would do the job. The experimental realization of such a lens took, however, some more time, because it is not that easy to fabricate metamaterials with both negative permittivity and permeability. Indeed, no such material exists naturally, and construction of the required metamaterials is non-trivial. Furthermore, it was shown that the parameters of the material are extremely sensitive (the index must equal −1); small deviations make the subwavelength resolution unobservable. Due to the resonant nature of metamaterials, on which many (proposed) implementations of superlenses depend, metamaterials are highly dispersive. The sensitivity of the superlens to material parameters causes superlenses based on metamaterials to have a limited usable frequency range. This initial theoretical superlens design consisted of a metamaterial that compensated for wave decay and reconstructed images in the near field. Both propagating and evanescent waves could contribute to the resolution of the image.
Pendry also suggested that a lens having only one negative parameter would form an approximate superlens, provided that the distances involved are also very small and the source polarization is appropriate. For visible light this is a useful substitute, since engineering metamaterials with a negative permeability at the frequency of visible light is difficult. Metals are then a good alternative, as they have negative permittivity (but not negative permeability). Pendry suggested using silver due to its relatively low loss at the predicted wavelength of operation (356 nm). In 2003, Pendry's theory was first experimentally demonstrated by Parimi et al. at RF/microwave frequencies. In 2005, two independent groups verified Pendry's lens in the UV range, both using thin layers of silver illuminated with UV light to produce "photographs" of objects smaller than the wavelength. Negative refraction of visible light had been experimentally verified in an yttrium orthovanadate (YVO4) bicrystal in 2003.
In 2004, the first superlens with a negative refractive index provided resolution three times better than the diffraction limit and was demonstrated at microwave frequencies. In 2005, the first near-field superlens was demonstrated by N. Fang et al., but the lens did not rely on negative refraction. Instead, a thin silver film was used to enhance the evanescent modes through surface plasmon coupling. Almost at the same time Melville and Blaikie succeeded with a near-field superlens. Other groups followed. Two developments in superlens research were reported in 2008. In one of these, a metamaterial was formed from silver nanowires electrochemically deposited in porous aluminium oxide; the material exhibited negative refraction.
The superlens has not yet been demonstrated at visible or near-infrared frequencies (Nielsen, R. B.; 2010). Furthermore, as dispersive materials, these are limited to functioning at a single wavelength. Proposed solutions are metal–dielectric composites (MDCs) and multilayer lens structures. The multi-layer superlens appears to have better subwavelength resolution than the single-layer superlens. Losses are less of a concern with the multi-layer system, but so far it appears to be impractical because of impedance mismatch.
Perfect lenses
When the world is observed through conventional lenses, the sharpness of the image is determined by and limited to the wavelength of light. Around the year 2000, a slab of negative-index metamaterial was theorized to create a lens with capabilities beyond conventional (positive-index) lenses. Pendry proposed that a thin slab of negative-refractive metamaterial might overcome known problems with common lenses to achieve a "perfect" lens that would focus the entire spectrum, both the propagating and the evanescent spectra.
A slab of silver was proposed as the metamaterial. More specifically, such silver thin film can be regarded as a metasurface. As light moves away (propagates) from the source, it acquires an arbitrary phase. Through a conventional lens the phase remains consistent, but the evanescent waves decay exponentially. In the flat metamaterial DNG slab, normally decaying evanescent waves are contrarily amplified. Furthermore, as the evanescent waves are now amplified, the phase is reversed.
Therefore, a type of lens was proposed, consisting of a metal film metamaterial. When illuminated near its plasma frequency, the lens could be used for superresolution imaging that compensates for wave decay and reconstructs images in the near-field. In addition, both propagating and evanescent waves contribute to the resolution of the image.
Pendry suggested that left-handed slabs allow "perfect imaging" if they are completely lossless, impedance-matched, and their refractive index is −1 relative to the surrounding medium. Theoretically, this would be a breakthrough in that the optical version could resolve objects as minuscule as nanometers across. Pendry predicted that double-negative (DNG) metamaterials with a refractive index of n = −1 can act, at least in principle, as a "perfect lens" allowing imaging resolution which is limited not by the wavelength, but rather by material quality.
Other studies concerning the perfect lens
Further research demonstrated that Pendry's theory behind the perfect lens was not exactly correct. The analysis of the focusing of the evanescent spectrum (equations 13–21 in reference ) was flawed. In addition, this applies to only one (theoretical) instance, and that is one particular medium that is lossless, nondispersive and the constituent parameters are defined as:
- ε(ω) / ε0 = µ(ω) / µ0 = −1, which in turn results in a refractive index of n = −1
However, the final intuitive result of this theory, that both the propagating and evanescent waves are focused, producing a converging focal point within the slab and a second focal point beyond it, turned out to be correct.
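For reference, the sign conventions behind this special case (standard results, spelled out here rather than in the original): when ε and µ are both negative, causality forces the negative branch of the square root, so

$$ n = -\sqrt{\varepsilon_r \mu_r} = -1, \qquad \frac{Z}{Z_0} = \sqrt{\frac{\mu_r}{\varepsilon_r}} = 1. $$

The slab is therefore simultaneously negatively refracting and perfectly impedance-matched to its surroundings, which is why, in the ideal case, no reflection occurs at its faces.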
If the DNG metamaterial medium has a large negative index or becomes lossy or dispersive, Pendry's perfect lens effect cannot be realized. As a result, the perfect lens effect does not exist in general. According to FDTD simulations at the time (2001), the DNG slab acts like a converter from a pulsed cylindrical wave to a pulsed beam. Furthermore, in reality (in practice), a DNG medium must be and is dispersive and lossy, which can have either desirable or undesirable effects, depending on the research or application. Consequently, Pendry's perfect lens effect is inaccessible with any metamaterial designed to be a DNG medium.
Another analysis, in 2002, of the perfect lens concept showed it to be in error while using the lossless, dispersionless DNG as the subject. This analysis mathematically demonstrated that subtleties of evanescent waves, restriction to a finite slab and absorption had led to inconsistencies and divergencies that contradict the basic mathematical properties of scattered wave fields. For example, this analysis stated that absorption, which is linked to dispersion, is always present in practice, and absorption tends to transform amplified waves into decaying ones inside this medium (DNG).
A third analysis of Pendry's perfect lens concept, published in 2003, used the recent demonstration of negative refraction at microwave frequencies as confirming the viability of the fundamental concept of the perfect lens. In addition, this demonstration was thought to be experimental evidence that a planar DNG metamaterial would refocus the far field radiation of a point source. However, the perfect lens would require significantly different values for permittivity, permeability, and spatial periodicity than the demonstrated negative refractive sample.
This study agrees that any deviation from conditions where ε = µ = −1 results in the normal, conventional, imperfect image that degrades exponentially, i.e., the diffraction limit. The perfect lens solution in the absence of losses is, again, not practical and can lead to paradoxical interpretations.
It was determined that although resonant surface plasmons are undesirable for imaging, these turn out to be essential for recovery of decaying evanescent waves. This analysis discovered that metamaterial periodicity has a significant effect on the recovery of types of evanescent components. In addition, achieving subwavelength resolution is possible with current technologies. Negative refractive indices have been demonstrated in structured metamaterials. Such materials can be engineered to have tunable material parameters, and so achieve the optimal conditions. Losses can be minimized in structures utilizing superconducting elements. Furthermore, consideration of alternate structures may lead to configurations of left-handed materials that can achieve subwavelength focusing. Such structures were being studied at the time.
Near-field imaging with magnetic wires
Pendry's theoretical lens was designed to focus both propagating waves and the near-field evanescent waves. From the permittivity "ε" and magnetic permeability "µ" an index of refraction "n" is derived. The index of refraction determines how light is bent on traversing from one material to another. In 2003, it was suggested that a metamaterial constructed with alternating, parallel layers of n = −1 materials and n = +1 materials would be a more effective design for a metamaterial lens. It is an effective medium made up of a multi-layer stack, which exhibits birefringence, with n_z = ∞ and n_x = 0, the effective refractive indices perpendicular and parallel to the layers, respectively.
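The extreme anisotropy can be sketched with the standard effective-medium formulas for a layered stack (equal layer thicknesses are assumed here; the same forms hold for µ):

$$ \varepsilon_\parallel = \frac{\varepsilon_1 + \varepsilon_2}{2}, \qquad \frac{1}{\varepsilon_\perp} = \frac{1}{2}\left(\frac{1}{\varepsilon_1} + \frac{1}{\varepsilon_2}\right). $$

With alternating n = −1 and n = +1 layers, ε_2 = −ε_1, so the in-plane average vanishes (ε_∥ → 0) while the perpendicular component diverges (ε_⊥ → ∞), reproducing the n_x = 0, n_z = ∞ birefringence quoted above.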
Like a conventional lens, the z-direction is along the axis of the roll. The resonant frequency (ω0) – close to 21.3 MHz – is determined by the construction of the roll. Damping is achieved by the inherent resistance of the layers and the lossy part of the permittivity.
Simply put, as the field pattern is transferred from the input to the output face of a slab, so the image information is transported across each layer. This was experimentally demonstrated. To test the two-dimensional imaging performance of the material, an antenna was constructed from a pair of anti-parallel wires in the shape of the letter M. This generated a line of magnetic flux, so providing a characteristic field pattern for imaging. It was placed horizontally, and the material, consisting of 271 Swiss rolls tuned to 21.5 MHz, was positioned on top of it. The material does indeed act as an image transfer device for the magnetic field. The shape of the antenna is faithfully reproduced in the output plane, both in the distribution of the peak intensity, and in the “valleys” that bound the M.
A consistent characteristic of the very near (evanescent) field is that the electric and magnetic fields are largely decoupled. This allows for nearly independent manipulation of the electric field with the permittivity and the magnetic field with the permeability.
Furthermore, this is a highly anisotropic system. Therefore, the transverse (perpendicular) components of the EM field radiated by the material, that is, the wavevector components kx and ky, are decoupled from the longitudinal component kz. So the field pattern should be transferred from the input to the output face of a slab of material without degradation of the image information.
Optical super lens with silver metamaterial
In 2003, a group of researchers showed that optical evanescent waves would be enhanced as they passed through a silver metamaterial lens. This was referred to as a diffraction-free lens. Although a coherent, high-resolution, image was not intended, nor achieved, regeneration of the evanescent field was experimentally demonstrated.
By 2003 it had been known for decades that evanescent waves could be enhanced by producing excited states at interface surfaces. However, the use of surface plasmons to reconstruct evanescent components was not tried until Pendry's proposal (see "Perfect lenses" above). By studying films of varying thickness, it was noted that a rapidly growing transmission coefficient occurs under the appropriate conditions. This demonstration provided direct evidence that the foundation of superlensing is solid, and suggested the path that would enable the observation of superlensing at optical wavelengths.
In 2005, a coherent, high-resolution image was produced (based on the 2003 results). A thinner slab of silver (35 nm) was better for sub-diffraction-limited imaging, giving a resolution of one-sixth of the illumination wavelength. This type of lens was used to compensate for wave decay and reconstruct images in the near field. Prior attempts to create a working superlens had used a slab of silver that was too thick.
Objects were imaged as small as 40 nm across. In 2005 the imaging resolution limit for optical microscopes was at about one tenth the diameter of a red blood cell. With the silver superlens this results in a resolution of one hundredth of the diameter of a red blood cell.
Conventional lenses, whether man-made or natural, create images by capturing the propagating light waves all objects emit and then bending them. The angle of the bend is determined by the index of refraction and has always been positive until the fabrication of artificial negative index materials. Objects also emit evanescent waves that carry details of the object, but are unobtainable with conventional optics. Such evanescent waves decay exponentially and thus never become part of the image resolution, an optics threshold known as the diffraction limit. Breaking this diffraction limit, and capturing evanescent waves are critical to the creation of a 100-percent perfect representation of an object.
In addition, conventional optical materials suffer a diffraction limit because only the propagating components are transmitted (by the optical material) from a light source. The non-propagating components, the evanescent waves, are not transmitted. Moreover, lenses that improve image resolution by increasing the index of refraction are limited by the availability of high-index materials, and point by point subwavelength imaging of electron microscopy also has limitations when compared to the potential of a working superlens. Scanning electron and atomic force microscopes are now used to capture detail down to a few nanometers. However, such microscopes create images by scanning objects point by point, which means they are typically limited to non-living samples, and image capture times can take up to several minutes.
With current optical microscopes, scientists can only make out relatively large structures within a cell, such as its nucleus and mitochondria. With a superlens, optical microscopes could one day reveal the movements of individual proteins traveling along the microtubules that make up a cell's skeleton, the researchers said. Optical microscopes can capture an entire frame with a single snapshot in a fraction of a second. With superlenses this opens up nanoscale imaging to living materials, which can help biologists better understand cell structure and function in real time.
Advances in magnetic coupling in the THz and infrared regimes brought the realization of a possible metamaterial superlens closer. However, in the near field, the electric and magnetic responses of materials are decoupled. Therefore, for transverse magnetic (TM) waves, only the permittivity needed to be considered. Noble metals then become natural selections for superlensing, because negative permittivity is easily achieved in them.
By designing the thin metal slab so that the surface current oscillations (the surface plasmons) match the evanescent waves from the object, the superlens is able to substantially enhance the amplitude of the field. Superlensing results from the enhancement of evanescent waves by surface plasmons.
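The matching condition can be made concrete with the textbook dispersion relation for a surface plasmon at a metal–dielectric interface (a standard result, quoted here for context):

$$ k_{sp} = \frac{\omega}{c}\,\sqrt{\frac{\varepsilon_m\,\varepsilon_d}{\varepsilon_m + \varepsilon_d}}. $$

Because ε_m is negative, k_sp can greatly exceed ω/c as ε_m approaches −ε_d, so the film's surface-current oscillations can resonate with, and amplify, evanescent components of correspondingly large transverse wavevector.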
The key to the superlens is its ability to significantly enhance and recover the evanescent waves that carry information at very small scales. This enables imaging well below the diffraction limit. No lens is yet able to completely reconstitute all the evanescent waves emitted by an object, so the goal of a 100-percent perfect image will persist. However, many scientists believe that a true perfect lens is not possible because there will always be some energy absorption loss as the waves pass through any known material. In comparison, the superlens image is substantially better than the one created without the silver superlens.
50-nm flat silver layer
In February 2004, an electromagnetic radiation focusing system based on a negative-index metamaterial plate accomplished subwavelength imaging in the microwave domain. This showed that obtaining separated images at much less than the wavelength of light is possible. Also in 2004, a silver layer was used for sub-micrometre near-field imaging. Super-high resolution was not achieved, but none was intended; the silver layer was too thick to allow significant enhancement of evanescent field components.
In early 2005, feature resolution was achieved with a different silver layer. Though this was not an actual image, feature resolution was the intent: dense features down to 250 nm were resolved in a 50 nm thick photoresist using illumination from a mercury lamp. Using simulations (FDTD), the study noted that resolution improvements could be expected for imaging through silver lenses, rather than by another method of near-field imaging.
Building on this prior research, super resolution was achieved at optical frequencies using a 50 nm flat silver layer. The capability of resolving an image beyond the diffraction limit, for far-field imaging, is defined here as superresolution.
The image fidelity is much improved over the earlier results of the previous experimental lens stack. Imaging of sub-micrometre features has been greatly improved by using thinner silver and spacer layers and by reducing the surface roughness of the lens stack. The ability of the silver lenses to image gratings has been used as the ultimate resolution test, as there is a concrete limit on the ability of a conventional (far-field) lens to image a periodic object – in this case the object is a diffraction grating. For normal-incidence illumination, the minimum spatial period that can be resolved with wavelength λ through a medium with refractive index n is λ/n. Zero contrast would therefore be expected in any (conventional) far-field image below this limit, no matter how good the imaging resist might be.
For the (super) lens stack used here, computation gives a diffraction-limited resolution of 243 nm. Gratings with periods from 500 nm down to 170 nm were imaged, with the depth of the modulation in the resist reducing as the grating period reduces. All of the gratings with periods above the diffraction limit (243 nm) were well resolved. The key results of this experiment are the super-imaging of the sub-diffraction-limit periods of 200 nm and 170 nm. In both cases the gratings are resolved, even though the contrast is diminished, giving experimental confirmation of Pendry's superlensing proposal.
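As a consistency check on the 243 nm figure (assuming mercury i-line illumination at λ = 365 nm, consistent with the mercury lamp mentioned above, and a resist index of n ≈ 1.5; both values are assumptions here):

$$ \frac{\lambda}{n} = \frac{365\ \text{nm}}{1.5} \approx 243\ \text{nm}, $$

so the resolved 200 nm and 170 nm grating periods sit clearly below anything a conventional far-field lens could transfer.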
Negative index GRIN lenses
Gradient Index (GRIN) – The larger range of material response available in metamaterials should lead to improved GRIN lens design. In particular, since the permittivity and permeability of a metamaterial can be adjusted independently, metamaterial GRIN lenses can presumably be better matched to free space. The GRIN lens is constructed by using a slab of NIM with a variable index of refraction in the y direction, perpendicular to the direction of propagation z.
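As a sketch of how a graded index focuses (a generic thin-slab argument with an assumed quadratic profile, not the specific design of the cited work): a slab of thickness t with index profile n(y) = n0 − a·y² imparts a transverse phase

$$ \phi(y) = \frac{\omega}{c}\,n(y)\,t = \phi_0 - \frac{\omega}{c}\,a\,t\,y^2, $$

which matches the quadratic phase of an ideal thin lens of focal length f = 1/(2·a·t). The metamaterial advantage is that ε and µ can be graded independently, so the profile n(y) can be engineered, even through negative values, while keeping the slab impedance-matched to free space.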
Far-field superlens
In 2005, a group proposed a theoretical way to overcome the near-field limitation using a new device termed a far-field superlens (FSL), which is a properly designed periodically corrugated metallic slab-based superlens.
Imaging was experimentally demonstrated in the far field, taking the next step after near-field experiments. The key element is termed a far-field superlens (FSL), which consists of a conventional superlens and a nanoscale coupler.
Focusing beyond the diffraction limit with far-field time reversal
An approach is presented for subwavelength focusing of microwaves using both a time-reversal mirror placed in the far field and a random distribution of scatterers placed in the near field of the focusing point.
Hyperlens
Once the capability for near-field imaging had been demonstrated, the next step was to project a near-field image into the far field. This concept, including technique and materials, is dubbed the "hyperlens".
The capability of a metamaterial-hyperlens for sub-diffraction-limited imaging is shown below.
Sub-diffraction imaging in the far field
With conventional optical lenses, the far field is a limit too distant for evanescent waves to arrive intact. When imaging an object, this limits the optical resolution of lenses to the order of the wavelength of light. The non-propagating evanescent waves carry detailed information in the form of high spatial frequencies, and recovering them is the key to overcoming this limitation. Therefore, projecting image details normally limited by diffraction into the far field does require recovery of the evanescent waves.
In essence, the step leading up to this investigation and demonstration was the employment of an anisotropic metamaterial with a hyperbolic dispersion, such that ordinary evanescent waves propagate along the radial direction of the layered metamaterial. On a microscopic level, the large spatial-frequency waves propagate through coupled surface plasmon excitations between the metallic layers.
In 2007, just such an anisotropic metamaterial was employed as a magnifying optical hyperlens. The hyperlens consisted of a curved periodic stack of thin silver and alumina layers (each 35 nanometers thick) deposited on a half-cylindrical cavity fabricated on a quartz substrate. The radial and tangential permittivities have different signs.
Upon illumination, the scattered evanescent field from the object enters the anisotropic medium and propagates along the radial direction. Combined with another effect of the metamaterial, a magnified image at the outer diffraction limit-boundary of the hyperlens occurs. Once the magnified feature is larger than (beyond) the diffraction limit, it can then be imaged with a conventional optical microscope, thus demonstrating magnification and projection of a sub-diffraction-limited image into the far field.
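The underlying dispersion relation is standard for such cylindrically anisotropic media (the notation below is assumed, not taken from the text): for TM waves,

$$ \frac{k_r^2}{\varepsilon_\theta} + \frac{k_\theta^2}{\varepsilon_r} = \frac{\omega^2}{c^2}, $$

and because ε_r and ε_θ have opposite signs the curve is a hyperbola rather than an ellipse, so arbitrarily large tangential wavevectors k_θ still correspond to real, propagating k_r. As a wave travels outward, conservation of angular momentum (the mode number m = k_θ·r is fixed) compresses k_θ in proportion to 1/r, so features are magnified by roughly the ratio of outer to inner radius, M ≈ R_out/R_in.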
The hyperlens magnifies the object by transforming the scattered evanescent waves into propagating waves in the anisotropic medium, projecting a high-resolution image into the far field. This type of metamaterials-based lens, paired with a conventional optical lens, is therefore able to reveal patterns too small to be discerned with an ordinary optical microscope. In one experiment, the lens was able to distinguish two 35-nanometer lines etched 150 nanometers apart. Without the metamaterial, the microscope showed only one thick line.
In a control experiment, the line-pair object was imaged without the hyperlens. The line pair could not be resolved because the diffraction limit of the (optical) aperture was 260 nm. Because the hyperlens supports the propagation of a very broad spectrum of wave vectors, it can magnify arbitrary objects with sub-diffraction-limited resolution.
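A quick check of the numbers in this experiment (only the required magnification is computed; the device radii themselves are not given above): to separate a 150 nm spacing through an aperture whose diffraction limit is 260 nm, the hyperlens must magnify by at least

$$ M \ge \frac{260\ \text{nm}}{150\ \text{nm}} \approx 1.7, $$

so any hyperlens with R_out/R_in of roughly 1.7 or more projects the pair far enough apart for a conventional microscope to resolve them.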
Although this work appears to be limited by being only a cylindrical hyperlens, the next step is to design a spherical lens. That lens will exhibit three-dimensional capability. Near-field optical microscopy uses a tip to scan an object. In contrast, this optical hyperlens magnifies an image that is sub-diffraction-limited. The magnified sub-diffraction image is then projected into the far field.
The optical hyperlens shows a notable potential for applications, such as real-time biomolecular imaging and nanolithography. Such a lens could be used to watch cellular processes that have been impossible to see. Conversely, it could be used to project an image with extremely fine features onto a photoresist as a first step in photolithography, a process used to make computer chips. The hyperlens also has applications for DVD technology.
In 2010, a spherical hyperlens for two-dimensional imaging at visible frequencies was demonstrated experimentally. The spherical hyperlens, based on alternating silver and titanium oxide layers, has strong anisotropic hyperbolic dispersion allowing super-resolution in the visible spectrum. The resolution was 160 nm in the visible spectrum. It should enable biological imaging, such as of cells and DNA, with the strong benefit of magnifying sub-diffraction resolution into the far field.
Super-imaging in the visible frequency range
Continual improvements in optical microscopy are needed to keep up with the progress in nanotechnology and microbiology. Advancement in spatial resolution is key. Conventional optical microscopy is limited by a diffraction limit on the order of 200 nanometers (about half the wavelength), so viruses, proteins, DNA molecules and many other samples are hard to observe with a regular (optical) microscope. The lens previously demonstrated with negative-refractive-index material, a thin planar superlens, does not provide magnification beyond the diffraction limit of conventional microscopes. Therefore, images smaller than the conventional diffraction limit would still be unavailable.
Another approach to achieving super-resolution at visible wavelengths is the recently developed spherical hyperlens, based on alternating silver and titanium oxide layers, described above. It has strong anisotropic hyperbolic dispersion allowing super-resolution by converting evanescent waves into propagating waves. This method is non-fluorescence-based super-resolution imaging, which results in real-time imaging without any reconstruction of images and information.
Super resolution far-field microscopy techniques
By 2008 the diffraction limit had been surpassed, and lateral imaging resolutions of 20 to 50 nm had been achieved by several "super-resolution" far-field microscopy techniques, including stimulated emission depletion (STED) and its related RESOLFT (reversible saturable optically linear fluorescent transitions) microscopy; saturated structured illumination microscopy (SSIM); stochastic optical reconstruction microscopy (STORM); photoactivated localization microscopy (PALM); and other methods using similar principles.
Cylindrical superlens via coordinate transformation
This began with a proposal by Pendry in 2003. Magnifying the image required a new design concept in which the surface of the negatively refracting lens is curved. One cylinder touches another cylinder, resulting in a curved cylindrical lens which reproduces the contents of the smaller cylinder in magnified but undistorted form outside the larger cylinder. Coordinate transformations are required to curve the original perfect lens into the cylindrical lens structure.
In 2007, a superlens utilizing coordinate transformation was again the subject. In addition to image transfer, other useful operations were discussed: translation, rotation, mirroring and inversion, as well as the superlens effect. Furthermore, magnifying elements are described that are free from geometric aberrations on both the input and output sides, while utilizing free-space sourcing (rather than a waveguide). These magnifying elements also operate in the near and far field, transferring the image from near field to far field.
Nano-optics with metamaterials
Nanohole array as a lens
Work in 2007 demonstrated that a quasi-periodic array of nanoholes in a metal screen was able to focus the optical energy of a plane wave to form subwavelength spots (hot spots). The distance to the spots was a few tens of wavelengths on the other side of the array, or, in other words, on the side opposite the incident plane wave. The quasi-periodic array of nanoholes functioned as a light concentrator.
In June 2008, this was followed by the demonstrated capability of an array of quasi-crystal nanoholes in a metal screen. More than concentrating hot spots, an image of the point source is displayed a few tens of wavelengths from the array, on the other side (the image plane). This type of array also exhibited a 1-to-1 linear displacement from the location of the point source to its respective, parallel, location on the image plane; in other words, from x to x + δx. For example, other point sources were similarly displaced from x′ to x′ + δx′, from x″ to x″ + δx″, and so on. Instead of functioning as a light concentrator, this performs the function of conventional lens imaging with a 1-to-1 correspondence, albeit with point sources.
However, resolution of more complicated structures can be achieved as constructions of multiple point sources. The fine details, and brighter image, that are normally associated with the high numerical apertures of conventional lenses can be reliably produced. Notable applications for this technology arise when conventional optics is not suitable for the task at hand. For example, this technology is better suited for X-ray imaging, or nano-optical circuits, and so forth.
The metamaterial nanolens was constructed of millions of nanowires, each 20 nanometers in diameter. These were precisely aligned, and a packaged configuration was applied. The lens is able to depict a clear, high-resolution image of nano-sized objects because it uses both normal propagating EM radiation and evanescent waves to construct the image. Super-resolution imaging was demonstrated over a distance of 6 times the wavelength (λ), in the far field, with a resolution of at least λ/4. This is a significant improvement over previous research and demonstrations of other near-field and far-field imaging, including the nanohole arrays discussed above.
Light transmission properties of holey metal films
In 2009–12, the light transmission properties of holey metal films in the metamaterial limit, where the unit length of the periodic structures is much smaller than the operating wavelength, were analyzed theoretically.
Transporting an Image through a subwavelength hole
Theoretically, it appears possible to transport a complex electromagnetic image through a tiny subwavelength hole whose diameter is considerably smaller than the diameter of the image, without losing the subwavelength details.
Nanoparticle imaging – quantum dots
When observing the complex processes in a living cell, significant processes (changes) or details are easy to overlook. This can more easily occur when watching changes that take a long time to unfold and require high-spatial-resolution imaging. However, recent research offers a solution to scrutinize activities that occur over hours or even days inside cells, potentially solving many of the mysteries associated with molecular-scale events occurring in these tiny organisms.
A joint research team, working at the National Institute of Standards and Technology (NIST) and the National Institute of Allergy and Infectious Diseases (NIAID), has discovered a method of using nanoparticles to illuminate the cellular interior to reveal these slow processes. Nanoparticles, thousands of times smaller than a cell, have a variety of applications. One type of nanoparticle called a quantum dot glows when exposed to light. These semiconductor particles can be coated with organic materials, which are tailored to be attracted to specific proteins within the part of a cell a scientist wishes to examine.
Notably, quantum dots last longer than many organic dyes and fluorescent proteins previously used to illuminate the interiors of cells. They also have the advantage of monitoring changes in cellular processes, while most high-resolution techniques, such as electron microscopy, only provide images of cellular processes frozen at one moment. Using quantum dots, cellular processes involving the dynamic motions of proteins can be observed (elucidated).
The research focused primarily on characterizing quantum dot properties, contrasting them with other imaging techniques. In one example, quantum dots were designed to target a specific type of human red blood cell protein that forms part of a network structure in the cell's inner membrane. When these proteins cluster together in a healthy cell, the network provides mechanical flexibility to the cell so it can squeeze through narrow capillaries and other tight spaces. But when the cell gets infected with the malaria parasite, the structure of the network protein changes.
Because the clustering mechanism is not well understood, it was decided to examine it with the quantum dots. If a technique could be developed to visualize the clustering, then the progress of a malaria infection could be understood, which has several distinct developmental stages.
Research efforts revealed that as the membrane proteins bunch up, the quantum dots attached to them are induced to cluster themselves and glow more brightly, permitting real time observation as the clustering of proteins progresses. More broadly, the research discovered that when quantum dots attach themselves to other nanomaterials, the dots' optical properties change in unique ways in each case. Furthermore, evidence was discovered that quantum dot optical properties are altered as the nanoscale environment changes, offering greater possibility of using quantum dots to sense the local biochemical environment inside cells.
Some concerns remain over toxicity and other properties. However, the overall findings indicate that quantum dots could be a valuable tool to investigate dynamic cellular processes.
The abstract from the related published research paper states (in part): Results are presented regarding the dynamic fluorescence properties of bioconjugated nanocrystals or quantum dots (QDs) in different chemical and physical environments. A variety of QD samples was prepared and compared: isolated individual QDs, QD aggregates, and QDs conjugated to other nanoscale materials...
- Pendry, J. B. (2000). "Negative Refraction Makes a Perfect Lens" (PDF). Physical Review Letters. 85 (18): 3966–9. Bibcode:2000PhRvL..85.3966P. doi:10.1103/PhysRevLett.85.3966. PMID 11041972.
- Zhang, Xiang; Liu, Zhaowei (2008). "Superlenses to overcome the diffraction limit" (PDF). Nature Materials. 7 (6): 435–441. Bibcode:2008NatMa...7..435Z. doi:10.1038/nmat2141. PMID 18497850. Retrieved 2013-06-03.
- Aguirre, Edwin L. (2012-09-18). "Creating a 'Perfect' Lens for Super-Resolution Imaging". U-Mass Lowell News. doi:10.1117/1.3484153. Retrieved 2013-06-02.
- Kawata, S.; Inouye, Y.; Verma, P. (2009). "Plasmonics for near-field nano-imaging and superlensing". Nature Photonics. 3 (7): 388–394. Bibcode:2009NaPho...3..388K. doi:10.1038/nphoton.2009.111.
- Vinson, V; Chin, G. (2007). "Introduction to special issue – Lights, Camera, Action". Science. 316 (5828): 1143. doi:10.1126/science.316.5828.1143.
- Pendry, John (September 2004). "Manipulating the Near Field" (PDF). Optics & Photonics News.
- Anantha, S. Ramakrishna; J.B. Pendry; M.C.K. Wiltshire; W.J. Stewart (2003). "Imaging the Near Field" (PDF). Journal of Modern Optics. Taylor & Francis. 50 (9): 1419–1430. doi:10.1080/0950034021000020824.
- GB 541753, Dennis Gabor, "Improvements in or relating to optical systems composed of lenticules", published 1941
- Lauterbur, P. (1973). "Image Formation by Induced Local Interactions: Examples Employing Nuclear Magnetic Resonance". Nature. 242 (5394): 190–191. Bibcode:1973Natur.242..190L. doi:10.1038/242190a0.
- "Prof. Sir John Pendry, Imperial College, London". Colloquia Series. Research Laboratory of Electronics. 13 March 2007. Retrieved 2010-04-07.
- Yeager, A. (28 March 2009). "Cornering The Terahertz Gap". Science News. Retrieved 2010-03-02.
- Savo, S.; Andreone, A.; Di Gennaro, E. (2009). "Superlensing properties of one-dimensional dielectric photonic crystals". Optics Express. 17 (22): 19848–56. Bibcode:2009OExpr..1719848S. doi:10.1364/OE.17.019848. PMID 19997206.
- Parimi, P.; et al. (2003). "Imaging by Flat Lens using Negative Refraction". Nature. 426 (6965): 404. Bibcode:2003Natur.426..404P. doi:10.1038/426404a. PMID 14647372.
- Bullis, Kevin (2007-03-27). "Superlenses and Smaller Computer Chips". Technology Review magazine of Massachusetts Institute of Technology. pp. 2 pages. Retrieved 2010-01-13.
- Novotny, Lukas (November 2007). "Adapted from "The History of Near-field Optics"" (PDF). In Wolf, Emil. Progress in Optics. Progress In Optics series. 50. Amsterdam: Elsevier. pp. 142–150. ISBN 978-0-444-53023-3.
- E.H. Synge (1928). "A suggested method for extending the microscopic resolution into the ultramicroscopic region". Philosophical Magazine and Journal of Science: Series 7. 6: 356–362. doi:10.1080/14786440808564615.
- E.H. Synge (1932). "An application of piezoelectricity to microscopy". Phil. Mag. 13: 297.
- Smith, H.I. (1974). "Fabrication techniques for surface-acoustic-wave and thin-film optical devices". Proceedings of the IEEE. 62 (10): 1361–1387. doi:10.1109/PROC.1974.9627.
- Srituravanich, W.; et al. (2004). "Plasmonic Nanolithography" (PDF). Nano Letters. 4 (6): 1085–1088. Bibcode:2004NanoL...4.1085S. doi:10.1021/nl049573q. Archived from the original (PDF) on April 15, 2010.
- Fischer, U. Ch.; Zingsheim, H. P. (1981). "Submicroscopic pattern replication with visible light". Journal of Vacuum Science and Technology. 19 (4): 881. Bibcode:1981JVST...19..881F. doi:10.1116/1.571227.
- Schmid, H.; et al. (1998). "Light-coupling masks for lensless, sub-wavelength optical lithography". Applied Physics Letters. 72 (19): 2379. Bibcode:1998ApPhL..72.2379S. doi:10.1063/1.121362.
- Fang, N.; et al. (2005). "Sub–Diffraction-Limited Optical Imaging with a Silver Superlens". Science. 308 (5721): 534–7. Bibcode:2005Sci...308..534F. doi:10.1126/science.1108759. PMID 15845849.
- Garcia, N.; Nieto-Vesperinas, M. (2002). "Left-Handed Materials Do Not Make a Perfect Lens". Physical Review Letters. 88 (20): 207403. Bibcode:2002PhRvL..88t7403G. doi:10.1103/PhysRevLett.88.207403. PMID 12005605.
- "David R Smith (May 10, 2004). "Breaking the diffraction limit". Institute of Physics. Retrieved May 31, 2009.
- Pendry, J. B. (2000). "Negative refraction makes a perfect lens". Phys. Rev. Lett. 85 (18): 3966–9. Bibcode:2000PhRvL..85.3966P. doi:10.1103/PhysRevLett.85.3966. PMID 11041972.
- Podolskiy, V.A.; Narimanov, EE (2005). "Near-sighted superlens". Opt. Lett. 30 (1): 75–7. Bibcode:2005OptL...30...75P. doi:10.1364/OL.30.000075. PMID 15648643.
- Tassin, P.; Veretennicoff, I; Vandersande, G (2006). "Veselago's lens consisting of left-handed materials with arbitrary index of refraction". Opt. Commun. 264 (1): 130–134. Bibcode:2006OptCo.264..130T. doi:10.1016/j.optcom.2006.02.013.
- Brumfiel, G (2009). "Metamaterials: Ideal focus" (online web page). Nature News. 459 (7246): 504–5. doi:10.1038/459504a. PMID 19478762.
- "Imaging by Flat Lens using Negative Refraction", P. V. Parimi, W. T. Lu, P. Vodo, and S. Sridhar, Nature, 426, 404 (2003).
- Melville, DOS; Blaikie, R (2005). "Super-resolution imaging through a planar silver layer". Optics Express. 13 (6): 2127–34. Bibcode:2005OExpr..13.2127M. doi:10.1364/OPEX.13.002127. PMID 19495100.
- Fang, Nicholas; Lee, H; Sun, C; Zhang, X (2005). "Sub–Diffraction-Limited Optical Imaging with a Silver Superlens". Science. 308 (5721): 534–7. Bibcode:2005Sci...308..534F. doi:10.1126/science.1108759. PMID 15845849.
- Zhang, Yong; Fluegel, B.; Mascarenhas, A. (2003). "Total Negative Refraction in Real Crystals for Ballistic Electrons and Light". Physical Review Letters. 91 (15): 157404. Bibcode:2003PhRvL..91o7404Z. doi:10.1103/PhysRevLett.91.157404. PMID 14611495.
- Belov, Pavel; Simovski, Constantin (2005). "Canalization of subwavelength images by electromagnetic crystals". Physical Review B. 71 (19): 193105. Bibcode:2005PhRvB..71s3105B. doi:10.1103/PhysRevB.71.193105.
- Grbic, A.; Eleftheriades, G. V. (2004). "Overcoming the Diffraction Limit with a Planar Left-handed Transmission-line Lens". Physical Review Letters. 92 (11): 117403. Bibcode:2004PhRvL..92k7403G. doi:10.1103/PhysRevLett.92.117403. PMID 15089166.
- Nielsen, R. B.; Thoreson, M. D.; Chen, W.; Kristensen, A.; Hvam, J. M.; Shalaev, V. M.; Boltasseva, A. (2010). "Toward superlensing with metal–dielectric composites and multilayers" (PDF). Applied Physics B. 100 (1): 93–100. Bibcode:2010ApPhB.100...93N. doi:10.1007/s00340-010-4065-z. Archived from the original (PDF) on September 8, 2014.
- D.O.S. Melville, R.J. Blaikie, Optics Express 13, 2127 (2005)
- C. Jeppesen, R.B. Nielsen, A. Boltasseva, S. Xiao, N.A. Mortensen, A. Kristensen, Optics Express 17, 22543 (2009)
- Valentine, J.; et al. (2008). "Three-dimensional optical metamaterial with a negative refractive index". Nature. 455 (7211): 376–9. Bibcode:2008Natur.455..376V. doi:10.1038/nature07247. PMID 18690249.
- Yao, J.; et al. (2008). "Optical Negative Refraction in Bulk Metamaterials of Nanowires". Science. 321 (5891): 930. Bibcode:2008Sci...321..930Y. doi:10.1126/science.1157566. PMID 18703734.
- W. Cai, D.A. Genov, V.M. Shalaev, Phys. Rev. B 72, 193101 (2005)
- A.V. Kildishev,W. Cai, U.K. Chettiar, H.-K. Yuan, A.K. Sarychev, V.P. Drachev, V.M. Shalaev, J. Opt. Soc. Am. B 23, 423 (2006)
- L. Shi, L. Gao, S. He, B. Li, Phys. Rev. B 76, 045116 (2007)
- L. Shi, L. Gao, S. He, Proc. Int. Symp. Biophot. Nanophot. Metamat. (2006), pp. 463–466
- Z. Jacob, L.V. Alekseyev, E. Narimanov, Opt. Express 14, 8247 (2006)
- P.A. Belov, Y. Hao, Phys. Rev. B 73, 113110 (2006)
- P. Chaturvedi, N.X. Fang, Mater. Res. Soc. Symp. Proc. 919, 0919-J04-07 (2006)
- B. Wood, J.B. Pendry, D.P. Tsai, Phys. Rev. B 74, 115116 (2006)
- E. Shamonina, V.A. Kalinin, K.H. Ringhofer, L. Solymar, Electron. Lett. 37, 1243 (2001)
- Ziolkowski, R. W.; Heyman, E. (2001). "Wave propagation in media having negative permittivity and permeability" (PDF). Physical Review E. 64 (5): 056625. Bibcode:2001PhRvE..64e6625Z. doi:10.1103/PhysRevE.64.056625. Archived from the original (PDF) on July 17, 2010.
- Smolyaninov, Igor I.; Hung, YJ; Davis, CC (2007-03-27). "Magnifying Superlens in the Visible Frequency Range". Science. 315 (5819): 1699–1701. Bibcode:2007Sci...315.1699S. doi:10.1126/science.1138746. PMID 17379804.
- Dumé, B. (21 April 2005). "Superlens breakthrough". Physics World.
- Pendry, J. B. (18 February 2005). "Collection of photonics references".
- Smith, D.R.; et al. (2003). "Limitations on subdiffraction imaging with a negative refractive index slab" (PDF). Applied Physics Letters. 82 (10): 1506. arXiv:. Bibcode:2003ApPhL..82.1506S. doi:10.1063/1.1554779.
- Shelby, R. A.; Smith, D. R.; Schultz, S. (2001). "Experimental Verification of a Negative Index of Refraction". Science. 292 (5514): 77–9. Bibcode:2001Sci...292...77S. doi:10.1126/science.1058847. PMID 11292865.
- Wiltshire, M. c. k.; et al. (2003). "Metamaterial endoscope for magnetic field transfer: near field imaging with magnetic wires" (PDF). Optics Express. 11 (7): 709–15. Bibcode:2003OExpr..11..709W. doi:10.1364/OE.11.000709. PMID 19461782.
- Dumé, B. (4 April 2005). "Superlens breakthrough". Physics World. Retrieved 2009-11-10.
- Liu, Z.; et al. (2003). "Rapid growth of evanescent wave by a silver superlens" (PDF). Applied Physics Letters. 83 (25): 5184. Bibcode:2003ApPhL..83.5184L. doi:10.1063/1.1636250. Archived from the original (PDF) on June 24, 2010.
- Lagarkov, A. N.; V. N. Kissel (2004-02-18). "Near-Perfect Imaging in a Focusing System Based on a Left-Handed-Material Plate". Phys. Rev. Lett. 92 (7): 077401 (2004) [4 pages]. Bibcode:2004PhRvL..92g7401L. doi:10.1103/PhysRevLett.92.077401.
- Melville, David; Blaikie, Richard (2005-03-21). "Super-resolution imaging through a planar silver layer" (PDF). Optics Express. 13 (6): 2127–2134. Bibcode:2005OExpr..13.2127M. doi:10.1364/OPEX.13.002127. PMID 19495100. Retrieved 2009-10-23.
- Blaikie, Richard J; Melville, David O. S. (2005-01-20). "Imaging through planar silver lenses in the optical near field". J. Opt. Soc. Am. A. 7 (2): S176–S183. Bibcode:2005JOptA...7S.176B. doi:10.1088/1464-4258/7/2/023.
- Greegor RB, et al. (2005-08-25). "Simulation and testing of a graded negative index of refraction lens" (PDF). Applied Physics Letters. 87 (9): 091114. Bibcode:2005ApPhL..87i1114G. doi:10.1063/1.2037202. Archived from the original (PDF) on June 18, 2010. Retrieved 2009-11-01.
- Durant, Stéphane; et al. (2005-12-02). "Theory of the transmission properties of an optical far-field superlens for imaging beyond the diffraction limit" (PDF). J. Opt. Soc. Am. B. 23 (11): 2383–2392. Bibcode:2006JOSAB..23.2383D. doi:10.1364/JOSAB.23.002383. Retrieved 2009-10-26.
- Liu, Zhaowei; et al. (2007-05-22). "Experimental studies of far-field superlens for sub-diffractional optical imaging" (PDF). Optics Express. 15 (11): 6947–6954. Bibcode:2007OExpr..15.6947L. doi:10.1364/OE.15.006947. PMID 19547010. Archived from the original (PDF) on June 24, 2010. Retrieved 2009-10-26.
- Geoffroy, Lerosey; et al. (2007-02-27). "Focusing Beyond the Diffraction Limit with Far-Field Time Reversal". Science. 315 (5815): 1120–1122. Bibcode:2007Sci...315.1120L. doi:10.1126/science.1134824. PMID 17322059.
- Jacob, Z.; Alekseyev, L.; Narimanov, E. (2005). "Optical Hyperlens: Far-field imaging beyond the diffraction limit". Optics Express. 14 (18): 8247–56. arXiv:. Bibcode:2006OExpr..14.8247J. doi:10.1364/OE.14.008247. PMID 19529199.
- Salandrino, Alessandro; Nader Engheta (2006-08-16). "Far-field subdiffraction optical microscopy using metamaterial crystals: Theory and simulations". Phys. Rev. B. 74 (7): 075103. Bibcode:2006PhRvB..74g5103S. doi:10.1103/PhysRevB.74.075103.
- Wang, Junxia; Yang Xu Hongsheng Chen; Zhang, Baile (2012). "Ultraviolet dielectric hyperlens with layered graphene and boron nitride". arXiv: [physics.chem-ph].
- Liu, Z; et al. (2007-03-27). "Far-Field Optical Hyperlens Magnifying Sub-Diffraction-Limited Objects" (PDF). Science. 315 (5819): 1686. Bibcode:2007Sci...315.1686L. doi:10.1126/science.1137368. PMID 17379801. Archived from the original (PDF) on September 20, 2009.
- Rho, Junsuk; Ye, Ziliang; Xiong, Yi; Yin, Xiaobo; Liu, Zhaowei; Choi, Hyeunseok; Bartal, Guy; Zhang, Xiang (1 December 2010). "Spherical hyperlens for two-dimensional sub-diffractional imaging at visible frequencies" (PDF). Nature Communications. 1 (9): 143. Bibcode:2010NatCo...1E.143R. doi:10.1038/ncomms1148. Archived from the original (PDF) on August 31, 2012.
- Huang, Bo; Wang, W.; Bates, M.; Zhuang, X. (2008-02-08). "Three-Dimensional Super-Resolution Imaging by Stochastic Optical Reconstruction Microscopy". Science. 319 (5864): 810–813. Bibcode:2008Sci...319..810H. doi:10.1126/science.1153529. PMC . PMID 18174397.
- Pendry, John (2003-04-07). "Perfect cylindrical lenses" (PDF). Optics Express. 11 (7): 755. Bibcode:2003OExpr..11..755P. doi:10.1364/OE.11.000755. Retrieved 2009-11-04.
- Milton, Graeme W.; Nicorovici, Nicolae-Alexandru P.; McPhedran, Ross C.; Podolskiy, Viktor A. (2005-12-08). "A proof of superlensing in the quasistatic regime, and limitations of superlenses in this regime due to anomalous localized resonance". Proceedings of the Royal Society A. 461 (2064): 3999 (36 pages). Bibcode:2005RSPSA.461.3999M. doi:10.1098/rspa.2005.1570.
- Schurig, D.; J. B. Pendry; D. R. Smith (2007-10-24). "Transformation-designed optical elements". Optics Express. 15 (22): 14772 (10 pages). Bibcode:2007OExpr..1514772S. doi:10.1364/OE.15.014772.
- Tsang, Mankei; Psaltis, Demetri (2008). "Magnifying perfect lens and superlens design by coordinate transformation". Physical Review B. 77 (3): 035122. arXiv:. Bibcode:2008PhRvB..77c5122T. doi:10.1103/PhysRevB.77.035122.
- Huang FM, et al. (2008-06-24). "Nanohole Array as a Lens" (PDF). Nano Lett. American Chemical Society. 8 (8): 2469–2472. Bibcode:2008NanoL...8.2469H. doi:10.1021/nl801476v. PMID 18572971. Retrieved 2009-12-21.
- "Northeastern physicists develop 3D metamaterial nanolens that achieves super-resolution imaging". prototype super-resolution metamaterial nanonlens. Nanotechwire.com. 2010-01-18. Retrieved 2010-01-20.
- Casse, B. D. F.; Lu, W. T.; Huang, Y. J.; Gultepe, E.; Menon, L.; Sridhar, S. (2010). "Super-resolution imaging using a three-dimensional metamaterials nanolens". Applied Physics Letters. 96 (2): 023114. Bibcode:2010ApPhL..96b3114C. doi:10.1063/1.3291677.
- Jung, J. and; L. Martín-Moreno; F J García-Vidal (2009-12-09). "Light transmission properties of holey metal films in the metamaterial limit: effective medium theory and subwavelength imaging". New Journal of Physics. 11 (12): 123013. Bibcode:2009NJPh...11l3013J. doi:10.1088/1367-2630/11/12/123013.
- Silveirinha, Mário G.; Engheta, Nader (2009-03-13). "Transporting an Image through a Subwavelength Hole". Physical Review Letters. 102 (10): 103902. Bibcode:2009PhRvL.102j3902S. doi:10.1103/PhysRevLett.102.103902. PMID 19392114.
- Kang, Hyeong-Gon; Tokumasu, Fuyuki; Clarke, Matthew; Zhou, Zhenping; Tang, Jianyong; Nguyen, Tinh; Hwang, Jeeseong (2010). "Probing dynamic fluorescence properties of single and clustered quantum dots toward quantitative biomedical imaging of cells". Wiley Interdisciplinary Reviews: Nanomedicine and Nanobiotechnology. 2 (1): 48–58. doi:10.1002/wnan.62.
- The Quest for the Superlens By John B. Pendry and David R. Smith. Scientific American. July 2006. Free PDF download from Imperial College.
- Subwavelength imaging
- Professor Sir John Pendry at MIT – "The Perfect Lens: Resolution Beyond the Limits of Wavelength"
- Surface plasmon subwavelength optics 2009-12-05
- Superlenses to overcome the diffraction limit
- Breaking the diffracion limit Overview of superlens theory
- Flat Superlens Simulation EM Talk
- Superlens microscope gets up close
- Superlens breakthrough
- Superlens breaks optical barrier
- Materials with negative index of refraction by V.A. Podolskiy
- Optimizing the superlens: Manipulating geometry to enhance the resolution by V.A. Podolskiy and Nicholas A. Kuhta
- Now you see it, now you don't: cloaking device is not just sci-fi
- Initial page describes first demonstration of negative refraction in a natural material
- Negative-index materials made easy
- Simple 'superlens' sharpens focusing power – A lens able to focus 10 times more intensely than any conventional design could significantly enhance wireless power transmission and photolithography (New Scientist, 24 April 2008)
- Far-Field Optical Nanoscopy by Stefan W.Hell. VOL 316. SCIENCE. 25 MAY 2007
- Ultraviolet dielectric hyperlens with layered graphene and boron nitride, 22 May 2012 |
Amplitude, frequency, wavenumber, and phase shift are properties of waves that govern their physical behavior.
Each describes a separate parameter in the most general solution of the wave equation. Together, these properties account for a wide range of phenomena such as loudness, color, pitch, diffraction, and interference.
Waves propagating in some physical quantity $u(x,t)$ obey the wave equation:

$$\frac{\partial^2 u}{\partial t^2} = v^2 \frac{\partial^2 u}{\partial x^2},$$

where $v$ is the velocity of the wave. Solutions to this equation are written as a linear superposition of right-traveling and left-traveling waves. These can be any arbitrary functions of the form $f(x - vt)$ and $g(x + vt)$.
One set of simple examples are the so-called harmonic waves, which are sinusoidal:

$$u(x,t) = a\sin\!\big(k(x - vt)\big) + b\cos\!\big(k(x - vt)\big),$$

where $a$ and $b$ together set the amplitude of the wave, and $a$, $b$, and $k$ are some constants.
Using the sum rules for trigonometric functions, this solution can also be written in the form:

$$u(x,t) = A\sin\!\big(k(x - vt) + \phi\big),$$

for some new constant $A$ called the amplitude, the constant $k$ called the wavenumber, and a constant $\phi$ called the phase shift. From the equation above, $A$ gives the maximum height of the wave, $k$ describes how tightly spaced the oscillations of the wave are, and $\phi$ describes how the sine function has been shifted to the left or right at time $t = 0$. The wavenumber is related to the wavelength $\lambda$ describing the distance between adjacent peaks of a wave by:

$$k = \frac{2\pi}{\lambda}.$$

To see that this is true, note that when the argument of the sine function changes by $2\pi$, one full oscillation has been completed. Therefore, comparing $x$ to $x + \lambda$ should increase the argument by $2\pi$, i.e. $k\lambda = 2\pi$, which corresponds to the above definition of the wavenumber.
Note that some sources (particularly in chemistry) define the wavenumber differently and use a different symbol, as in $\tilde{\nu} = 1/\lambda$. In terms of physical significance, both definitions are essentially equivalent.
For some types of waves, such as light, the (angular) frequency $\omega$ of the wave, which describes how rapidly the wave oscillates in time, satisfies the equation:

$$\omega = vk.$$

The angular frequency is related to a quantity often labeled $f$ and also called the frequency by $\omega = 2\pi f$. With these new definitions, solutions to the wave equation can be written in a number of different forms, for example:

$$u(x,t) = A\sin(kx - \omega t + \phi) = A\sin\!\big(k(x - vt) + \phi\big) = A\sin\!\left(\frac{2\pi}{\lambda}(x - vt) + \phi\right).$$
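As a quick numerical sanity check of the sum-rule rewriting above (a minimal sketch in Python; the constants $a$, $b$, $k$, and $v$ are arbitrary illustrative choices), one can verify that the sine-plus-cosine form and the amplitude-phase form describe the same wave:

```python
import numpy as np

# Arbitrary illustrative constants.
a, b = 1.5, -0.8          # coefficients of sin and cos
k, v = 2.0, 3.0           # wavenumber and wave speed
omega = k * v             # angular frequency

A = np.hypot(a, b)        # amplitude A = sqrt(a^2 + b^2)
phi = np.arctan2(b, a)    # phase shift

x = np.linspace(0, 5, 7)
t = 0.4
u1 = a * np.sin(k*(x - v*t)) + b * np.cos(k*(x - v*t))
u2 = A * np.sin(k*x - omega*t + phi)
print(np.allclose(u1, u2))  # True: the two forms are the same wave
```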
The amplitude is related to the energy per unit time per unit area (the intensity or power flux) carried by that wave. Specifically, the intensity is proportional to the square of the amplitude. This fact is usually proved on a case-by-case basis for different physical scenarios, rather than for general solutions of the wave equation.
Show that the power flux carried by a classical electromagnetic wave goes as the square of the electric or magnetic field amplitude.
The energy flux of an electromagnetic wave is given by the Poynting vector:

$$\vec{S} = \frac{1}{\mu_0}\,\vec{E}\times\vec{B},$$

where $\mu_0$ is the permeability of free space and $\vec{E}$ and $\vec{B}$ are the electric and magnetic fields respectively. For light in vacuum, the amplitude of the magnetic field goes as $B = E/c$, where $c$ is the speed of light, and the magnitude of the Poynting vector instantaneously goes as:

$$S = \frac{EB}{\mu_0} = \frac{E^2}{\mu_0 c},$$

which is proportional to the square of the electric field (or magnetic field) amplitude as claimed.
Show that the average power carried by a wave on a string goes as the square of the amplitude of the wave.
Recall the formula for the power exerted by a force $F$ on an object traveling at velocity $v$:

$$P = Fv.$$

For small-amplitude waves on a string with tension $T$, only the forces and displacements in the vertical direction matter. If the string is at angle $\theta$ with respect to the horizontal, Newton's second law gives the vertical force on an element of string as:

$$F_y = -T\sin\theta \approx -T\,\frac{\partial y}{\partial x}.$$

Substituting in for the derivatives from the solution to the wave equation $y = A\sin(kx - \omega t)$, one finds the power:

$$P = F_y\,\frac{\partial y}{\partial t} = Tk\omega A^2 \cos^2(kx - \omega t),$$

which indeed goes as the square of the amplitude $A$.
A third example is from electric current: it is well known from Ohm's law that the power dissipated in a resistor goes as the square of the amplitude of the current through the resistor:

$$P = I^2 R.$$
In the above examples, the power instantaneously went as the square of the peak amplitude. However, the peak amplitude rarely has physical meaning by itself. Typically, the power supplied by a wave is averaged over many periods; since the wave is not at peak amplitude for most of its oscillation, it does not make sense for the peak amplitude to be the only important factor. Furthermore, for waves that are not harmonic (not sinusoidal), there may not be a single well-defined peak amplitude.

In these cases, it is more correct to use the root-mean-square amplitude, derived by taking the square root of the average of $u^2$ over a period. When the waves are harmonic, averaging the square of the sine or cosine function over a period contributes a factor of $\frac{1}{2}$, so that $A_{\text{rms}} = A/\sqrt{2}$.
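A quick numerical check of that factor of $\frac{1}{2}$ (a sketch in Python; the peak amplitude is an arbitrary choice):

```python
import numpy as np

A = 3.0                                 # arbitrary peak amplitude
t = np.linspace(0, 2*np.pi, 100001)     # one full period
u = A * np.sin(t)

mean_square = np.mean(u**2)
print(mean_square)            # ~ A**2 / 2: the factor of 1/2
print(np.sqrt(mean_square))   # RMS amplitude ~ A / sqrt(2)
```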
If the solution to the wave equation describes sound waves, the intensity directly corresponds to the loudness of the wave, as typically measured in decibels. Since the intensity $I$ goes as the square of the amplitude $A$, the loudness can be defined:

$$L = 10\log_{10}\frac{I}{I_0} = 20\log_{10}\frac{A}{A_0},$$

where $A_0$ is the amplitude of a sound wave at the threshold of human hearing, corresponding to a power of $I_0 = 10^{-12}\ \text{W/m}^2$. The decibel scale therefore measures the loudness of sounds only as relative to the threshold of human hearing.
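For example (a sketch; the sample intensity is an arbitrary value chosen for illustration), a sound of intensity $10^{-5}\ \text{W/m}^2$ measures:

```python
import math

I0 = 1e-12          # threshold-of-hearing intensity, W/m^2
I = 1e-5            # arbitrary example intensity, W/m^2

L = 10 * math.log10(I / I0)
print(L, "dB")      # 70.0 dB relative to the threshold of hearing
```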
The frequency of a wave describes how rapidly the wave oscillates in time. As an example, for light waves in vacuum, the frequency and wavelength are interchangeable definitions. This is because light in vacuum obeys the relation:

$$c = f\lambda,$$

and the speed of light $c$ is a constant. Therefore, the frequency and wavelength of light are equivalent definitions, which are used to define the spectrum of types of electromagnetic radiation:
Light also obeys the relation from quantum mechanics:

$$E = hf.$$

That is, quantum mechanics predicts that the energy of a photon is proportional to the frequency of light that the photon represents, with the proportionality constant $h$ called Planck's constant. Therefore, the wavelength, frequency, and energy of light described by a small number of photons (quantum mechanically, not classically) are all interchangeable definitions.
Human perception of color is directly related to the quantum-mechanical behavior of electronic transitions in atomic elements in the inner structure of the eye. Essentially, color corresponds to the energy of the incoming photon and therefore to its frequency. The frequency of light waves thus essentially represents color, which is consistent with the usual drawing of the spectrum of electromagnetic radiation.
Because of the relation $c = f\lambda$, colors of light are often labeled by their wavelength: about $400\ \text{nm}$ for purple light and $700\ \text{nm}$ for red light. However, it is more precise to label colors by their frequency. This is an important clarification because the effective speed of light is not always $c$. In materials with an index of refraction greater than one (anything other than vacuum), the constant absorption, emission, and scattering of light from atoms in the material causes the effective speed of light to appear lower than $c$ (in reality, the speed of light is always $c$, but heuristically one can think of extra propagation time being added by the delay between absorption and reemission by an atom). Specifically, the speed of light is scaled down by the index of refraction $n$: $v = c/n$. When the speed of light is lower than $c$, the wavelength of the light changes in the material, but the frequency does not. Since the color does not change when entering the material (consider viewing objects underwater: the color is not altered), the frequency more fundamentally describes the color.
In addition to representing colors, frequencies of sound/pressure waves are also well-known to correspond directly to pitch. Again, this is a consequence of the physics of the human body: the human ear is configured in such a way so that high frequencies are received in a different part of the ear than low frequencies, and the location of reception corresponds to pitch. High frequencies correspond to higher pitches and vice versa:
As described above, when waves enter media, their effective velocity can change and so the effective wavelength changes as well. Since $v = \omega/k$, these effects are equivalently captured by the change in wavenumber.
The equation $\omega = \omega(k)$ that relates the wavenumber to the angular frequency is called a dispersion relation, and it describes how the speed of waves varies at different wavelengths. The dispersion relation of waves in a particular medium is of paramount importance for describing both a) how information is transferred in waves through that medium and b) the allowed energies for waves traveling through that medium (e.g. if the waves are the matter waves of electrons in a periodic crystal where the allowed values of momentum are restricted).
When the dispersion relation of waves in a medium is nontrivial, the velocity of the wave is no longer uniquely defined and so it is nontrivial to describe how information is transferred by the wave. This is especially important in ultrarelativistic physics where, depending on how the velocity is described, some ways of describing the wave velocity may exceed the speed of light without violating causality. The two types of velocity are defined as follows:

$$v_p = \frac{\omega}{k}, \qquad v_g = \frac{d\omega}{dk}.$$

Roughly, the phase velocity $v_p$ describes the speed of an individual "piece" of the wave, while the group velocity $v_g$ describes the speed at which the overall shape of the wave propagates. Depending on context, either may correspond to the signal velocity, which determines how the wave transmits information.
Find the dispersion relation for light waves and compute the phase and group velocities.
Light obeys the relation between frequency and wavelength:

$$c = f\lambda.$$

Substituting in the definitions of angular frequency and wavenumber ($\omega = 2\pi f$, $k = 2\pi/\lambda$) and rearranging, one arrives at the dispersion relation:

$$\omega = ck.$$

Computing the group and phase velocities, one finds:

$$v_p = \frac{\omega}{k} = c, \qquad v_g = \frac{d\omega}{dk} = c.$$
Since the group and phase velocities are the same, light waves are non-dispersive. An animation visualizing what it means for a wave to be non-dispersive is displayed below:
Given the dispersion relation for deep water waves:

$$\omega = \sqrt{gk},$$

with $g$ the acceleration due to gravity on Earth's surface, compute the phase and group velocities.

It is straightforward to compute the velocities from their definitions, but the result is instructive:

$$v_p = \frac{\omega}{k} = \sqrt{\frac{g}{k}}, \qquad v_g = \frac{d\omega}{dk} = \frac{1}{2}\sqrt{\frac{g}{k}} = \frac{v_p}{2}.$$

The phase velocity is twice the group velocity, and the waves are dispersive:
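One can confirm the factor of two numerically (a sketch; the value of $g$ and the wavenumber grid are arbitrary choices), estimating the group velocity $d\omega/dk$ by finite differences:

```python
import numpy as np

g = 9.8                          # m/s^2
k = np.linspace(0.1, 10, 1000)   # wavenumbers
omega = np.sqrt(g * k)           # deep-water dispersion relation

v_phase = omega / k
v_group = np.gradient(omega, k)  # numerical d(omega)/dk

# Ratio is ~2 everywhere: the phase velocity is twice the group velocity.
print(np.round((v_phase / v_group)[::200], 3))
```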
When waves of multiple wavelengths are superimposed, the wave shape more obviously disperses:
Dispersion is also literally responsible for the dispersion of light waves through a prism, as reproduced below:
In a material of index of refraction $n(k)$, light obeys the dispersion relation:

$$\omega = \frac{ck}{n(k)}.$$

In most media, the index of refraction is a very weakly increasing function of wavenumber. The group velocity can thus be written in terms of the phase velocity as:

$$v_g = \frac{d\omega}{dk} = v_p\left(1 - \frac{k}{n}\frac{dn}{dk}\right).$$

Since $n(k)$ is weakly increasing, the group velocity is slightly less than the phase velocity. This fact is responsible for the increased deflection of higher frequencies in a prism.
Lastly, note (as explored in the below problem) that matter waves representing particles in quantum mechanics have a non-trivial dispersion relation. Since $p = \hbar k$ in quantum mechanics according to the de Broglie relation, we see that wavenumber corresponds directly to momentum in this context, and the dispersion relation corresponds to the behavior of energy as a function of momentum in quantum mechanics.
The phase shift $\phi$ in solutions to the wave equation at first glance seems unimportant, since coordinates may always be shifted to set $\phi = 0$ for one particular solution. However, what is important is the relative phase shift between two different solutions to the wave equation, which is responsible for interference and diffraction patterns.
To see why relative phase shift is important, consider the superposition of two identical waves that have a relative phase shift of $\pi$:

$$A\sin(kx - \omega t) + A\sin(kx - \omega t + \pi) = A\sin(kx - \omega t) - A\sin(kx - \omega t) = 0.$$

These waves are called out of phase to denote the fact that the phase shift puts the peaks of one wave exactly opposite the peaks of the other. The result of the superposition is that the positive and negative peaks cancel, obtaining zero, which is called destructive interference.
If two waves are in phase, however, the peaks line up. This always occurs when the relative phase shift is zero, but also effectively occurs for small phase shifts. The result is constructive interference, where the peaks of the result are at a height given by the sum of the two original peaks:
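Both cases are easy to see numerically (a sketch with arbitrary sample points):

```python
import numpy as np

x = np.linspace(0, 2*np.pi, 9)
wave = np.sin(x)

destructive = wave + np.sin(x + np.pi)   # relative phase shift of pi
constructive = wave + np.sin(x)          # zero relative phase shift

print(np.allclose(destructive, 0))        # True: peaks cancel
print(np.allclose(constructive, 2*wave))  # True: peaks add
```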
Below, some examples of how superposition of waves at different phase shifts cause important interference and diffraction effects in physics are explored.
Photons corresponding to light of wavelength $\lambda$ are fired at a barrier with two thin slits separated by a distance $d$, as shown in the diagram below. After passing through the slits, they hit a screen at a distance $L$ away, and the point of impact is measured. Remarkably, both experiment and the theory of quantum mechanics predict that the number of photons measured at each point along the screen follows a complicated series of peaks and troughs called an interference pattern, as below. The photons must somehow exhibit the wave behavior of a relative phase shift to be responsible for this phenomenon. Find the condition for which maxima of the interference pattern occur on the screen.
Since $d \ll L$, the angle from each of the slits is approximately the same and equal to $\theta$. If $y$ is the vertical displacement to an interference peak from the midpoint between the slits, it is therefore true that:

$$\tan\theta \approx \sin\theta \approx \theta \approx \frac{y}{L}.$$
Furthermore, there is a path difference $\delta$ between the distances light travels from each of the two slits to the interference peak. Light from the lower slit must travel further to reach any particular spot on the screen, as in the diagram below:
The condition for constructive interference is that the path difference is exactly equal to an integer number of wavelengths. The phase shift of light traveling over an integer number $n$ of wavelengths is exactly $2\pi n$, which is the same as no phase shift and therefore produces constructive interference. From the above diagram and basic trigonometry, one can write:

$$\delta = d\sin\theta = n\lambda.$$

The first equality is always true; the second is the condition for constructive interference.

Now using $\sin\theta \approx \theta \approx y/L$, one can see that the condition for maxima of the interference pattern, corresponding to constructive interference, is:

$$\frac{dy}{L} = n\lambda,$$

i.e. the maxima occur at the vertical displacements of:

$$y = \frac{n\lambda L}{d}.$$
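Plugging in representative numbers (a sketch; the wavelength, slit separation, and screen distance are invented for illustration):

```python
lam = 500e-9   # wavelength: 500 nm green light
d = 0.1e-3     # slit separation: 0.1 mm
L = 1.0        # screen distance: 1 m

# Maxima at y = n * lam * L / d for integer n.
for n in range(4):
    print(n, n * lam * L / d, "m")   # one bright fringe every 5 mm
```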
When light shines on a thin film like a soap bubble, an interference pattern results. This is because the light that reflects off the top surface of the thin film has a small phase shift from the light that reflects back out off the bottom surface of the thin film, which has traveled an extra distance related to the thickness of the film (see below diagram).
To complicate things, when light reflects off a medium of higher index of refraction, Maxwell's equations require that the phase of the light shift by $\pi$.
If the thin film is of thickness $t$, find the condition for destructive interference in terms of $t$, the wavelength $\lambda$ of the light, the index of refraction $n$ of the film, and the angle of incidence $\theta_1$ with respect to the normal, when light entering from air shines on the film. Note that the index of refraction of the film is greater than that of air (for which $n = 1$).
For destructive interference, the total extra distance traveled (scaled by the index of refraction) must be an integer number of wavelengths of the light. This is because the ray that reflects off the top surface of the film picks up a phase shift of $\pi$. If the extra distance traveled (scaled by index of refraction) is an integer number of wavelengths, this extra phase shift puts the two rays perfectly out of phase, resulting in destructive interference. The reason for the scaling by index of refraction is that the effective velocity of light is slower when $n > 1$, so more phase is accumulated by traveling the same distance (the frequency is the same, but the velocity is slower, so there is more time for the frequency to accumulate phase).
To find the extra distance traveled in terms of $t$, first use Snell's law to find the angle $\theta_2$ at which the light propagates inside the film:

$$\sin\theta_1 = n\sin\theta_2.$$

From the diagram, one can see that the extra distance traveled inside the film is $\dfrac{2t}{\cos\theta_2}$:
There is an extra path difference from the amount the light that reflects off the top travels before the second ray exits the film parallel to it. This is segment $AC$ in the diagram. Some plane geometry (try it yourself!) gives the length of $AC$ as:

$$AC = 2t\tan\theta_2\sin\theta_1.$$
The total extra path difference accounting for the index of refraction is therefore:

$$\Delta = \frac{2nt}{\cos\theta_2} - 2t\tan\theta_2\sin\theta_1.$$

Using the expression for $\sin\theta_1$ in terms of $\theta_2$ from Snell's law and the fact that the $\pi$ phase shift puts the rays perfectly out of phase, one finds the condition for destructive interference, where $m$ is any integer:

$$2nt\cos\theta_2 = m\lambda.$$
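As a worked numerical instance (a sketch; the film index, wavelength, and angle of incidence are invented values), solving $2nt\cos\theta_2 = m\lambda$ for the smallest nonzero destructive thickness:

```python
import math

n = 1.33                     # index of a water-based soap film
lam = 600e-9                 # wavelength in vacuum, m
theta1 = math.radians(30)    # angle of incidence

# Snell's law: sin(theta1) = n * sin(theta2)
theta2 = math.asin(math.sin(theta1) / n)

m = 1                        # smallest nonzero integer
t = m * lam / (2 * n * math.cos(theta2))
print(t * 1e9, "nm")         # film thickness giving destructive interference
```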
The concept of a relative phase shift is also responsible for the experimental technique of interferometry, which was for instance used at LIGO to discover gravitational waves. Interferometers send laser light down and back along two perpendicular tubes and measure the interference pattern where the light rays recombine. If the length of either arm is slightly longer or shorter than the other, the light picks up a small relative phase which is measured by the interference pattern.
By Lookang (Own work) [CC BY-SA 4.0 (http://creativecommons.org/licenses/by-sa/4.0)], via Wikimedia Commons. https://en.wikipedia.org/wiki/Wave#Traveling_waves
Hass, Jeffrey. An Acoustics Primer: Chapter 6. Center for Electronic and Computer Music, School of Music, Indiana University. http://www.indiana.edu/~emusic/acoustics/amplitude.htm.
Image from https://en.wikipedia.org/wiki/Light under Creative Commons licensing for reuse and modification.
Image from https://en.wikipedia.org/wiki/Guitar_harmonics under Creative Commons licensing for reuse and modification.
Images from https://en.wikipedia.org/wiki/Dispersion_relation under GFDL licensing for reuse.
Image from https://en.wikipedia.org/wiki/Double-slit_experiment under Creative Commons licensing for reuse and modification.
Image from https://en.wikipedia.org/wiki/Thin-film_interference under CC-2.5. |
The qualitative data collection process may be assessed from two different points of view: that of the researcher administering the questionnaire and that of the respondent. A respondent may not care about the classification of the data he or she is inputting, but this information is important to the researcher, as it helps determine the method of analysis that will be used.
There are different methods of analysis which vary according to the type of data we are investigating. In statistics, there are two main types of data, namely; quantitative data and qualitative data.
For the sake of this article, we will be considering one of these two, which is the qualitative data.
Qualitative data is a type of data that describes information. It is investigative and also often open-ended, allowing respondents to fully express themselves.
Also known as categorical data, this data type isn’t necessarily measured using numbers but rather categorized based on properties, attributes, labels, and other identifiers. Numbers like national identification number, phone number, etc. are however regarded as qualitative data because they are categorical and unique to one individual.
Examples of qualitative data include sex (male or female), name, state of origin, citizenship, etc. A more practical example is a case whereby a teacher gives the whole class an essay that was assessed by giving comments on spelling, grammar, and punctuation rather than score.
Qualitative Data can be divided into two types, namely; Nominal and Ordinal Data
In statistics, nominal data (also known as nominal scale) is a classification of categorical variables, that do not provide any quantitative value. It is sometimes referred to as labelled or named data.
Derived from the Latin word "nomen" (meaning name), it is used to label or name variables without providing any quantitative value, although in some cases nominal data does take on a quantitative value.
However, this quantitative value lacks numeric characteristics. Unlike interval or ratio data, nominal data cannot be manipulated using the available mathematical operators.
For example, a researcher may need to generate a database of the phone numbers and location of a certain number of people. An online survey may be conducted using a closed open-ended question.
E.g: Enter your phone number with country code.
The best way to collect this data will be through closed open-ended options.
The country code will be a closed input option, while the phone number will be open.
Ordinal data, by contrast, is qualitative data with a set order or scale to it; thus, ordinal data is a collection of ordinal variables. For example, the data collected from asking a question with a Likert scale is ordinal.
Other examples of ordinal data include the severity of a software bug (critical, high, medium, low), fastness of a runner, hotness of food, etc.
In some cases, ordinal data is classified as a quantitative data type or said to be in between qualitative and quantitative. This is because ordinal data exhibit both quantitative and qualitative characteristics.
Various Qualitative data examples are applied in both research and statistics. These examples vary and will, therefore, be separately highlighted below.
Open-ended question approach
What is your highest qualification? _____
Closed-ended Question approach
What is your highest qualification?
Which of the following payment platforms are you familiar with?
They may even take it further by asking questions like, "How did you hear about them?". This may even help them improve their marketing strategy.
Where is your country of residence? _____
The severity of a bug may be said to be critical, high, medium or low. This data can be collected on either a nominal or ordinal scale.
How will you rate the new menu?
This is a 5 point Likert scale, a common example of ordinal data.
During the voting process, we take nominal data of the candidate a voter is voting for. The frequency of votes incurred by each candidate is measured, and the candidate with the highest number of votes is made the winner. In statistical terms, we call this mode.
Each embassy in every country has a database of the immigrants coming into the country. For example, the Nigerian embassy in the US has a database of all the legal African migrants to America. This way, the US Government will have an estimate of the population of Africans in the US. Not only that but also personal details like gender, countries, etc. that may help in proper statistics.
During an event, organizers take nominal data of attendees, which include name, sex, phone number, etc. An example question like "Where did you hear about this event? " may help them determine which is the most effective marketing platform.
When trying to build a database of people with diverse backgrounds like different genders, races, classes, etc. we use qualitative data. For example, when employing people, organizations that care about having equal female representation take statistics of the number of male and female employees to balance gender.
Ordinal data is a data type that has a scale or order to it. This order is used to calculate the midpoint of a set of qualitative data.
For example, qualitative data on the order of arrangement of goods in a supermarket will help us determine the goods at the centre of the supermarket. This may even be a factor in determining whether the position of good influences the number of sales.
Characteristics of Qualitative Data
Qualitative data is of two types, namely; nominal data and ordinal data.
Qualitative data sometimes takes up numeric values but doesn't have numeric properties. This is a common case in ordinal data.
Ordinal data have a scale and order to it. However, this scale does not have a standard measurement.
Qualitative data is analyzed using frequency, mode, and median distributions: nominal data is analyzed with the mode, while ordinal data can be analyzed with both the mode and the median.
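A minimal sketch in Python (the sample responses below are invented for illustration) of how mode and median apply to each type:

```python
from statistics import mode, median

# Nominal data: categories with no order -> summarize with the mode.
genders = ["female", "male", "female", "female", "male"]
print(mode(genders))                 # 'female'

# Ordinal data: ordered categories -> mode and median both make sense.
severity_order = {"low": 1, "medium": 2, "high": 3, "critical": 4}
bugs = ["low", "high", "medium", "high", "critical"]
ranks = sorted(severity_order[b] for b in bugs)
print(mode(bugs))                    # 'high'
print(median(ranks))                 # 3 -> 'high' is the median severity
```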
When collecting qualitative data, researchers are interested in how, i.e., specific details around the occurrence of an event, with a particular interest in the perspective of the subject of study. Some of the techniques used in collecting qualitative data are explained below:
This is the process of studying a subject for a given period to access some information. This may be done with or without consent of the subject that is being observed.
Observation may be done in several ways. It is not necessarily done by looking at the subject for a long period.
It may be through reading materials written by or about the subject, stalking on social media, asking about them, etc.
An interview is a one-on-one conversation between two groups of people in which one party interrogates the other. The word "group" is used because at times there may be two or more interviewers and two or more interviewees.
In recent times, we now have phone interviews and Skype (video) interviews.
The subject may be interviewed to collect qualitative data directly from them.
This is a very common technique for collecting qualitative data from a group of respondents. Traditional questionnaires are printed on paper and given to the respondents to fill and handed back to the researcher.
Researchers can now create online surveys and send them to respondents to fill. This is better than the traditional method because it automatically collects the data and prepares for analysis.
Qualitative data analysis is the process of moving from the qualitative data collected to some form of explanation or interpretation of the subject under investigation. There are two main stages of qualitative data analysis.
This is the first stage of qualitative data analysis, where raw data is converted into something meaningful and readable. This is done in four steps:
Coding is a major step in analyzing qualitative data. It is the process of classifying data by grouping them into meaningful categories to easily analyze them.
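For instance, a minimal sketch of coding in Python (the responses and codebook below are invented for illustration, not taken from any real study):

```python
# Raw open-ended responses (invented examples).
responses = [
    "The checkout page kept crashing",
    "Delivery took two weeks",
    "Support never replied to my email",
]

# Codebook: keyword -> category, developed from reviewing the data.
codebook = {"crash": "technical issue",
            "delivery": "logistics",
            "support": "customer service"}

coded = []
for r in responses:
    for keyword, category in codebook.items():
        if keyword in r.lower():
            coded.append((r, category))  # group response under its code
print(coded)
```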
Things to note when developing codes:
Closely review the developed categories and use them to code your data. Having teamwork on data coding will accommodate different perspectives.
Don't be afraid to include or remove subcategories as you move on. This may turn out to be needed in the case that the codes are too broad or too detailed.
The Coding Process
This is the point where you take a break from the hard work. Step back and observe the coded data for emerging themes, patterns and relationships.
Here, you check for similarities and differences and see what each group is depicting.
This is the process of streamlining the remaining chunk of data and keeping it brief. All parts of the data should be summarised to get them ready for analysis.
After completing the first stage, the data is ready for analysis. There are two main data analysis approaches used, namely; deductive and inductive approach.
The deductive approach to qualitative data analysis is the process of analysis that is based on an existing structure or hypothesis. Researchers pick an interesting social theory and test its implications with data.
This approach is fairly easy since the researcher already has an idea about the likely results of the analysis before conducting the research. It is usually associated with scientific investigations.
The inductive approach to qualitative data analysis is the process of developing a new theory or hypothesis for data analysis. Researchers find themes, patterns, and relationships in the data and work to develop a theory that can explain them.
This is a more difficult and time-consuming approach compared to the former.
By collecting qualitative data using Formplus, researchers have access to tools that will make their research simple and easy. Data analysis is made easy with an efficient data collection tool that records real-time data.
This online form builder helps businesses conduct better customer satisfaction surveys with qualitative data. The data collected through Formplus can be exported in different formats that are compatible with data analysis tools.
Formplus eliminates the stress and manpower needs that may arise from dealing with qualitative data. No matter how big the sample size is, Formplus makes collection easy for both respondents and researchers.
To collect qualitative data using Formplus builder, follow these steps:
Formplus gives you a 21-day free trial to test all features and start collecting qualitative data from online surveys. The pricing plan starts after trial expiration at $20 monthly, with reasonable discounts for education and non-governmental organizations.
We will be creating a sample qualitative data collection form that takes the name (nominal data) and bug severity level (ordinal data) from a respondent.
Save the form and preview
Qualitative data in statistics is similar to nouns and adjectives in the English language, where nominal data is the noun while ordinal data is the adjective. This comparison is an attempt towards breaking down the meaning of qualitative data into relatable terms for proper understanding.
A proper understanding of what qualitative data is aids researchers in identifying them, using them and choosing the best method of analysis for them.
In qualitative data analysis, we break data into smaller bits and group them before analysis. This is to properly understand the data and ease the analysis process.
What is a Neural Network?
Artificial neural networks (ANN), more commonly referred to as neural networks (NN), are computing systems inspired by the biological neural networks that constitute human brains.
Neural Networks: A Brief History
Neural networks may seem new and exciting, but the field itself is not new at all. Frank Rosenblatt, an American psychologist, conceptualized and tried to build a machine that responds like the human mind in 1958. He named his machine “Perceptron.”
For all practical purposes, artificial neural networks learn by example, in a manner similar to their biological counterparts. External inputs are received, processed, and actioned in the same way the human brain does.
The Layered Structure of Neural Networks
We know that different sections of the human brain are wired to process various kinds of information. These parts of the brain are arranged hierarchically in levels. As information enters the brain, each layer, or level, of neurons does its particular job of processing the incoming information, deriving insights, and passing them on to the next and more senior layer. For example, when you walk past a bakery, your brain will respond to the aroma of freshly baked bread in stages:
- Data input: The smell of freshly baked bread
- Thought: That reminds me of my childhood
- Decision making: I think I’ll buy some of that bread
- Memory: But I’ve already eaten lunch
- Reasoning: Maybe I could have a snack
- Action: Can I have a loaf of that bread, please?
This is how the brain works in stages. Artificial neural networks work in a similar manner. Neural networks try to simulate this multi-layered approach to processing various information inputs and basing decisions on them.
At a cellular, or individual neuron level, the functions are fine-tuned. Neurons are the nerve cells in the brain. Nerve cells have fine extensions known as dendrites. They receive signals and then transmit them to the cell body. The cell body processes the stimuli and makes the decision to trigger signals to other neurons in the network. If the cell decides to do so, the extension on the cell body known as the axon will conduct the signal to other cells via chemical transmission. The working of neural networks is inspired by the function of the neurons in our brain, though the technological mechanism of action is different from the biological one.
How Neural Networks Function Similar to The Human Brain
An artificial neural network in its most basic form has three layers of neurons. Information flows from one to the next, just as it does in the human brain:
- The input layer: the data’s entry point into the system
- The hidden layer: where the information gets processed
- The output layer: where the system decides how to proceed based on the data
- More complex artificial neural networks will have multiple layers, some hidden (a minimal sketch of this layered flow is shown below).
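To make the layered structure concrete, here is a minimal sketch in Python with NumPy; the layer sizes, weights, and input values are invented for illustration and not taken from any particular system:

```python
import numpy as np

def sigmoid(z):
    # Nonlinear activation: squashes each value into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative sizes: 3 inputs, 4 hidden neurons, 2 outputs.
rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(4, 3))   # weights: input -> hidden
W_output = rng.normal(size=(2, 4))   # weights: hidden -> output

x = np.array([0.5, -1.2, 3.0])       # input layer: the data's entry point
h = sigmoid(W_hidden @ x)            # hidden layer: where information is processed
y = sigmoid(W_output @ h)            # output layer: basis for the decision
print(y)
```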
The neural network functions via a collection of nodes or connected units, just like artificial neurons. These nodes loosely model the neuron network in the animal brain. Just like its biological counterpart, an artificial neuron receives a signal in the form of a stimulus, processes it, and signals other neurons connected to it.
But the similarities end there.
The Neuronal Workings Of An Artificial Neural Network
In an artificial neural network, the artificial neuron receives a stimulus in the form of a signal that is a real number. Then:
- The output of each neuron is computed by a nonlinear function of the sum of its inputs.
- The connections among the neurons are called edges.
- Both neurons and edges have a weight. This parameter adjusts and changes as the learning proceeds.
- The weight increases or decreases the strength of the signal at a connection.
- Neurons may have a threshold. A signal is sent onward only if the aggregate signal crosses this threshold (see the single-neuron sketch below).
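As a toy illustration of these rules (a sketch only; the weights, inputs, and threshold are made-up values), a single artificial neuron could be computed as:

```python
import numpy as np

def neuron(inputs, weights, threshold):
    # Weighted sum of the inputs arriving along this neuron's edges.
    aggregate = np.dot(weights, inputs)
    # The signal is sent onward only if it crosses the threshold.
    return np.tanh(aggregate) if aggregate > threshold else 0.0

print(neuron(np.array([1.0, 0.5]), np.array([0.8, -0.3]), threshold=0.2))
```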
As mentioned earlier, neurons aggregate into layers. Different layers may perform different modifications on their inputs. Signals flit from the first layer (the input layer) to the last layer (the output layer) in the manner discussed above, sometimes after traversing the layers multiple times.
Neural networks inherently contain some manner of a learning rule, which modifies the weights of the neural connections in accordance with the input patterns they are presented with, just as a growing child learns to recognize animals from examples of animals.
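A minimal example of such a learning rule is the classic perceptron update, sketched below; this is one reasonable concrete choice rather than the only rule in use, and the dataset and learning rate are invented for illustration:

```python
import numpy as np

# Tiny labeled dataset: input patterns and target outputs (logical AND).
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
targets = np.array([0, 0, 0, 1])

w = np.zeros(2)
b = 0.0
lr = 0.1  # learning rate

for _ in range(20):  # repeatedly present the input patterns
    for x, t in zip(X, targets):
        prediction = 1 if (w @ x + b) > 0 else 0
        error = t - prediction
        # The weights adjust in accordance with the presented pattern.
        w += lr * error * x
        b += lr * error

print(w, b)  # learned weights that separate the AND patterns
```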
Neural Networks and Deep Learning
It is impossible to talk about neural networks without mentioning deep learning. The terms “neural networks” and “deep learning” are often used interchangeably, although they are distinct from each other. However, the two are closely connected as one depends on the other to function. If neural networks did not exist, neither would deep learning:
- Deep learning forms the cutting edge of a field already at the forefront of technology: artificial intelligence (AI).
- Deep learning is a specialized form of machine learning, which is designed to teach computers to process and learn from data.
- With deep learning, the computer continually trains itself to process data, learn from it, and build more capabilities. The multiple layers of more complex artificial neural networks are what make this possible.
- Complex neural networks contain an input layer and an output layer just like simple-form neural networks, but they also pack in multiple hidden layers. Such networks are therefore called deep neural networks and are conducive to deep learning.
- A deep learning system teaches itself and becomes more “knowledgeable” as it goes along, filtering information through multiple hidden layers, in a manner similar to the human brain with all its complexities.
Why Deep Learning Matters for Organizations
Deep learning is like the new gold rush or the latest oil discovery in the tech world. The potential of deep learning has piqued the interest of big, established corporations as well as nascent startups and everything in between. Why?
It is part of the data-driven big picture, in particular, thanks to the rise in the importance of big data. If you think of internet-derived data as crude oil stored in databases, data warehouses, and data lakes, waiting to be drilled into with various data analytics tools, deep learning is the oil refinery that takes the crude data and converts it into final products you can use.
Deep learning is the endgame in a market flooded with analytical tools sitting on a hotbed of data: without an efficient and state-of-the-art processing unit, extracting anything of value is just not possible.
Deep learning has the potential to replace humans by automating repetitive tasks. However, deep learning cannot replace the thought processes of a human scientist or engineer creating and maintaining deep learning applications.
Making the Distinction Between Machine Learning And Other Kinds Of Learning
When it comes to the how of machine learning, it is all about training the learning algorithms such as linear regression, K-means, Decision Trees, Random Forest, K-nearest neighbors (KNN) algorithm, and support vector machine or SVM algorithm.
These algorithms sift through datasets, learning as they go along to adapt to new situations and look for interesting and insightful data patterns. Data is the key substrate for these algorithms to function at their best.
The datasets used for training machine learning can be labeled. The dataset comes with an answer sheet to inform the computer of the right answer. For example, a computer scanning an inbox for spam can refer to a labeled dataset to understand which emails are spam and which ones are legitimate. This is called supervised learning. Supervised regression or classification is accomplished by means of algorithms such as linear regression and K-nearest neighbors.
When datasets are not labeled, and algorithms like K-means are employed and directed to aggregate cluster patterns without the benefit of any reference sheets, it is called unsupervised learning.
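A brief sketch of the two settings using scikit-learn (a sketch assuming scikit-learn is available; the toy data points and labels are invented):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

# Supervised learning: the dataset comes with an "answer sheet" (labels).
X = np.array([[1, 1], [1, 2], [8, 8], [9, 8]])
y = np.array([0, 0, 1, 1])  # known labels, e.g. legitimate vs. spam
clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(clf.predict([[2, 1]]))  # predicts label 0

# Unsupervised learning: no labels; K-means clusters the patterns itself.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # cluster assignments discovered from the data alone
```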
Neural Networks and Fuzzy Logic
As an aside, it is also important to make the distinction between neural networks and fuzzy logic. Fuzzy logic allows making concrete decisions based on imprecise or ambiguous data. On the other hand, neural networks attempt to incorporate human-like thinking processes to solve problems without first designing mathematical models.
How Do Neural Networks Differ from Conventional Computing?
To better understand how computing works with an artificial neural network, one must first understand how a conventional "serial" computer and its software process information.
A serial computer has a central processor that can address an array of memory locations where data and instructions are stored. The processor reads instructions and any data the instruction needs from within memory addresses. The instruction is then executed and the results saved in a specified memory location.
In a serial system or a standard parallel one, the computational steps are deterministic, sequential, and logical. Furthermore, the state of a given variable can be tracked from one operation to another.
The Workings of Neural Networks
In contrast, artificial neural networks are neither sequential nor necessarily deterministic. They do not contain any complex central processors. Instead, they are made up of several simple processors that take the weighted sum of their inputs from other processors.
Neural networks do not execute programmed instructions. They respond in parallel (either simulated or actual) to the pattern of inputs presented to them.
Neural networks do not contain any separate memory addresses for data storage. Instead, information is contained in the overall activation state of the network. Knowledge is represented by the network itself, which is quite literally more than the sum of its individual components.
Advantages of Neural Networks Over Conventional Techniques
Neural networks can be expected to self-train quite efficiently on problems where the relationships are dynamic or nonlinear. This ability is further enhanced if the internal data patterns are strong. It also depends to some extent on the application itself.
Neural networks are an analytical alternative to standard techniques, which are somewhat limited by assumptions such as strict linearity, normality, and variable independence.
The ability of neural networks to examine a variety of relationships makes it easier for the user to quickly model phenomena that may have been quite difficult, or even impossible, to comprehend otherwise.
Limitations of Neural Networks
There are some specific issues potential users should be aware of, particularly in connection with backpropagation neural networks and certain other types of networks.
Process is Not Explainable
Backpropagation neural networks have been referred to as the ultimate black box. Apart from outlining the general architecture and possibly seeding it with some random numbers, all the user needs to do is provide the input, keep an eye on its training, and then receive the output. Some software packages allow users to sample the network's progress over time. The learning itself in these cases progresses on its own.
The final output is a trained network that is autonomous in the sense that it does not provide equations or coefficients defining a relationship beyond its own, internal mathematics. The network itself is the final equation of the relationship.
Slower to Train
In addition, backpropagation networks tend to be slower to train than other types of networks and sometimes require thousands of epochs. This is because the machine’s central processing unit must compute the function of each node and connection separately. This can be highly cumbersome and cause problems in very large networks containing a huge amount of data. Contemporary machines do work fast enough to sidestep this issue, though.
Applications of Neural Networks
Neural networks are universal approximators. They happen to work best if the system has a high tolerance for error.
Neural networks are useful:
- For understanding associations or discovering regular elements within a set of patterns
- Where the data is enormous either in volume or in the diversity of parameters
- Where relationships between variables are vaguely understood
- Where conventional approaches fall short in describing relationships
This beautiful, biology-inspired paradigm is one of the most elegant technological developments of our era.
This false-color image from NASA's Cassini spacecraft shows Titan in ultraviolet and infrared wavelengths.
Credit: NASA/JPL/Space Science Institute
An untold number of cosmic impacts could have created the mysteriously thick atmosphere of Saturn's largest moon Titan, suggest experiments with laser guns.
Titan has always stood out as the only moon in the solar system with a substantial atmosphere. In fact, the surface pressure on Titan is 50 percent greater than the pressure on Earth. [Photos: The Rings and Moons of Saturn]
The main ingredient of Titan's atmosphere is nitrogen, just as it is on Earth. Where this nitrogen came from has long been debated. For instance, it could be primordial, accumulating as Titan formed, or it could have originated later.
Weighing the options
In 2005, the Huygens probe carried by NASA's Cassini spacecraft to Saturn ruled out a primordial origin for this nitrogen. Titan's atmosphere apparently has extremely low levels of the isotope argon-36, while high amounts are expected in an atmosphere rich in primordial nitrogen.
There are a number of other explanations for how this atmospheric nitrogen might have formed after Titan's birth. For instance, sunlight in Titan's atmosphere might have broken apart ammonia, a molecule made of nitrogen and hydrogen.
Nearly all these suggestions, however, require that Titan formed at relatively high temperatures, which would have led the moon to differentiate into a rocky core and an icy mantle layer, yet Cassini's radar scans suggested that Titan is not fully differentiated. Comets loaded with nitrogen might have delivered it to Titan, but that would also have led to higher levels of argon-36 than currently seen.
Now scientists in Japan suggest that countless numbers of asteroids and comets slamming into ammonia ice on Titan could have converted it to nitrogen gas several hundred million years after the moon's formation.
"Our results suggest that hypervelocity impacts have played a key role," researcher Yasuhito Sekine, a planetary scientist at the University of Tokyo, told SPACE.com.
Solar system dodgeball
During an era known as the Late Heavy Bombardment about four billion years ago, the solar system was very much like a shooting gallery, with cosmic impacts regularly blasting planets and moons. To see if such impacts would deliver enough energy to convert ammonia ice to nitrogen, researchers used laser guns and "bullets" made of gold, platinum or copper foil. The beams vaporized the back of these bullets, propelling them at high speeds at targets made of ammonia and water ice.
The researchers found "ammonia is very easily converted to nitrogen molecule by impacts," Sekine said.
They calculated that 330 million billion tons (300 million billion metric tons) worth of impactors could have produced the current amount of nitrogen seen on Titan, "a plausible mass of impactors during the Late Heavy Bombardment," noted planetary scientist Catherine Neish at Johns Hopkins University, who did not take part in this research.
"It's an interesting new hypothesis," Neish told SPACE.com. "Differentiating between the different hypotheses will require a more detailed understanding of Titan's internal structure, and the composition of comets and-or other Saturnian satellites." She suggested that a future mission to a comet would very likely provide key evidence to help confirm or refute the idea.
One question would be where all the craters from such impacts might be. Titan has only about 50 recognized craters, Neish said. "Does this imply that Titan's surface is very young?" she asked, suggesting a young surface could have covered up most of the craters on Titan.
The scientists detailed their findings online May 8 in the journal Nature Geoscience. |
Science, Gr. 9. Atoms & Elements Unit – Density. Density Calculations Worksheet I. density = mass / volume. UNITS OF DENSITY: solids (g/cm3), liquids (g/mL)
The 6th grade student will continue to build on their previous understanding of the properties of matter by classifying substances by their physical and chemical properties. ... Buoyancy applies to both liquids and gases and is determined in part by density and fluid displacement.
EARTH SYSTEMS: ATMOSPHERE. The emphasis in this 6th grade unit will be to describe the components of the atmosphere including oxygen, nitrogen, and water vapor, and identify the role of atmospheric movement in weather change.
Creating Histograms and Approximating Probability Density Functions. A histogram is used to display data by groups (or bins) in a bar graph format. A histogram gives you a visual way to view frequencies. The . Histogram.
6TH GRADE EARTH SCIENCE SYLLABUS AND CURRICULUM INFORMATION. ... of the student’s grade. Class Work. Activities, worksheets, ... density, and composition. d. Describe processes that change rocks and the surface of the earth (volcanoes .
For more information on current content standards by grade level, visit the Dept of Education website at: ... In class assignments will consist of worksheets, ... 6th grade Math/Science Last modified by: Sarah Perry
Calculators, worksheets, measurement. Lab activities. Vocabulary. B1.Acquire/ Understand /Use. Comprehension. ... Explain density, dissolving, compression, ... 6th Grade Science Curriculum Map 2010 pg. 4
Worksheets and quizzes - labs. Teacher observations - Chapter/Unit. Test. ... Sixth Grade Science Pg. 10 GRADING PERIOD: 3rd and 6th 6 weeks. Time Frame Unit/SOLs SOL# Strand Resource. Assessments ... • Air pressure and density • Layers of the Atmosphere. 6.1 embedded. 6.4 c,d,e,g. 6.6 a,b,c ...
V. Supplementary worksheets, materials and handouts: For class: Picture of different scales that are used to measure mass . ... 6th grade level: Density is the mass of a substance per unit volume. In other words, it is the mass of a sample ...
The density of water at 4oC is known to be 1.00 g/mL. Kayla experimentally found the density of water to be 1.075 g/mL. What is her percent error? ...
6th Grade Science. Tentative / Weekly Assignment Sheet. For the Week of Oct 24, ... - Complete “Density” GIZMO for extra credit. - Use textbook to complete Chapter 1 worksheets (pages 5-16) [due Tue, Nov 1] Thu.
Interactive Chalkboard CD_ROM Foldable Worksheets, p. 9. Note taking Worksheets, pp: 37-39 Portfolio Assessment, ... determining the density of a pencil. Science Online: ... Unit: Life Structure and Classification Level: 6th Grade Week of: ...
This skill is introduced in grade 5 but more exposure strengthens it significantly. Activity 2: Collecting Data (SI GLEs: 2, 6, 8, 11, 12, 19, 22, 23; ... This unit also introduces the concept of density and illustrates how density can be used as an example of a physical property.
1 lb brush = 1/400 cu yd (assuming a similar bulk density to sawdust) 13 lbs manure = 1/115 cu yd. 400 : 115 = 3.5. 2d) What moisture content would this mixture give you? Show your math. Poultry manure moisture is 30%; brush moisture is 25%.
... (approx. 34” square) and a notebook with lesson plans, worksheets, and activities; string or flexible measuring tape; clear grid overlay, calculators. ... high-density, etc.; students can ... 6th Grade: * use cardinal directions (GPS) ...
... the compacted standards (in grades 7 and 8) in Appendix A of the CCSS. Also discuss the “California additions” in 6th and 7th grade and in the 8th grade Algebra I ... provides worksheets and addresses the memorization of mathematics is less important than curriculum that develops ...
Land Use Maps – Creation and interpretation of Population Density maps. COURSE OF STUDY: 7th GRADE WORLD GEOGRAPHY CURRICULUM. STANDARD # 1: Explain how people in North America developed a “western world-view” based on a set of common characteristics.
Equation Grade expected The table below shows the lengths and corresponding ideal weights of sand sharks. Length 60 62 64 66 68 70 72 Weight 105 114 124 131 139 149 158 Predict the weight of a sand shark whose length is 75 inches.
XI Grade (Semester 1) Chapter 1. Worksheet 6th . Topic : Cumulative Frequency Distribution. TIME : 3 X 45 minutes. SMAK ST. ALBERTUS (ST. ... Mass ( g) Cumulative frequency Mass ( g) Frequency Frequency density 8 8 19 57 89 141 216 266 290 300 The histogram. Exercise 6. The speeds of 100 motor ...
In the 6th grade, students learn about ... Density is introduced formally in the 8th grade. Disclaimer: ... Record in the worksheets the results of each test. Step 5: Repeat Steps 2-4 with the other liquids. Use fresh pieces of plastics for each liquid.
6TH Grade MATHEMATICS. In Grade 6, instructional time should focus on four critical areas: (1) connecting ratio and rate to whole number multiplication and division. and using concepts of ratio and rate to solve problems; (2) completing.
Life Cycle of a Star Lesson Plan. Name: Courtney M. Mutschler Grade level: 5/6th. Subject Area: Concept/Vocabulary Topic: Life Cycle of a Star. Learning Objectives for the unit:
Grade 5. Commonwealth of Virginia. Department of Education. Richmond, Virginia. 2006 ... Use the completed worksheets for immediate assessment. ... (Salt water is heavier than fresh water, i.e., it has a greater density.) 3.
Discipline Grade Eight Grade Nine Grade Ten Grade Eleven Grade Twelve Algebra I Required Required Required Required Required Geometry Required ... variable and can interpret the probability of an outcome as the area of a region under the graph of the probability density function associated ...
Grade 8 Mathematics Unit 2: Rates, Ratios, and Proportions. Time Frame: Approximately four weeks. Unit Description. ... Density, velocity, and monetary conversions are connected to algebraic relationships. Analyses of rates of change of sides, ...
Grade 6 Science (Honors) PACING & STANDARDS ... area, volume, density in scientific scenarios. Unit Activities: Metric Staircase. Chapter 2, Sections 1, 2, 3. Practice of liquid and solid measurement. Metric ... Complete Disney Science worksheets on film. - Complete cross-curricular worksheets ...
Middle School Science Group QCTeach 2006. Rationale for changes: We made several changes to the original 6th grade science lesson on density. After watching the video lesson, we identified that the teacher had a strong grasp of the content but was lacking in materials for students to manipulate.
Which composition describes rocks light in color and low density? Felsic. Which composition describes rocks dark in color and high density? Mafic. Describe 2 key identifying features of Igneous rocks- 1) Glassy 2) Intergrown crystals. Which Igneous Rock?
... 40 minute periods Grade level 4th-6th Grade Curriculum fit Social Studies Materials For ... Population Density Maps of The U.S. 1810-1910 (Westward movement of the ... Either have students turn in their United States maps and their worksheets or put them in a safe place until the ...
6th Grade Content & Standard(s) Code S6CS3-4 Name of Unit The Metric System Page Numbers. Unit Decisions. Pages 1-9 Acquisitions Lessons, Pages 11-18,24
Grades: The target grade level is 5th-6th grade, ... mass, density, radius, orbital distance, orbital period, rotational period, average temperature, ... Worksheets and Final assessment questions. Overall cooperation and individual behavior.
6th grade: parent rocks and soil types ... Density, thermocline, coriolis effect, barrier islands ... and phases of the moon. How to relate the position of the earth and sun (review) through the models and worksheets I will create a sun-earth system and explain the rotation, revolution ...
... width, perimeter, area, mass, volume, and density will be recorded over the 5-day period, and the results graphed and evaluated. Note that this means you will have to measure them 6 times, ... Complete the gro-beast lab worksheets. ... Your grade will be calculated roughly as follows.
Grade Level: 6th Class Title: ... Properties of substances, such as boiling point, solubility, density and melting point, Difference between substances, compounds, and mixtures, Relationships between atoms, molecules, ... Completed worksheets. Assessments.
6th Grade Science. Source of the lesson: UTeach Outreach. ... Supplementary materials needed for each class and worksheets “Hot Air Balloon Construction” sheet ... Density is the mass of an object divided by the object’s volume.
is created by each grade level and available most weeks on Thursday. ... - Subtopics: Weather Measurement; The Atmosphere; Density of Gases; Heating of the Earth’s Surface; Seasonal Changes Across the Earth; ... Worksheets, Short-answer questions.
6th grade – Earth Science. ... Students can add more information to their worksheets. Engage 2. Review of the air chamber demonstration. ... Students will understand the relationship between temperature and density and how materials of different densities interact.
Worksheets: Adopt-An-Element Baby Book Project ... DENSITY. Sodium, Potassium and Calcium DESCRIPTORS ... are synthetic, that is, human-made. All of the rare earth metals are found in group 3 of the periodic table, and the 6th and 7th periods. 6/19/2013 YCS GRADE 7 SCIENCE: UNIT 6
density mass volume weight Sources. Fisher, D., Brozo, W.G., Frey, N., & Ivey, G. (2006). 50 content area strategies for . adolescent ... GRADE LEVEL SPAN Author: wbrozo Last modified by: CSESSION Created Date: 6/6/2012 11:27:00 PM Company:
Spelling worksheets are given weekly, ... 6th Grade Science: ... They will practice using metric units of length, determine mass, and explore the concept of density. Energy: heat, light and renewable energy resources---energy of the future
Ninth Grade. Remediation Math. Table of Contents. Unit 1: Number and Operation 1. ... Density is a measure of how tightly packed the particles are in a ... Another suggestion is to have integer operation worksheets available as an alternate activity for those students who have trouble meeting ...
6th Grade STM and ET Standards from Kentucky’s POS ... Density of the material is represented as the slope of each line—this means that the ratios would be equal (in proportion). ... Materials for each station (see worksheets)
By the end of 4th grade, ... I believe these generators can make the process of producing appropriate math facts worksheets as painless and efficient as possible. ... 6th session. Please let me know that you have received this document .
The fellows and teachers work together for one continuous academic year in 6th, 7th, or 8th grade science ... Students will create a model of the Earth. After labeling the model, students will complete the Time Zone activity and worksheets. 1. orbit. 2 ... Density is mass per unit volume ...
The county has a population of approximately 100,000, and a population density approximately half the Maryland average. The economic activity of the county is diverse, including agriculture, fishing, ... I did worksheets today. ... 6th grade mathematics teacher, Elmtree MS) Overall, ...
Saturn is the 6th planet from the sun and the second largest. It is a planet with very low density ... The two teaching strategies I plan to use for formative assessment include follow along worksheets and jigsaws.
Their density is extremely low so that they are soft enough to be cut with a knife. (1 outer level electron) Group 2: Alkaline-earth Metals – Slightly less reactive than alkali metals. They are silver colored and more dense than alkali metals.
6th Grade Content. Middle School. 11th Grade Content. High School. 7th Grade Content. ... then teacher will give worksheets in CRB covering concepts. ... This rate of mass in grams per milliliter of volume is the density of the mineral.
... 4th, 6th, 7th Morgan Park High School D. Hawes [email protected] By appt. 2nd, 3rd, 4th, 6th, 7th 1744 W. Pryor Ave ... scientific lab equipment, worksheets and supplemental readings. Students in some classes will use ... Density. Uncertainty. Significant figures . Dimensional analysis.
· Demonstrate basic principles of fluid dynamics, including hydrostatic pressure, density, salinity, and buoyancy ... The highest grade possible the student can earn on the retest is a 75 ... 6th period – 1 box Kleenex. 7th period – 1 ...
Data Types in Java
We know that we need a variable to store data. Internally, a variable represents a memory location where data is stored. When we use a variable in a Java program, we have to declare it first, e.g. as int x;
Here, “x” is a variable that can store int (integer) type data. It means that int represents the nature of data which can be stored in x. Thus, int is called a data type in Java.
Basically, a data type in Java is a term that specifies the memory size and the type of values that can be stored in a memory location. In other words, data types define the different values that a variable can take.
1. Variable x can store an integer number like 100, as x=100;
Here, = represents that value 100 is stored into x.
2. String name=”Deep”; // Here, String is a data type and name is a variable which can take only string values.
3. int num=10; // Here, int is the data type and num is a variable which can take only integer values like 10, 20, 30, 40 and so on. The semicolon ends a statement in Java, like the full stop in English, so that Java knows the statement is complete.
The Java language provides several data types. All data types are divided into two categories. They are as follows:
1. Primitive data types (also called intrinsic or built-in types)
2. Non-primitive data types (also called derived or reference data type)
Primitive Data types in Java
Primitive data types are those data types whose variables can store only one value at a time; you cannot store multiple values in a single primitive variable. These data types are predefined in Java, and each is named by a keyword.
int x; // valid
x=10; // valid, because “x” stores only one value at a time, since it is a primitive-type variable.
x=10, 20, 30, 40; // invalid
Primitive data types are not user-defined data types; i.e., programmers cannot create new primitive data types.
Types of Primitive data types in Java
Java defines eight primitive data types: boolean, char, byte, short, int, long, float, and double. These can be further categorized into four groups. They are as follows:
Now it is important to understand the memory limitations which decide which data type should be used for a particular number. For example, when you define the age of a person, the age will never exceed 120 or so.
In this case, using the short data type is enough; using long would waste memory. Therefore, you will have to understand the following important terms for every data type.
1. Memory size allocated
Each data type has some memory size defined in Java. Whenever a variable is declared with a data type, the memory size is automatically defined in the RAM by the JVM.
If we declare int a; then the size of the memory is defined as 4 bytes.
2. Default value
Every primitive data type has a default value defined in Java. When the programmer does not assign a value to a variable (specifically, a field), the JVM assigns the default value during object creation.
3. Range of values
Integer data types in Java
1. Byte data type is an 8-bit signed two’s complement integer.
2. The default memory size allocated is 8 bits, i.e. 1 byte.
3. A byte represents values from -128 to 127, a total of 256 (2^8) numbers.
4. Default value of byte is 0.
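The example program that the next paragraph refers to is missing from this copy. Here is a minimal reconstruction of what it presumably looked like (the class name ByteDemo is an assumption; the variable num and the value 100 come from the discussion below):

public class ByteDemo {
    public static void main(String[] args) {
        byte num = 100; // 100 lies within the byte range of -128 to 127
        System.out.println("num = " + num);
    }
}

Output: num = 100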
In the preceding example program, we declared the variable num with the byte data type and stored the value 100 in it. A byte can hold any value between -128 and 127.
But if you assign the value 150 in place of 100 to the variable num, you will get a compilation error ("Type mismatch: cannot convert from int to byte") because the value is out of the range of the byte type, which is -128 to +127.
1. Short data type has a greater memory size than byte and less than int.
2. A short data type is a 16-bit signed two’s complement integer.
3. The default memory size allocated is 16 bits, i.e. 2 bytes.
4. It represents values from -32,768 to 32,767, a total of 65,536 (2^16) numbers.
5. The default value is 0.
Let’s see an example program.
Program source code 2:
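The original listing did not survive in this copy. Here is a minimal sketch consistent with the discussion below (the class name is an assumption; the value 200 comes from the text):

public class ShortDemo {
    public static void main(String[] args) {
        short num = 200; // 200 exceeds the byte maximum of 127 but fits easily in a short
        System.out.println("num = " + num);
    }
}

Output: num = 200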
We could not use the byte data type in the above example because a byte cannot hold the value 200, but a short can, because of its wider range.
1. This data type is the one most commonly used for integer values in Java programming.
2. Int data type is a 32-bit signed two’s complement integer.
3. It has a range from -2,147,483,648 to 2,147,483,647.
4. The memory size is 32 bits, i.e. 4 bytes, and the default value is 0.
Let’s create a program where we will store two values into a and b with data type int.
Program source code 3:
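The listing itself is missing here. A minimal sketch matching the description above (the class name and the specific values are assumptions):

public class IntDemo {
    public static void main(String[] args) {
        int a = 100;
        int b = 200;
        System.out.println("a = " + a + ", b = " + b);
    }
}

Output: a = 100, b = 200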
1. This data type is mostly used for huge numbers where the int type is not large enough to store the desired value.
2. A long data type is a 64-bit signed two’s complement integer.
3. The default memory size allocated to this data type is 64 bits, i.e. 8 bytes, and the default value is 0.
4. It has a wide range, from -9,223,372,036,854,775,808 (-2^63) to 9,223,372,036,854,775,807 (2^63 - 1). Long data type is useful when big whole numbers are needed.
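The declaration discussed in the next paragraph is missing from this copy; it presumably read along these lines:

long num = -2334456L; // the L suffix marks the literal as a long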
Here, -2334456 has been stored in num with type long. The L suffix tells the compiler to treat it as a long value and allot 8 bytes to it.
Let’s create a program where we will calculate the distance traveled by light in 1000 days using long data type. Let’s see the following source code.
Program source code 4:
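The original listing is missing. Here is a minimal sketch of such a calculation (the speed of light is approximated as 186,000 miles per second; all names are assumptions):

public class LightDistance {
    public static void main(String[] args) {
        int lightSpeed = 186000;              // miles per second (approximate)
        long days = 1000;
        long seconds = days * 24 * 60 * 60;   // 86,400,000 seconds in 1000 days
        long distance = lightSpeed * seconds; // about 1.6 x 10^13 miles, far beyond int's range
        System.out.println("In " + days + " days light travels about " + distance + " miles.");
    }
}

Output: In 1000 days light travels about 16070400000000 miles.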
It is clear from the above example program that the distance value could not have been held in an int variable.
Table 1: Size and Range of Integer Data types
|Type|Size|Minimum value|Maximum value|
|byte|One byte|-128|127|
|short|Two bytes|-32,768|32,767|
|int|Four bytes|-2,147,483,648|2,147,483,647|
|long|Eight bytes|-9,223,372,036,854,775,808|9,223,372,036,854,775,807|
Floating Point Types
Floating-point types are useful to hold numbers containing a decimal point or fractional part. For example, 3.14, -2.567, 0.00034, etc. are called floating-point numbers. There are two kinds of floating-point types: float and double.
1. A float data type is used to represent the decimal number which can hold 6 to 7 decimal digits.
2. It is used to save the memory in large arrays of floating-point numbers.
3. The float data type is a single-precision 32-bit IEEE 754 floating-point.
4. The default memory size allocated for this data type is 32 bits i.e 4 bytes and default value is 0.0f.
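The snippet that the next paragraph refers to is missing; it presumably declared a float literal with the f suffix, along these lines (the variable name and value are assumptions):

float num = 10.5f; // without the trailing f, 10.5 would be a double literal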
Here, if you do not write “f”, the compiler will consider the literal a double and allot 8 bytes, and you will get the error “Type mismatch: cannot convert from double to float”. But if we use f, it is treated as a float value and only 4 bytes are allotted.
1. A double data type is used to represent decimal numbers accurately, up to about 15 decimal digits.
2. The double data type is a double-precision 64-bit IEEE 754 floating-point.
3. Memory size is 64 bits, i.e. 8 bytes, and the default value is 0.0d.
double distance=1.50e9; // Here, e represents “times 10 to the power of”. Hence, 1.50e9 means 1.50 × 10^9. This is the scientific notation for representing numbers.
Let’s make a program where we will use double variables to calculate area of circle.
Program source code 5:
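The listing is missing from this copy. A minimal sketch using double variables (the radius value is an assumption):

public class CircleArea {
    public static void main(String[] args) {
        double radius = 4.5;
        double area = Math.PI * radius * radius; // area = pi * r^2
        System.out.println("Area of the circle = " + area);
    }
}

Output: Area of the circle = 63.61725123519331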
Table 2: Size and Range of Floating point Data types
|Type|Size|Minimum value|Maximum value|
|float|Four bytes|approx. 1.4e-45|approx. 3.4e38|
|double|Eight bytes|approx. 4.9e-324|approx. 1.8e308|
Character data type
1. A char data type is mainly used to store a single character like P, a, b, z, x, etc.
2. It is a single 16-bit Unicode character.
3. Memory size taken by a single char is 2 bytes.
4. It can represent a range of 0 to 65,535, i.e. 65,536 (2^16) characters.
5. The default value for char is ‘\u0000’, the null character (it displays as a blank). Here, \u indicates a Unicode escape sequence.
Let’s take an example program based on the character data type.
Program source code 6:
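The listing is missing here. A minimal reconstruction consistent with the explanation that follows (only ch1 and ch3 are mentioned in the text, so only they appear; the class name is an assumption):

public class CharDemo {
    public static void main(String[] args) {
        char ch1 = 88;  // character code 88 is the letter 'X'
        char ch3 = 'A';
        ch3++;          // char supports arithmetic: 'A' + 1 gives 'B'
        System.out.println("ch1 = " + ch1 + ", ch3 = " + ch3);
    }
}

Output: ch1 = X, ch3 = B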
In the above example program, ch1 is assigned the value 88, which is the character code (the ASCII value) for the letter X. ch3 is assigned the value ‘A’ and then incremented, so ch3 will now store ‘B’, the next character in the sequence.
Boolean Data types
1. boolean data type represents one bit of information: either true or false. That is, there are only two possible values, true or false. How much storage the JVM uses internally for a boolean is implementation-dependent.
2. It is generally used to test a particular conditional statement during the execution of program.
3. The Java language specification does not define an exact size for the boolean data type; in practice, most JVMs use one byte for a boolean field.
4. Default value is false.
In all the above examples, we assigned a value to the variable, and the assigned value is printed as output. If you do not assign a value to a variable (a field), the JVM will assign the default value, and the default value will be printed. Let’s see the program source code.
Program source code 7:
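The listing is missing; here is a minimal sketch that prints default values. Note that defaults apply to fields, not to local variables, so the variables are declared at class level (all names are assumptions):

public class DefaultValues {
    static byte b;       // defaults to 0
    static int i;        // defaults to 0
    static double d;     // defaults to 0.0
    static char c;       // defaults to '\u0000', which prints as an invisible character
    static boolean flag; // defaults to false

    public static void main(String[] args) {
        System.out.println(b + " " + i + " " + d + " " + flag);
    }
}

Output: 0 0 0.0 false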
How much memory does a boolean actually take?
The Java language specification deliberately leaves the size of boolean undefined, so it depends on the JVM implementation. In practice, most JVMs store a boolean field or array element in one byte, while a boolean local variable occupies an int-sized slot on the operand stack. Conceptually, only one bit of information is stored: one for true and zero for false.
Different ways to initialize values and output
1. int a=10; // Initialization.
2. int a, b, c; // Declaration only; as fields, a, b and c default to 0.
Output: 0, 0, 0
3. int a=20, b, c;
Output: 20, 0, 0
4. int a=10, b=20, c;
Output: 10, 20, 0
5. int a=10, b=20, c=30;
Output: 10, 20, 30
We hope this tutorial has covered almost all the important points related to primitive data types in Java with example programs, and that you have understood the topic clearly and enjoyed it. In the next tutorial, we will learn about non-primitive data types in Java.
Spotting patterns can be an important first step - explaining why it is appropriate to generalise is the next step, and often the most interesting and important.
Pick a square within a multiplication square and add the numbers on each diagonal. What do you notice?
Semicircles are drawn on the sides of a rectangle ABCD. A circle passing through points ABCD carves out four crescent-shaped regions. Prove that the sum of the areas of the four crescents is equal to. . . .
You can work out the number someone else is thinking of as follows. Ask a friend to think of any natural number less than 100. Then ask them to tell you the remainders when this number is divided by. . . .
Pick the number of times a week that you eat chocolate. This number must be more than one but less than ten.
Multiply this number by 2. Add 5 (for Sunday). Multiply by 50... Can you explain why it. . . .
This shape comprises four semi-circles. What is the relationship between the area of the shaded region and the area of the circle on AB as diameter?
Find the area of the annulus in terms of the length of the chord which is tangent to the inner circle.
Points A, B and C are the centres of three circles, each one of which touches the other two. Prove that the perimeter of the triangle ABC is equal to the diameter of the largest circle.
Make an eight by eight square, the layout is the same as a chessboard. You can print out and use the square below. What is the area of the square? Divide the square in the way shown by the red dashed. . . .
Do you know how to find the area of a triangle? You can count the squares. What happens if we turn the triangle on end? Press the button and see. Try counting the number of units in the triangle now. . . .
A little bit of algebra explains this 'magic'. Ask a friend to pick 3 consecutive numbers and to tell you a multiple of 3. Then ask them to add the four numbers and multiply by 67, and to tell you. . . .
In how many ways can you arrange three dice side by side on a surface so that the sum of the numbers on each of the four faces (top, bottom, front and back) is equal?
What happens to the perimeter of triangle ABC as the two smaller circles change size and roll around inside the bigger circle?
When number pyramids have a sequence on the bottom layer, some interesting patterns emerge...
Prove that, given any three parallel lines, an equilateral triangle always exists with one vertex on each of the three lines.
This is the second article on right-angled triangles whose edge lengths are whole numbers.
Caroline and James pick sets of five numbers. Charlie chooses three of them that add together to make a multiple of three. Can they stop him?
Euler discussed whether or not it was possible to stroll around Koenigsberg crossing each of its seven bridges exactly once. Experiment with different numbers of islands and bridges.
You have been given nine weights, one of which is slightly heavier than the rest. Can you work out which weight is heavier in just two weighings of the balance?
Take any whole number between 1 and 999, add the squares of the digits to get a new number. Make some conjectures about what happens in general.
Imagine we have four bags containing a large number of 1s, 4s, 7s and 10s. What numbers can we make?
This is an interactivity in which you have to sort the steps in the completion of the square into the correct order to prove the formula for the solutions of quadratic equations.
These formulae are often quoted, but rarely proved. In this article, we derive the formulae for the volumes of a square-based pyramid and a cone, using relatively simple mathematical concepts.
Can you discover whether this is a fair game?
There are four children in a family, two girls, Kate and Sally, and two boys, Tom and Ben. How old are the children?
Liam's house has a staircase with 12 steps. He can go down the steps one at a time or two at time. In how many different ways can Liam go down the 12 steps?
This article discusses how every Pythagorean triple (a, b, c) can be illustrated by a square and an L shape within another square. You are invited to find some triples for yourself.
The diagonal of a square intersects the line joining one of the unused corners to the midpoint of the opposite side. What do you notice about the line segments produced?
The first of two articles on Pythagorean Triples which asks how many right angled triangles can you find with the lengths of each side exactly a whole number measurement. Try it!
Imagine we have four bags containing numbers from a sequence. What numbers can we make now?
What is the area of the quadrilateral APOQ? Working on the building blocks will give you some insights that may help you to work it out.
Choose a couple of the sequences. Try to picture how to make the next, and the next, and the next... Can you describe your reasoning?
If you can copy a network without lifting your pen off the paper and without drawing any line twice, then it is traversable. Decide which of these diagrams are traversable.
Show that if you add 1 to the product of four consecutive numbers the answer is ALWAYS a perfect square.
How many pairs of numbers can you find that add up to a multiple of 11? Do you notice anything interesting about your results?
Can you make sense of these three proofs of Pythagoras' Theorem?
Arrange the numbers 1 to 16 into a 4 by 4 array. Choose a number. Cross out the numbers on the same row and column. Repeat this process. Add up your four numbers. Why do they always add up to 34?
What fractions can you divide the diagonal of a square into by simple folding?
Four identical right angled triangles are drawn on the sides of a square. Two face out, two face in. Why do the four vertices marked with dots lie on one line?
The largest square which fits into a circle is ABCD and EFGH is a square with G and H on the line CD and E and F on the circumference of the circle. Show that AB = 5EF. Similarly the largest. . . .
Some puzzles requiring no knowledge of knot theory, just a careful inspection of the patterns. A glimpse of the classification of knots and a little about prime knots, crossing numbers and. . . .
Find the largest integer which divides every member of the following sequence: 1^5-1, 2^5-2, 3^5-3, ... n^5-n.
Try to solve this very difficult problem and then study our two suggested solutions. How would you use your knowledge to try to solve variants on the original problem?
In this 7-sandwich: 7 1 3 1 6 4 3 5 7 2 4 6 2 5 there are 7 numbers between the 7s, 6 between the 6s etc. The article shows which values of n can make n-sandwiches and which cannot.
An article which gives an account of some properties of magic squares.
Advent Calendar 2011 - a mathematical activity for each day during the run-up to Christmas.
Find the smallest positive integer N such that N/2 is a perfect cube, N/3 is a perfect fifth power and N/5 is a perfect seventh power.
Take any prime number greater than 3, square it and subtract one. Working on the building blocks will help you to explain what is special about your results.
Imagine two identical cylindrical pipes meeting at right angles and think about the shape of the space which belongs to both pipes. Early Chinese mathematicians called this shape the mouhefanggai.
If you know the sizes of the angles marked with coloured dots in this diagram, which angles can you find by calculation?
Adjectives are words that modify nouns or pronouns. Adjectives denote the quality, quantity, number, state, etc. of the nouns or pronouns they qualify and add something to their meaning. They add beauty and sweetness to the language. Words like white, black, sweet, bitter, big, small, happy, sad, one, two, first, second, some, any, etc. are some examples of adjectives. Adjectives can be classified into the following 12 types:
1. Proper Adjectives: Proper adjectives are derived from proper nouns. Proper adjectives are always capitalized. Examples of proper adjectives include British, American, Indian, Islamic, Vedic, Latin, Greek etc. Most English adjectives are common adjectives and need no capitalisation.
2. Qualitative Adjectives or Adjectives of Quality:
Qualitative adjectives or adjectives of quality describe the quality or state of persons, places, things etc. Tall, big, honest, poor, rich, happy, sad, good, bad, old, young, angry etc are the words that describe the quality or state of nouns. So they are adjectives of quality. Most adjectives of quality can be divided into three sub types:
a. Positive Adjectives or Adjectives of Positive Degree:
When qualitative adjectives simply describe someone or something without making any comparison, they are called positive adjectives or adjectives of positive degree.
Example: Good, tall, small, happy, sad, honest, long, beautiful etc.
b. Comparative Adjectives or Adjectives of Comparative Degree:
When the qualitative adjectives not only describe but make a comparison between two persons, two things or two groups, they are called comparative adjectives or adjectives of comparative degree. Qualitative adjectives of one syllable form their comparatives by adding -er, and those of more than one syllable form their comparatives by placing more or less before them. Example:
This book is better than that.
He is taller than I.
Their house is smaller than ours.
He is more honest than his friends. etc.
Better, taller, smaller, more honest etc are the comparative forms of good, tall, small and honest respectively.
c. Superlative Adjectives or Adjectives of Superlative Degree:
When more than two persons or things are compared, superlative adjectives are used. Qualitative adjectives of one syllable form their superlatives by adding -est. Qualitative adjectives of more than one syllable form their superlatives by placing most or least before them.
He is the tallest boy in the class.
He was the greatest of all the teachers.
The Brahmaputra is the biggest river in India.
She is the most beautiful girl in the city.
You don't have even the least knowledge of English.
3. Quantitative Adjectives or Adjectives of Quantity and Numeral Adjectives or Adjectives of Number:
Adjectives of quantity or number denote the quantity or number of nouns. They answer how many or how much about nouns. Adjectives like some, any, great etc may denote either quantity or number depending on the nouns they qualify. If the nouns are countable, some, any and great denote the number. If the nouns are uncountable, they denote the quantity. Little and much are adjectives of quantity. They never indicate the number. Some, any, great, much, little etc are also called indefinite adjectives because they don't show the fixed amount or number of anything. Some adjectives of number are called definite numeral adjectives because they show the fixed number like one, two, three, first, second, third etc. Definite numerals are again divided into cardinal and ordinal adjectives. One, two, three etc are examples of cardinals. They show the fixed number. First, second, third etc are called ordinals. They show the order of persons or things.
Examples of Quantitative and Numeral Adjectives:
There is little milk in the pot.
I don't have any money today.
Give me some water.
He took great pains to do the work.
Few people were present there.
I have bought many books.
Give me only one glass of water.
Bring two bottles of water.
He is the first boy of the class.
This is the second time he met me.
4. Possessive Adjectives:
Possessive Adjectives are derived from personal pronouns. My, your, her, their, etc are called possessive adjectives for they modify the nouns they precede. These adjectives indicate the relationship, possession or ownership of someone or something.
This is my book.
It is their house.
What is your name?
She came with her mother etc.
5. Article Adjectives:
The indefinite articles 'a' and 'an' and the definite article 'the' are also a type of adjectives as they modify the nouns they precede.
6. Distributive Adjectives:
Distributive adjectives refer to all the members of a group as individuals. That is, they take into consideration the whole group but talk of only one member of the group. Each, every, either and neither are distributive adjectives.
Each boy was given a prize.
Every child wants love and attention from the parents.
Either book will do.
I'll join neither party.
7. Demonstrative Adjectives:
Demonstrative adjectives point out the persons, places or things they modify. There are four demonstrative adjectives: this, that, these and those. They denote the nearness or remoteness of persons or things.
This book is very useful to children.
Those mangoes are not sweet.
8. Interrogative Adjectives:
Interrogative adjectives are used to ask questions. There are three interrogative adjectives: what, which and whose. Other interrogative words don't modify nouns. So they are pronouns.
What book do you want?
Which shop are you talking about?
Whose pen is this?
9. Emphasising Adjectives: Very and own are emphasising adjectives. Own emphasizes nouns which are already emphasized, that is, modified by possessive adjectives. Very emphasizes nouns which may or may not be otherwise modified.
I saw it with my own eyes.
I have written the letter with my own hand.
This is the very book I want.
I live in this very house.
10. Coordinate and Non Coordinate Adjectives:
When two or more adjectives qualifying a noun can be separated by commas or and, they are called coordinate adjectives. When two or more adjectives qualifying a noun cannot be separated by commas or and, they are called non coordinate adjectives. The main difference between coordinate and non coordinate adjectives is that the order of coordinate adjectives can be changed without losing the intended meaning of the sentence but that of non coordinate adjectives cannot be changed without losing the meaning. Moreover commas and the word 'and' can be used in coordinate adjectives. Let us look at some examples of coordinate adjectives:
She is a talented, beautiful girl.
He is a happy, rich man.
The adjectives can also be arranged as
She is a beautiful, talented girl.
He is a rich, happy man.
The word 'and' can also be used as
She is a talented and beautiful girl.
He is a happy and rich man.
Still they make sense. Now look at the following examples:
He has taken my two books.
I have bought two black shirts.
We cannot change the order of these adjectives or put commas or 'and' between them. The intended meaning of the adjectives will be lost.
11. Compound Adjectives:
When two or more words combine to function as single adjectives, they are called compound adjectives.
Dr Bhupen Hazarika is a well-known musician.
He is a part-time worker.
He is a kind-hearted man.
This is a gluten-free diet.
Tagore is a world-famous personality.
She is very open-minded.
The compound words well-known, part-time, kind-hearted, gluten-free, world-famous and open-minded are used as adjectives here.
12. Attributive and Predicative Adjectives:
Adjectives placed before the nouns they qualify are called attributive adjectives.
Good boy, big house, red pen etc.
Adjectives placed after the verbs are called predicative adjectives.
He is angry.
She is beautiful.
We are happy etc.
Some adjectives can be used only predicatively.
I am afraid.
She is alone etc.
Afraid, alone etc are never placed before the nouns.
It has to be remembered that some adjectives can belong to more than one category. For example, some and any belong to both quantitative and numeral adjectives. Adjectives like red, blue, small, big may sometimes function as qualitative and sometimes as coordinate adjectives. Moreover, some adjectives like some, any, each, every, this, that, etc. sometimes function as pronouns as well.
When astronauts sent back to Earth the iconic Blue Marble image in 1972, the picture galvanized the nascent environmental movement, demonstrating to the public how "tiny, vulnerable, and incredibly lonely" our planet is.
Ironically, a society capable of taking that photo is also one that is capable of grave environmental damage. As engineer Laurent Pambaguian put it to me, we're "living at a time when life is comfortable and we have not destroyed the planet yet.” That time may not be long.
Not your average, terrestrial environmentalist, Pambaguian is part of the European Space Agency’s Clean Space Initiative, which claims that “reaching for the sky leaves footprints on the ground.” It seeks to understand the environmental impact of space exploration, then find ways to reduce it.
“Before, we didn’t take too much care. ‘There is plenty of room in space,’ we thought. Then we realized the room was very crowded,” says Pambaguian. NASA claims that more than 500,000 pieces of debris, ranging from the size of a marble to eight tons, are in orbit. These scattered fragments travel at speeds up to 17,500 mph. In the forthcoming movie Gravity, a piece of satellite debris destroys a shuttle, but even much smaller objects such as chips of paint could damage a satellite, space station, or a spacecraft carrying astronauts.
Left uncollected, the debris stands to collide, creating clouds of fragments that would lead to an irreversible pollution problem. A 2009 study performed by all the major space agencies—including ESA, NASA, and Roscosmos—revealed that even if no further space launches occur, the amount of orbital debris will continue to increase. More than simply littering Earth’s low orbits, we would be hindering our ability to safely travel beyond it.
The only way to preserve key orbits is to remove the debris, like picking up scraps of refuse blowing down a highway. Debris experts recommend removing at least five objects every year for the next 50 years. The approach shouldn’t leave debris behind as it cleans up the sky.
“It’s an extremely challenging mission,” says Luisa Innocenti, the head of the Clean Space Office. “Getting close to the debris is dangerous because you need to maneuver around the uncontrolled object.”
This means developing a guidance and navigation control system where chasers stay close to the targeted debris. A capturing mechanism—a big net, a harpoon, a robotic arm, or a giant tentacle that, amid the stars, would clamp down on the object—would collect the debris and return it to Earth. The goal is to have a mission in 2022.
Innocenti also emphasizes the need to design satellites that won’t become debris once they reach the end of their expected five-to-10 year life spans. Most satellites are equipped with collision avoidance maneuvers—until the last bead of propellant burns away and their uncontrolled orbits begin. The Clean Space Initiative is developing technology for satellites to return to the Earth’s atmosphere where they can safely burn down (called Design for Demise).
“We produce very few spacecraft, so we believe our impact is very limited, but we also believe it is our moral obligation to make it better, whatever it is,” says Innocenti. The initiative is also developing new technology, such as non-toxic propellant, since the current fuel, hydrazine, is carcinogenic. Innocenti claims that being green does not mean being more expensive, it means streamlining with less energy and materials.
NASA has also taken steps to minimize its impact on Earth and in space. The agency collaborates with the Department of Defense to characterize orbital debris. Their Sustainability Base, where emerging technologies are tested, is one of the greenest Federal buildings in the country. NASA claims it leaves virtually no carbon footprint. Engineers are also striving to prevent the cross-contamination of millions of microbes between Mars and Earth.
But according to Pambaguian, a large obstacle stands in the way. “We always have an enemy that at the same time is a friend—heritage.” If a piece of technology works the way it’s supposed to, why shouldn’t it be used again? Qualifying new technology can take years and cost millions of dollars. That means relying on technology that was designed before the awareness of the environmental effects of spacecraft, launchers, and propellant emerged. And because space programs take years to develop, “state of the art” technology used to conceive a spacecraft can become obsolete the day it’s launched.
“We have to convince people that this change of technology is opening up new possibilities and new opportunities,” says Pambaguian. Already, new legislative demands and regulations such as the European Commission’s regulation on the Registration, Evaluation and Authorization of Chemicals could threaten to impose limitations on materials that the space industry currently considers essential. By pioneering a more eco-friendly approach, space agencies can be on the frontier of not just space development, but human exploration.
Every species has at one time explored in search of food and habitat for the sake of survival. And by the time an asteroid collides with Earth or the sun consumes our planet, leaving nothing but a white dwarf star some billions of years from now, the discoveries from our quests into space will be vital to avoiding our extinction.
So it is that space exploration is a necessary chapter in the story of our long-term survival. Scientists have even studied the chemistry on Venus and learned about chemicals destroying the stratospheric ozone. “It’s a fantastic objective,” says Pambaguian. “But if you prepare the technology, move to another planet, and make the same mistakes, I don’t see what you have gained.”
[Figure: silhouettes and waist circumferences representing normal, overweight, and obese]
Obesity is a medical condition in which excess body fat has accumulated to the extent that it may have a negative effect on health, leading to reduced life expectancy and/or increased health problems. In Western countries, people are considered obese when their body mass index (BMI), a measurement obtained by dividing a person's weight by the square of the person's height, exceeds 30 kg/m2, with the range 25-30 kg/m2 defined as overweight. Some East Asian countries use stricter criteria.
Obesity increases the likelihood of various diseases, particularly heart disease, type 2 diabetes, obstructive sleep apnea, certain types of cancer, and osteoarthritis. Obesity is most commonly caused by a combination of excessive food energy intake, lack of physical activity, and genetic susceptibility, although a few cases are caused primarily by genes, endocrine disorders, medications, or psychiatric illness. Evidence to support the view that some obese people eat little yet gain weight due to a slow metabolism is limited. On average, obese people have a greater energy expenditure than their thin counterparts due to the energy required to maintain an increased body mass.
Dieting and exercising are the main treatments for obesity. Diet quality can be improved by reducing the consumption of energy-dense foods, such as those high in fat and sugars, and by increasing the intake of dietary fiber. With a suitable diet, anti-obesity drugs may be taken to reduce appetite or decrease fat absorption. If diet, exercise, and medication are not effective, a gastric balloon may assist with weight loss, or surgery may be performed to reduce stomach volume and/or bowel length, leading to feeling full earlier and a reduced ability to absorb nutrients from food.
Obesity is a leading preventable cause of death worldwide, with increasing rates in adults and children. Authorities view it as one of the most serious public health problems of the 21st century. Obesity is stigmatized in much of the modern world (particularly in the Western world), though it was widely seen as a symbol of wealth and fertility at other times in history and still is in some parts of the world. In 2013, the American Medical Association classified obesity as a disease.
Obesity is a medical condition in which excess body fat has accumulated to the extent that it may have an adverse effect on health. It is defined by body mass index (BMI) and further evaluated in terms of fat distribution via the waist–hip ratio and total cardiovascular risk factors. BMI is closely related to both percentage body fat and total body fat.
In children, a healthy weight varies with age and sex. Obesity in children and adolescents is defined not as an absolute number but in relation to a historical normal group, such that obesity is a BMI greater than the 95th percentile. The reference data on which these percentiles were based date from 1963 to 1994, and thus have not been affected by the recent increases in weight.
|BMI (kg/m2)|Classification|
|30.0–35.0|class I obesity|
|35.0–40.0|class II obesity|
|≥ 40.0|class III obesity|
BMI is defined as the subject's weight divided by the square of their height and is calculated as

\mathrm{BMI} = \frac{m}{h^2},

where m and h are the subject's weight and height respectively.
BMI is usually expressed in kilograms per square metre, resulting when weight is measured in kilograms and height in metres. To convert from pounds per square inch multiply by 703 (kg/m2)/(lb/sq in).
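Since the calculation above is simple arithmetic, a short code sketch may make it concrete. Java is used to match the code elsewhere in this collection; the class and method names are illustrative assumptions, not from any cited source:

public class Bmi {
    // BMI from metric units: kilograms and metres
    static double bmiMetric(double weightKg, double heightM) {
        return weightKg / (heightM * heightM);
    }

    // BMI from pounds and inches, using the conversion factor 703
    static double bmiImperial(double weightLb, double heightIn) {
        return 703.0 * weightLb / (heightIn * heightIn);
    }

    public static void main(String[] args) {
        System.out.printf("%.1f%n", bmiMetric(85.0, 1.75));    // about 27.8, in the overweight range
        System.out.printf("%.1f%n", bmiImperial(187.0, 69.0)); // about 27.6
    }
}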
Some modifications to the WHO definitions have been made by particular bodies. The surgical literature breaks down "class III" obesity into further categories whose exact values are still disputed.
- Any BMI ≥ 35 or 40 kg/m2 is severe obesity.
- A BMI of ≥ 35 kg/m2 and experiencing obesity-related health conditions or ≥40–44.9 kg/m2 is morbid obesity.
- A BMI of ≥ 45 or 50 kg/m2 is super obesity.
As Asian populations develop negative health consequences at a lower BMI than Caucasians, some nations have redefined obesity; the Japanese have defined obesity as any BMI greater than 25 kg/m2 while China uses a BMI of greater than 28 kg/m2.
Effects on health
Excessive body weight is associated with various diseases, particularly cardiovascular diseases, diabetes mellitus type 2, obstructive sleep apnea, certain types of cancer, osteoarthritis and asthma. As a result, obesity has been found to reduce life expectancy.
Obesity is one of the leading preventable causes of death worldwide. Large-scale American and European studies have found that mortality risk is lowest at a BMI of 20–25 kg/m2 in non-smokers and at 24–27 kg/m2 in current smokers, with risk increasing along with changes in either direction. In Asians risk begins to increase between 22–25 kg/m2. A BMI above 32 kg/m2 has been associated with a doubled mortality rate among women over a 16-year period. In the United States obesity is estimated to cause 111,909 to 365,000 deaths per year, while 1 million (7.7%) of deaths in Europe are attributed to excess weight. On average, obesity reduces life expectancy by six to seven years, a BMI of 30–35 kg/m2 reduces life expectancy by two to four years, while severe obesity (BMI > 40 kg/m2) reduces life expectancy by ten years.
Obesity increases the risk of many physical and mental conditions. These comorbidities are most commonly shown in metabolic syndrome, a combination of medical disorders which includes: diabetes mellitus type 2, high blood pressure, high blood cholesterol, and high triglyceride levels.
Complications are either directly caused by obesity or indirectly related through mechanisms sharing a common cause such as a poor diet or a sedentary lifestyle. The strength of the link between obesity and specific conditions varies. One of the strongest is the link with type 2 diabetes. Excess body fat underlies 64% of cases of diabetes in men and 77% of cases in women.
Health consequences fall into two broad categories: those attributable to the effects of increased fat mass (such as osteoarthritis, obstructive sleep apnea, social stigmatization) and those due to the increased number of fat cells (diabetes, cancer, cardiovascular disease, non-alcoholic fatty liver disease). Increases in body fat alter the body's response to insulin, potentially leading to insulin resistance. Increased fat also creates a proinflammatory state, and a prothrombotic state.
[Table: obesity-related conditions grouped by medical field, including Endocrinology and Reproductive medicine, Gastrointestinal, Rheumatology and Orthopedics, and Urology and Nephrology; the individual condition entries did not survive extraction.]
Although the negative health consequences of obesity in the general population are well supported by the available evidence, health outcomes in certain subgroups seem to be improved at an increased BMI, a phenomenon known as the obesity survival paradox. The paradox was first described in 1999 in overweight and obese people undergoing hemodialysis, and has subsequently been found in those with heart failure and peripheral artery disease (PAD).
In people with heart failure, those with a BMI between 30.0 and 34.9 had lower mortality than those with a normal weight. This has been attributed to the fact that people often lose weight as they become progressively more ill. Similar findings have been made in other types of heart disease. People with class I obesity and heart disease do not have greater rates of further heart problems than people of normal weight who also have heart disease. In people with greater degrees of obesity, however, the risk of further cardiovascular events is increased. Even after cardiac bypass surgery, no increase in mortality is seen in the overweight and obese. One study found that the improved survival could be explained by the more aggressive treatment obese people receive after a cardiac event. Another found that if one takes into account chronic obstructive pulmonary disease (COPD) in those with PAD, the benefit of obesity no longer exists.
At an individual level, a combination of excessive food energy intake and a lack of physical activity is thought to explain most cases of obesity. A limited number of cases are due primarily to genetics, medical reasons, or psychiatric illness. In contrast, increasing rates of obesity at a societal level are felt to be due to an easily accessible and palatable diet, increased reliance on cars, and mechanized manufacturing.
A 2006 review identified ten other possible contributors to the recent increase of obesity: (1) insufficient sleep, (2) endocrine disruptors (environmental pollutants that interfere with lipid metabolism), (3) decreased variability in ambient temperature, (4) decreased rates of smoking, because smoking suppresses appetite, (5) increased use of medications that can cause weight gain (e.g., atypical antipsychotics), (6) proportional increases in ethnic and age groups that tend to be heavier, (7) pregnancy at a later age (which may cause susceptibility to obesity in children), (8) epigenetic risk factors passed on generationally, (9) natural selection for higher BMI, and (10) assortative mating leading to increased concentration of obesity risk factors (this would increase the number of obese people by increasing population variance in weight). While there is substantial evidence supporting the influence of these mechanisms on the increased prevalence of obesity, the evidence is still inconclusive, and the authors state that these are probably less influential than the ones discussed in the previous paragraph.
Dietary energy supply per capita varies markedly between different regions and countries. It has also changed significantly over time. From the early 1970s to the late 1990s the average food energy available per person per day (the amount of food bought) increased in all parts of the world except Eastern Europe. The United States had the highest availability with 3,654 calories (15,290 kJ) per person in 1996. This increased further in 2003 to 3,754 calories (15,710 kJ). During the late 1990s Europeans had 3,394 calories (14,200 kJ) per person, in the developing areas of Asia there were 2,648 calories (11,080 kJ) per person, and in sub-Saharan Africa people had 2,176 calories (9,100 kJ) per person. Total food energy consumption has been found to be related to obesity.
The widespread availability of nutritional guidelines has done little to address the problems of overeating and poor dietary choice. From 1971 to 2000, obesity rates in the United States increased from 14.5% to 30.9%. During the same period, an increase occurred in the average amount of food energy consumed. For women, the average increase was 335 calories (1,400 kJ) per day (1,542 calories (6,450 kJ) in 1971 and 1,877 calories (7,850 kJ) in 2004), while for men the average increase was 168 calories (700 kJ) per day (2,450 calories (10,300 kJ) in 1971 and 2,618 calories (10,950 kJ) in 2004). Most of this extra food energy came from an increase in carbohydrate consumption rather than fat consumption. The primary sources of these extra carbohydrates are sweetened beverages, which now account for almost 25 percent of daily food energy in young adults in America, and potato chips. Consumption of sweetened drinks such as soft drinks, fruit drinks, iced tea, and energy and vitamin water drinks is believed to be contributing to the rising rates of obesity and to an increased risk of metabolic syndrome and type 2 diabetes.
As societies become increasingly reliant on energy-dense, big-portion fast-food meals, the association between fast-food consumption and obesity becomes more concerning. In the United States, consumption of fast-food meals tripled, and food energy intake from these meals quadrupled, between 1977 and 1995.
Agricultural policy and techniques in the United States and Europe have led to lower food prices. In the United States, subsidization of corn, soy, wheat, and rice through the U.S. farm bill has made the main sources of processed food cheap compared to fruits and vegetables. Calorie count laws and nutrition facts labels attempt to steer people toward making healthier food choices, including awareness of how much food energy is being consumed.
Obese people consistently under-report their food consumption as compared to people of normal weight. This is supported both by tests of people carried out in a calorimeter room and by direct observation.
A sedentary lifestyle plays a significant role in obesity. Worldwide there has been a large shift towards less physically demanding work, and currently at least 30% of the world's population gets insufficient exercise. This is primarily due to increasing use of mechanized transportation and a greater prevalence of labor-saving technology in the home. In children, there appear to be declines in levels of physical activity due to less walking and physical education. World trends in active leisure time physical activity are less clear. The World Health Organization indicates people worldwide are taking up less active recreational pursuits, while a study from Finland found an increase and a study from the United States found leisure-time physical activity has not changed significantly.
In both children and adults, there is an association between television viewing time and the risk of obesity. A review found 63 of 73 studies (86%) showed an increased rate of childhood obesity with increased media exposure, with rates increasing proportionally to time spent watching television.
Like many other medical conditions, obesity is the result of an interplay between genetic and environmental factors. Polymorphisms in various genes controlling appetite and metabolism predispose to obesity when sufficient food energy is present. As of 2006, more than 41 of these sites on the human genome have been linked to the development of obesity when a favorable environment is present. People with two copies of the FTO gene (fat mass and obesity associated gene) have been found on average to weigh 3–4 kg more and have a 1.67-fold greater risk of obesity compared with those without the risk allele. The proportion of the variation in BMI between people that is due to genetics varies, depending on the population examined, from 6% to 85%.
Obesity is a major feature in several syndromes, such as Prader–Willi syndrome, Bardet–Biedl syndrome, Cohen syndrome, and MOMO syndrome. (The term "non-syndromic obesity" is sometimes used to exclude these conditions.) In people with early-onset severe obesity (defined by an onset before 10 years of age and body mass index over three standard deviations above normal), 7% harbor a single point DNA mutation.
Studies that have focused on inheritance patterns rather than on specific genes have found that 80% of the offspring of two obese parents were also obese, in contrast to less than 10% of the offspring of two parents who were of normal weight. Different people exposed to the same environment have different risks of obesity due to their underlying genetics.
The thrifty gene hypothesis postulates that, due to dietary scarcity during human evolution, people are prone to obesity. Their ability to take advantage of rare periods of abundance by storing energy as fat would be advantageous during times of varying food availability, and individuals with greater adipose reserves would be more likely to survive famine. This tendency to store fat, however, would be maladaptive in societies with stable food supplies. This theory has received various criticisms, and other evolutionarily-based theories such as the drifty gene hypothesis and the thrifty phenotype hypothesis have also been proposed.
Certain physical and mental illnesses, and the pharmaceutical substances used to treat them, can increase the risk of obesity. Medical illnesses that increase obesity risk include several rare genetic syndromes (listed above) as well as some congenital or acquired conditions: hypothyroidism, Cushing's syndrome, growth hormone deficiency, and the eating disorders binge eating disorder and night eating syndrome. However, obesity is not regarded as a psychiatric disorder, and it is therefore not listed in the DSM-IV-TR as a psychiatric illness. The risk of overweight and obesity is higher in patients with psychiatric disorders than in persons without psychiatric disorders.
Certain medications may cause weight gain or changes in body composition; these include insulin, sulfonylureas, thiazolidinediones, atypical antipsychotics, antidepressants, steroids, certain anticonvulsants (phenytoin and valproate), pizotifen, and some forms of hormonal contraception.
While genetic influences are important to understanding obesity, they cannot explain the current dramatic increases seen within specific countries or globally. Though it is accepted that energy consumption in excess of energy expenditure leads to obesity on an individual basis, the cause of the shifts in these two factors at the societal scale is much debated. There are a number of theories as to the cause, but most researchers believe it is a combination of various factors.
The correlation between social class and BMI varies globally. A review in 1989 found that in developed countries women of a high social class were less likely to be obese. No significant differences were seen among men of different social classes. In the developing world, women, men, and children from high social classes had greater rates of obesity. An update of this review carried out in 2007 found the same relationships, but they were weaker. The decrease in strength of correlation was felt to be due to the effects of globalization. Among developed countries, levels of adult obesity, and percentage of teenage children who are overweight, are correlated with income inequality. A similar relationship is seen among US states: more adults, even in higher social classes, are obese in more unequal states.
Many explanations have been put forth for the associations between BMI and social class. It is thought that in developed countries the wealthy are able to afford more nutritious food, are under greater social pressure to remain slim, and have more opportunities along with greater expectations for physical fitness. In developing countries the ability to afford food, high energy expenditure from physical labor, and cultural values favoring a larger body size are believed to contribute to the observed patterns. Attitudes toward body weight held by people in one's life may also play a role in obesity. A correlation in BMI changes over time has been found among friends, siblings, and spouses. Stress and perceived low social status appear to increase the risk of obesity.
Smoking has a significant effect on an individual's weight. Those who quit smoking gain an average of 4.4 kilograms (9.7 lb) for men and 5.0 kilograms (11.0 lb) for women over ten years. However, changing rates of smoking have had little effect on the overall rates of obesity.
In the United States the number of children a person has is related to their risk of obesity. A woman's risk increases by 7% per child, while a man's risk increases by 4% per child. This could be partly explained by the fact that having dependent children decreases physical activity in Western parents.
In the developing world, urbanization is playing a role in increasing rates of obesity. In China overall rates of obesity are below 5%; however, in some cities rates of obesity are greater than 20%.
Malnutrition in early life is believed to play a role in the rising rates of obesity in the developing world. Endocrine changes that occur during periods of malnutrition may promote the storage of fat once more food energy becomes available.
Consistent with cognitive epidemiological data, numerous studies confirm that obesity is associated with cognitive deficits. Whether obesity causes cognitive deficits, or vice versa, is at present unclear.
The study of the effect of infectious agents on metabolism is still in its early stages. Gut flora has been shown to differ between lean and obese humans, and there is an indication that these differences can affect metabolic potential. This apparent alteration of metabolic potential is believed to confer a greater capacity to harvest energy, contributing to obesity. Whether these differences are the direct cause or the result of obesity has yet to be determined unequivocally.
An association between viruses and obesity has been found in humans and several different animal species. The amount that these associations may have contributed to the rising rate of obesity is yet to be determined.
There are many possible pathophysiological mechanisms involved in the development and maintenance of obesity. This field of research was almost unapproached until the leptin gene was discovered in 1994 by J. M. Friedman's laboratory. These investigators postulated that leptin was a satiety factor: in the ob/ob mouse, mutations in the leptin gene resulted in the obese phenotype, opening the possibility of leptin therapy for human obesity. However, soon thereafter J. F. Caro's laboratory could not detect any mutations in the leptin gene in humans with obesity. On the contrary, leptin expression was increased, suggesting the possibility of leptin resistance in human obesity. Since this discovery, many other hormonal mechanisms have been elucidated that participate in the regulation of appetite and food intake, storage patterns of adipose tissue, and development of insulin resistance. Since leptin's discovery, ghrelin, insulin, orexin, PYY 3-36, cholecystokinin, and adiponectin, as well as many other mediators, have been studied. The adipokines are mediators produced by adipose tissue; their action is thought to modify many obesity-related diseases.
Leptin and ghrelin are considered to be complementary in their influence on appetite, with ghrelin produced by the stomach modulating short-term appetitive control (i.e. to eat when the stomach is empty and to stop when the stomach is stretched). Leptin is produced by adipose tissue to signal fat storage reserves in the body, and mediates long-term appetitive controls (i.e. to eat more when fat stores are low and less when fat stores are high). Although administration of leptin may be effective in a small subset of obese individuals who are leptin deficient, most obese individuals are thought to be leptin resistant and have been found to have high levels of leptin. This resistance is thought to explain in part why administration of leptin has not been shown to be effective in suppressing appetite in most obese people.
While leptin and ghrelin are produced peripherally, they control appetite through their actions on the central nervous system. In particular, they and other appetite-related hormones act on the hypothalamus, a region of the brain central to the regulation of food intake and energy expenditure. There are several circuits within the hypothalamus that contribute to its role in integrating appetite, the melanocortin pathway being the most well understood. The circuit begins with an area of the hypothalamus, the arcuate nucleus, that has outputs to the lateral hypothalamus (LH) and ventromedial hypothalamus (VMH), the brain's feeding and satiety centers, respectively.
The arcuate nucleus contains two distinct groups of neurons. The first group coexpresses neuropeptide Y (NPY) and agouti-related peptide (AgRP) and has stimulatory inputs to the LH and inhibitory inputs to the VMH. The second group coexpresses pro-opiomelanocortin (POMC) and cocaine- and amphetamine-regulated transcript (CART) and has stimulatory inputs to the VMH and inhibitory inputs to the LH. Consequently, NPY/AgRP neurons stimulate feeding and inhibit satiety, while POMC/CART neurons stimulate satiety and inhibit feeding. Both groups of arcuate nucleus neurons are regulated in part by leptin. Leptin inhibits the NPY/AgRP group while stimulating the POMC/CART group. Thus a deficiency in leptin signaling, either via leptin deficiency or leptin resistance, leads to overfeeding and may account for some genetic and acquired forms of obesity.
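The wiring just described is easier to see laid out as signs. Purely as an illustration of the circuit's logic (a toy sketch, not a physiological model; the only facts taken from the text are the stimulatory and inhibitory connections), in Python:

```python
# Toy sketch of the melanocortin-pathway wiring described above.
# Only the signs of the connections are taken from the text:
# leptin inhibits NPY/AgRP neurons and stimulates POMC/CART neurons;
# NPY/AgRP excites the feeding center (LH) and inhibits the satiety
# center (VMH); POMC/CART does the reverse. Not a model of real
# neuronal dynamics.

def arcuate_outputs(leptin: float) -> dict:
    """Crude drive levels for a leptin signal normalized to [0, 1]."""
    npy_agrp = 1.0 - leptin    # inhibited by leptin
    pomc_cart = leptin         # stimulated by leptin
    return {
        "feeding (LH)": npy_agrp - pomc_cart,   # +NPY/AgRP, -POMC/CART
        "satiety (VMH)": pomc_cart - npy_agrp,  # +POMC/CART, -NPY/AgRP
    }

print(arcuate_outputs(0.1))  # low leptin signaling: feeding drive dominates
print(arcuate_outputs(0.9))  # high leptin signaling: satiety drive dominates
```

With low leptin signaling, whether from deficiency or resistance, the feeding drive dominates, which is the overfeeding outcome the paragraph above describes.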
The World Health Organization (WHO) predicts that overweight and obesity may soon replace more traditional public health concerns such as undernutrition and infectious diseases as the most significant cause of poor health. Obesity is a public health and policy problem because of its prevalence, costs, and health effects. The United States Preventive Services Task Force recommends screening for all adults followed by behavioral interventions in those who are obese. Public health efforts seek to understand and correct the environmental factors responsible for the increasing prevalence of obesity in the population. Solutions look at changing the factors that cause excess food energy consumption and inhibit physical activity. Efforts include federally reimbursed meal programs in schools, limiting direct junk food marketing to children, and decreasing access to sugar-sweetened beverages in schools. When constructing urban environments, efforts have been made to increase access to parks and to develop pedestrian routes.
Many countries and groups have published reports pertaining to obesity. In 1998, the first US Federal guidelines were published, titled "Clinical Guidelines on the Identification, Evaluation, and Treatment of Overweight and Obesity in Adults: The Evidence Report". In 2006 the Canadian Obesity Network published the "Canadian Clinical Practice Guidelines (CPG) on the Management and Prevention of Obesity in Adults and Children". This is a comprehensive evidence-based guideline to address the management and prevention of overweight and obesity in adults and children.
In 2004, the United Kingdom Royal College of Physicians, the Faculty of Public Health and the Royal College of Paediatrics and Child Health released the report "Storing up Problems", which highlighted the growing problem of obesity in the UK. The same year, the House of Commons Health Select Committee published its "most comprehensive inquiry [...] ever undertaken" into the impact of obesity on health and society in the UK and possible approaches to the problem. In 2006, the National Institute for Health and Clinical Excellence (NICE) issued a guideline on the diagnosis and management of obesity, as well as policy implications for non-healthcare organizations such as local councils. A 2007 report produced by Sir Derek Wanless for the King's Fund warned that unless further action was taken, obesity had the capacity to cripple the National Health Service financially.
Comprehensive approaches are being looked at to address the rising rates of obesity. The Obesity Policy Action (OPA) framework divides measures into 'upstream', 'midstream', and 'downstream' policies: 'upstream' policies look at changing society, 'midstream' policies try to alter individuals' behavior to prevent obesity, and 'downstream' policies try to treat currently afflicted people.
The main treatment for obesity consists of dieting and physical exercise. Diet programs may produce weight loss over the short term, but maintaining this weight loss is frequently difficult and often requires making exercise and a lower food energy diet a permanent part of a person's lifestyle. All types of low-carbohydrate and low-fat diets appear equally beneficial. The heart disease and diabetes risks associated with different diets also appear to be similar. Success rates of long-term weight loss maintenance with lifestyle changes are low, ranging from 2–20%. Dietary and lifestyle changes are effective in limiting excessive weight gain in pregnancy and improve outcomes for both the mother and the child. Intensive behavioral counseling is recommended in those who are both obese and have other risk factors for heart disease.
Three medications, orlistat (Xenical), lorcaserin (Belviq), and a combination of phentermine and topiramate (Qsymia), are currently available and have evidence for long-term use. Weight loss with orlistat is modest, an average of 2.9 kg (6.4 lb) at 1 to 4 years; its use is associated with high rates of gastrointestinal side effects, and concerns have been raised about negative effects on the kidneys. The other two medications are available in the United States but not in Europe. Lorcaserin results in an average weight loss of 3.1 kg (3% of body weight) greater than placebo over a year; however, it may increase heart valve problems. A combination of phentermine and topiramate is also somewhat effective; however, it may be associated with heart problems. There is no information on how these drugs affect longer-term complications of obesity such as cardiovascular disease or death.
The most effective treatment for obesity is bariatric surgery. Surgery for severe obesity is associated with long-term weight loss, improvement in obesity-related conditions, and decreased overall mortality. One study found a weight loss of between 14% and 25% (depending on the type of procedure performed) at 10 years, and a 29% reduction in all-cause mortality when compared to standard weight loss measures. Complications occur in about 17% of cases and reoperation is needed in 7% of cases. Due to its cost and risks, researchers are searching for other effective yet less invasive treatments, including devices that occupy space in the stomach.
In earlier historical periods obesity was rare and achievable only by a small elite, although it was already recognized as a problem for health. As prosperity increased in the Early Modern period, it affected increasingly larger groups of the population. In 1997 the WHO formally recognized obesity as a global epidemic. As of 2008 the WHO estimates that at least 500 million adults (greater than 10%) are obese, with higher rates among women than men. The rate of obesity also increases with age, at least up to 50 or 60 years old, and severe obesity in the United States, Australia, and Canada is increasing faster than the overall rate of obesity.
Once considered a problem only of high-income countries, obesity is now rising worldwide, affecting both the developed and developing world. These increases have been felt most dramatically in urban settings. The only remaining region of the world where obesity is not common is sub-Saharan Africa.
Obesity is from the Latin obesitas, which means "stout, fat, or plump". Ēsus is the past participle of edere (to eat), with ob (over) added to it. The Oxford English Dictionary documents its first usage in 1611 by Randle Cotgrave.
Ancient Greek medicine recognized obesity as a medical disorder, and recorded that the Ancient Egyptians saw it in the same way. Hippocrates wrote that "Corpulence is not only a disease itself, but the harbinger of others". The Indian surgeon Sushruta (6th century BCE) related obesity to diabetes and heart disorders, and recommended physical work to help cure it and its side effects. For most of human history, mankind struggled with food scarcity; obesity has thus historically been viewed as a sign of wealth and prosperity. It was common among high officials in Europe in the Middle Ages and the Renaissance, as well as in Ancient East Asian civilizations.
With the onset of the industrial revolution, it was realized that the military and economic might of nations was dependent on both the body size and strength of their soldiers and workers. Increasing the average body mass index from what is now considered underweight to what is now the normal range played a significant role in the development of industrialized societies. Height and weight thus both increased through the 19th century in the developed world. During the 20th century, as populations reached their genetic potential for height, weight began increasing much more than height, resulting in obesity. In the 1950s increasing wealth in the developed world decreased child mortality, but as body weight increased, heart and kidney disease became more common. During this period insurance companies realized the connection between weight and life expectancy and increased premiums for the obese.
Many cultures throughout history have viewed obesity as the result of a character flaw. The obesus or fat character in Greek comedy was a glutton and figure of mockery. During Christian times food was viewed as a gateway to the sins of sloth and lust. In modern Western culture, excess weight is often regarded as unattractive, and obesity is commonly associated with various negative stereotypes. People of all ages can face social stigmatization, and may be targeted by bullies or shunned by their peers.
Public perceptions in Western society regarding healthy body weight differ from those regarding the weight that is considered ideal – and both have changed since the beginning of the 20th century. The weight that is viewed as an ideal has become lower since the 1920s. This is illustrated by the fact that the average height of Miss America pageant winners increased by 2% from 1922 to 1999, while their average weight decreased by 12%. On the other hand, people's views concerning healthy weight have changed in the opposite direction. In Britain the weight at which people considered themselves to be overweight was significantly higher in 2007 than in 1999. These changes are believed to be due to increasing rates of adiposity leading to increased acceptance of extra body fat as being normal.
The first sculptural representations of the human body 20,000–35,000 years ago depict obese females. Some attribute the Venus figurines to the tendency to emphasize fertility while others feel they represent "fatness" in the people of the time. Corpulence is, however, absent in both Greek and Roman art, probably in keeping with their ideals regarding moderation. This continued through much of Christian European history, with only those of low socioeconomic status being depicted as obese.
During the Renaissance some of the upper class began flaunting their large size, as can be seen in portraits of Henry VIII of England and Alessandro del Borro. Rubens (1577–1640) regularly depicted full-bodied women in his pictures, from which derives the term Rubenesque. These women, however, still maintained the "hourglass" shape with its relationship to fertility. During the 19th century, views on obesity changed in the Western world. After centuries of obesity being synonymous with wealth and social status, slimness began to be seen as the desirable standard.
Society and culture
In addition to its health impacts, obesity leads to many problems including disadvantages in employment and increased business costs. These effects are felt by all levels of society from individuals, to corporations, to governments.
In 2005, the medical costs attributable to obesity in the US were an estimated $190.2 billion or 20.6% of all medical expenditures, while the cost of obesity in Canada was estimated at CA$2 billion in 1997 (2.4% of total health costs). The total annual direct cost of overweight and obesity in Australia in 2005 was A$21 billion. Overweight and obese Australians also received A$35.6 billion in government subsidies. The estimate range for annual expenditures on diet products is $40 billion to $100 billion in the US alone.
Obesity prevention programs have been found to reduce the cost of treating obesity-related disease. However, the longer people live, the more medical costs they incur. Researchers therefore conclude that reducing obesity may improve the public's health, but it is unlikely to reduce overall health spending.
Obesity can lead to social stigmatization and disadvantages in employment. When compared to their normal weight counterparts, obese workers on average have higher rates of absenteeism from work and take more disability leave, thus increasing costs for employers and decreasing productivity. A study examining Duke University employees found that people with a BMI over 40 kg/m2 filed twice as many workers' compensation claims as those whose BMI was 18.5–24.9 kg/m2. They also had more than 12 times as many lost work days. The most common injuries in this group were due to falls and lifting, thus affecting the lower extremities, wrists or hands, and backs. The Alabama State Employees' Insurance Board approved a controversial plan to charge obese workers $25 a month for health insurance that would otherwise be free unless they take steps to lose weight and improve their health. These measures started in January 2010 and apply to those state workers whose BMI exceeds 35 kg/m2 and who fail to make improvements in their health after one year.
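For reference, the BMI figures used in studies like the one above follow the standard definition: body mass in kilograms divided by the square of height in metres. A minimal sketch (the thresholds are the ones named in the paragraph; the example weights and heights, and the helper names, are ours for illustration):

```python
# BMI = weight (kg) / height (m)^2, the definition behind the
# thresholds cited above (18.5-24.9 kg/m2 normal range, 40+ kg/m2
# for the high-claims group in the Duke study).

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def duke_comparison_group(value: float) -> str:
    if 18.5 <= value <= 24.9:
        return "normal-weight comparison group"
    if value >= 40:
        return "BMI over 40 group (twice the claims, >12x the lost work days)"
    return "between the two groups compared"

for w, h in [(70, 1.75), (125, 1.75)]:
    v = bmi(w, h)
    print(f"{w} kg at {h} m -> BMI {v:.1f}: {duke_comparison_group(v)}")
# 70 kg at 1.75 m -> BMI 22.9: normal-weight comparison group
# 125 kg at 1.75 m -> BMI 40.8: BMI over 40 group (...)
```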
Some research shows that obese people are less likely to be hired for a job and are less likely to be promoted. Obese people are also paid less than their non-obese counterparts for an equivalent job; obese women on average make 6% less and obese men make 3% less.
Specific industries, such as the airline, healthcare and food industries, have special concerns. Due to rising rates of obesity, airlines face higher fuel costs and pressures to increase seating width. In 2000, the extra weight of obese passengers cost airlines US$275 million. The healthcare industry has had to invest in special facilities for handling severely obese patients, including special lifting equipment and bariatric ambulances. Costs for restaurants are increased by litigation accusing them of causing obesity. In 2005 the US Congress discussed legislation to prevent civil lawsuits against the food industry in relation to obesity; however, it did not become law.
With the American Medical Association's 2013 classification of obesity as a chronic disease, it is thought that health insurance companies will be more likely to pay for obesity treatment, counseling, and surgery, and that research and development of fat treatment pills or gene therapy treatments should become more affordable if insurers help to subsidize their cost. The AMA classification is not legally binding, however, so health insurers still have the right to reject coverage for a treatment or procedure.
In 2014, the European Court of Justice ruled that morbid obesity is a disability. The Court argued that if an employee's obesity prevents "full and effective participation of that person in professional life on an equal basis with other workers", then it shall be considered a disability, and firing someone on such grounds is discriminatory.
The principal goal of the fat acceptance movement is to decrease discrimination against people who are overweight and obese. However, some in the movement are also attempting to challenge the established relationship between obesity and negative health outcomes.
A number of organizations exist that promote the acceptance of obesity. They have increased in prominence in the latter half of the 20th century. The US-based National Association to Advance Fat Acceptance (NAAFA) was formed in 1969 and describes itself as a civil rights organization dedicated to ending size discrimination.
The International Size Acceptance Association (ISAA) is a non-governmental organization (NGO) which was founded in 1997. It has more of a global orientation and describes its mission as promoting size acceptance and helping to end weight-based discrimination. These groups often argue for the recognition of obesity as a disability under the US Americans With Disabilities Act (ADA). The American legal system, however, has decided that the potential public health costs exceed the benefits of extending this anti-discrimination law to cover obesity.
The healthy BMI range varies with the age and sex of the child. Obesity in children and adolescents is defined as a BMI greater than the 95th percentile. The reference data that these percentiles are based on span 1963 to 1994 and thus have not been affected by the recent increases in rates of obesity. Childhood obesity has reached epidemic proportions in the 21st century, with rising rates in both the developed and developing world. Rates of obesity in Canadian boys increased from 11% in the 1980s to over 30% in the 1990s, while during the same period rates increased from 4% to 14% in Brazilian children.
As with obesity in adults, many factors contribute to the rising rates of childhood obesity. Changing diet and decreasing physical activity are believed to be the two most important causes for the recent increase in the incidence of child obesity. Because childhood obesity often persists into adulthood and is associated with numerous chronic illnesses, children who are obese are often tested for hypertension, diabetes, hyperlipidemia, and fatty liver. Treatments used in children are primarily lifestyle interventions and behavioral techniques, although efforts to increase activity in children have had little success. In the United States, medications are not FDA approved for use in this age group.
Obesity in pets is common in many countries. In the United States, 23–41% of dogs are overweight, and about 5.1% are obese; the rate of obesity in cats is slightly higher, at 6.4%. In Australia the rate of obesity among dogs in a veterinary setting has been found to be 7.6%. The risk of obesity in dogs is related to whether or not their owners are obese; however, there is no similar correlation between cats and their owners.
- WHO 2000 p.6
- Haslam DW, James WP (2005). "Obesity". Lancet (Review) 366 (9492): 1197–209. doi:10.1016/S0140-6736(05)67483-1. PMID 16198769.
- WHO 2000 p.9
- Kushner, Robert (2007). Treatment of the Obese Patient (Contemporary Endocrinology). Totowa, NJ: Humana Press. p. 158. ISBN 1-59745-400-1. Retrieved April 5, 2009.
- Adams JP, Murphy PG (July 2000). "Obesity in anaesthesia and intensive care". Br J Anaesth 85 (1): 91–108. doi:10.1093/bja/85.1.91. PMID 10927998.
- NICE 2006 p.10–11
- Imaz I, Martínez-Cervell C, García-Alvarez EE, Sendra-Gutiérrez JM, González-Enríquez J (July 2008). "Safety and effectiveness of the intragastric balloon for obesity. A meta-analysis". Obes Surg 18 (7): 841–6. doi:10.1007/s11695-007-9331-8. PMID 18459025.
- Barness LA, Opitz JM, Gilbert-Barness E (December 2007). "Obesity: genetic, molecular, and environmental aspects". American Journal of Medical Genetics 143A (24): 3016–34. doi:10.1002/ajmg.a.32035. PMID 18000969.
- Woodhouse R (2008). "Obesity in art: A brief overview". Front Horm Res. Frontiers of Hormone Research 36: 271–86. doi:10.1159/000115370. ISBN 978-3-8055-8429-6. PMID 18230908.
- Pollack, Andrew (June 18, 2013). "A.M.A. Recognizes Obesity as a Disease". New York Times. Archived from the original on June 18, 2013.
- Weinstock, Matthew (June 21, 2013). "The Facts About Obesity". H&HN. American Hospital Association. Retrieved June 24, 2013.
- Sweeting HN (2007). "Measurement and Definitions of Obesity In Childhood and Adolescence: A field guide for the uninitiated". Nutr J 6 (1): 32. doi:10.1186/1475-2891-6-32. PMC 2164947. PMID 17963490.
- NHLBI p.xiv
- Gray DS, Fujioka K (1991). "Use of relative weight and Body Mass Index for the determination of adiposity". J Clin Epidemiol 44 (6): 545–50. doi:10.1016/0895-4356(91)90218-X. PMID 2037859.
- "Healthy Weight: Assessing Your Weight: BMI: About BMI for Children and Teens". Center for disease control and prevention. Retrieved April 6, 2009.
- Flegal KM, Ogden CL, Wei R, Kuczmarski RL, Johnson CL (June 2001). "Prevalence of overweight in US children: comparison of US growth charts from the Centers for Disease Control and Prevention with other reference values for body mass index". Am. J. Clin. Nutr. 73 (6): 1086–93. PMID 11382664.
- "BMI classification". World Health Organization. Retrieved 15 February 2014.
- 1 (lb/sq in) is more precisely 703.06957964 (kg/m2).
- Sturm R (July 2007). "Increases in morbid obesity in the USA: 2000–2005". Public Health 121 (7): 492–6. doi:10.1016/j.puhe.2007.01.006. PMC 2864630. PMID 17399752.
- Kanazawa M, Yoshiike N, Osaka T, Numba Y, Zimmet P, Inoue S (December 2002). "Criteria and classification of obesity in Japan and Asia-Oceania". Asia Pac J Clin Nutr. 11 Suppl 8: S732–S737. doi:10.1046/j.1440-6047.11.s8.19.x. PMID 12534701.
- Bei-Fan Z (December 2002). "Predictive values of body mass index and waist circumference for risk factors of certain related diseases in Chinese adults: study on optimal cut-off points of body mass index and waist circumference in Chinese adults". Asia Pac J Clin Nutr. 11 Suppl 8: S685–93. doi:10.1046/j.1440-6047.11.s8.9.x. PMID 12534691.
- Poulain M, Doucet M, Major GC, Drapeau V, Sériès F, Boulet LP, Tremblay A, Maltais F (April 2006). "The effect of obesity on chronic respiratory diseases: pathophysiology and therapeutic strategies". CMAJ 174 (9): 1293–9. doi:10.1503/cmaj.051299. PMC 1435949. PMID 16636330.
- Berrington de Gonzalez A, Hartge P, Cerhan JR, Flint AJ, Hannan L, MacInnis RJ, Moore SC, Tobias GS, Anton-Culver H, Freeman LB, Beeson WL, Clipp SL, English DR, Folsom AR, Freedman DM, Giles G, Hakansson N, Henderson KD, Hoffman-Bolton J, Hoppin JA, Koenig KL, Lee IM, Linet MS, Park Y, Pocobelli G, Schatzkin A, Sesso HD, Weiderpass E, Willcox BJ, Wolk A, Zeleniuch-Jacquotte A, Willett WC, Thun MJ (2010). "Body-mass index and mortality among 1.46 million white adults". The New England Journal of Medicine 363 (23): 2211–9. doi:10.1056/NEJMoa1000367. PMC 3066051. PMID 21121834.
- Mokdad AH, Marks JS, Stroup DF, Gerberding JL (March 2004). "Actual causes of death in the United States, 2000" (PDF). JAMA 291 (10): 1238–45. doi:10.1001/jama.291.10.1238. PMID 15010446.
- Allison DB, Fontaine KR, Manson JE, Stevens J, VanItallie TB (October 1999). "Annual deaths attributable to obesity in the United States". JAMA 282 (16): 1530–8. doi:10.1001/jama.282.16.1530. PMID 10546692.
- Whitlock G, Lewington S, Sherliker P, Clarke R, Emberson J, Halsey J, Qizilbash N, Collins R, Peto R (March 2009). "Body-mass index and cause-specific mortality in 900 000 adults: collaborative analyses of 57 prospective studies". Lancet 373 (9669): 1083–96. doi:10.1016/S0140-6736(09)60318-4. PMC 2662372. PMID 19299006.
- Calle EE, Thun MJ, Petrelli JM, Rodriguez C, Heath CW (October 1999). "Body-mass index and mortality in a prospective cohort of U.S. adults". N. Engl. J. Med. 341 (15): 1097–105. doi:10.1056/NEJM199910073411501. PMID 10511607.
- Pischon T, Boeing H, Hoffmann K, Bergmann M, Schulze MB, Overvad K, van der Schouw YT, Spencer E, Moons KG, Tjønneland A et al. (November 2008). "General and abdominal adiposity and risk of death in Europe". N. Engl. J. Med. 359 (20): 2105–20. doi:10.1056/NEJMoa0801891. PMID 19005195.
- WHO Expert, Consultation (Jan 10, 2004). "Appropriate body-mass index for Asian populations and its implications for policy and intervention strategies.". Lancet 363 (9403): 157–63. doi:10.1016/s0140-6736(03)15268-3. PMID 14726171.
- Manson JE, Willett WC, Stampfer MJ, Colditz GA, Hunter DJ, Hankinson SE, Hennekens CH, Speizer FE (1995). "Body weight and mortality among women". N. Engl. J. Med. 333 (11): 677–85. doi:10.1056/NEJM199509143331101. PMID 7637744.
- Tsigos C, Hainer V, Basdevant A, Finer N, Fried M, Mathus-Vliegen E, Micic D, Maislos M, Roman G, Schutz Y, Toplak H, Zahorska-Markiewicz B (April 2008). "Management of Obesity in Adults: European Clinical Practice Guidelines" (PDF). The European Journal of Obesity 1 (2): 106–16. doi:10.1159/000126822. PMID 20054170.
- Fried M, Hainer V, Basdevant A, Buchwald H, Deitel M, Finer N, Greve JW, Horber F, Mathus-Vliegen E, Scopinaro N, Steffen R, Tsigos C, Weiner R, Widhalm K (April 2007). "Inter-disciplinary European guidelines on surgery of severe obesity". Int J Obes (Lond) 31 (4): 569–77. doi:10.1038/sj.ijo.0803560. PMID 17325689.
- Peeters A, Barendregt JJ, Willekens F, Mackenbach JP, Al Mamun A, Bonneux L (January 2003). "Obesity in adulthood and its consequences for life expectancy: A life-table analysis". Annals of Internal Medicine 138 (1): 24–32. doi:10.7326/0003-4819-138-1-200301070-00008. PMID 12513041.
- Grundy SM (2004). "Obesity, metabolic syndrome, and cardiovascular disease". J. Clin. Endocrinol. Metab. 89 (6): 2595–600. doi:10.1210/jc.2004-0372. PMID 15181029.
- Seidell 2005 p.9
- Bray GA (2004). "Medical consequences of obesity". J. Clin. Endocrinol. Metab. 89 (6): 2583–9. doi:10.1210/jc.2004-0535. PMID 15181027.
- Shoelson SE, Herrero L, Naaz A (May 2007). "Obesity, inflammation, and insulin resistance". Gastroenterology 132 (6): 2169–80. doi:10.1053/j.gastro.2007.03.059. PMID 17498510.
- Shoelson SE, Lee J, Goldfine AB (July 2006). "Inflammation and insulin resistance". J. Clin. Invest. 116 (7): 1793–801. doi:10.1172/JCI29069. PMC 1483173. PMID 16823477.
- Dentali F, Squizzato A, Ageno W (July 2009). "The metabolic syndrome as a risk factor for venous and arterial thrombosis". Semin. Thromb. Hemost. 35 (5): 451–7. doi:10.1055/s-0029-1234140. PMID 19739035.
- Yusuf S, Hawken S, Ounpuu S, Dans T, Avezum A, Lanas F, McQueen M, Budaj A, Pais P, Varigos J, Lisheng L (2004). "Effect of potentially modifiable risk factors associated with myocardial infarction in 52 countries (the INTERHEART study): Case-control study". Lancet 364 (9438): 937–52. doi:10.1016/S0140-6736(04)17018-9. PMID 15364185.
- Darvall KA, Sam RC, Silverman SH, Bradbury AW, Adam DJ (February 2007). "Obesity and thrombosis". Eur J Vasc Endovasc Surg 33 (2): 223–33. doi:10.1016/j.ejvs.2006.10.006. PMID 17185009.
- Yosipovitch G, DeVore A, Dawn A (June 2007). "Obesity and the skin: skin physiology and skin manifestations of obesity". J. Am. Acad. Dermatol. 56 (6): 901–16; quiz 917–20. doi:10.1016/j.jaad.2006.12.004. PMID 17504714.
- Hahler B (June 2006). "An overview of dermatological conditions commonly associated with the obese patient". Ostomy Wound Manage 52 (6): 34–6, 38, 40 passim. PMID 16799182.
- Arendas K, Qiu Q, Gruslin A (2008). "Obesity in pregnancy: pre-conceptional to postpartum consequences". Journal of Obstetrics and Gynaecology Canada : JOGC = Journal D'obstétrique Et Gynécologie Du Canada : JOGC 30 (6): 477–88. PMID 18611299.
- Anand G, Katz PO (2008). "Gastroesophageal reflux disease and obesity". Rev Gastroenterol Disord 8 (4): 233–9. PMID 19107097.
- Harney D, Patijn J (2007). "Meralgia paresthetica: diagnosis and management strategies". Pain Med (Review) 8 (8): 669–77. doi:10.1111/j.1526-4637.2006.00227.x. PMID 18028045.
- Bigal ME, Lipton RB (January 2008). "Obesity and chronic daily headache". Curr Pain Headache Rep (Review) 12 (1): 56–61. doi:10.1007/s11916-008-0011-8. PMID 18417025.
- Sharifi-Mollayousefi A, Yazdchi-Marandi M, Ayramlou H, Heidari P, Salavati A, Zarrintan S, Sharifi-Mollayousefi A (February 2008). "Assessment of body mass index and hand anthropometric measurements as independent risk factors for carpal tunnel syndrome". Folia Morphol. (Warsz) 67 (1): 36–42. PMID 18335412.
- Beydoun MA, Beydoun HA, Wang Y (May 2008). "Obesity and central obesity as risk factors for incident dementia and its subtypes: A systematic review and meta-analysis". Obes Rev (Meta-analysis) 9 (3): 204–18. doi:10.1111/j.1467-789X.2008.00473.x. PMID 18331422.
- Wall M (March 2008). "Idiopathic intracranial hypertension (pseudotumor cerebri)". Curr Neurol Neurosci Rep (Review) 8 (2): 87–93. doi:10.1007/s11910-008-0015-0. PMID 18460275.
- Munger KL, Chitnis T, Ascherio A (2009). "Body size and risk of MS in two cohorts of US women". Neurology (Comparative Study) 73 (19): 1543–50. doi:10.1212/WNL.0b013e3181c0d6e0. PMC 2777074. PMID 19901245.
- Basen-Engquist, Karen; Chang, Maria (16 November 2010). "Obesity and Cancer Risk: Recent Review and Evidence". Current Oncology Reports 13 (1): 71–76. doi:10.1007/s11912-010-0139-7. PMID 21080117.
- Choi HK, Atkinson K, Karlson EW, Curhan G (April 2005). "Obesity, weight change, hypertension, diuretic use, and risk of gout in men: the health professionals follow-up study". Arch. Intern. Med. (Research Support) 165 (7): 742–8. doi:10.1001/archinte.165.7.742. PMID 15824292.
- Tukker A, Visscher TL, Picavet HS (April 2008). "Overweight and health problems of the lower extremities: osteoarthritis, pain and disability". Public Health Nutr (Research Support) 12 (3): 1–10. doi:10.1017/S1368980008002103. PMID 18426630.
- Molenaar EA, Numans ME, van Ameijden EJ, Grobbee DE (November 2008). "[Considerable comorbidity in overweight adults: results from the Utrecht Health Project]". Ned Tijdschr Geneeskd (English abstract) (in Dutch) 152 (45): 2457–63. PMID 19051798.
- Esposito K, Giugliano F, Di Palo C, Giugliano G, Marfella R, D'Andrea F, D'Armiento M, Giugliano D (2004). "Effect of lifestyle changes on erectile dysfunction in obese men: A randomized controlled trial". JAMA (Randomized Controlled Trial) 291 (24): 2978–84. doi:10.1001/jama.291.24.2978. PMID 15213209.
- Hunskaar S (2008). "A systematic review of overweight and obesity as risk factors and targets for clinical intervention for urinary incontinence in women". Neurourol. Urodyn. (Review) 27 (8): 749–57. doi:10.1002/nau.20635. PMID 18951445.
- Ejerblad E, Fored CM, Lindblad P, Fryzek J, McLaughlin JK, Nyrén O (2006). "Obesity and risk for chronic renal failure". J. Am. Soc. Nephrol. (Research Support) 17 (6): 1695–702. doi:10.1681/ASN.2005060638. PMID 16641153.
- Makhsida N, Shah J, Yan G, Fisch H, Shabsigh R (September 2005). "Hypogonadism and metabolic syndrome: Implications for testosterone therapy". J. Urol. (Review) 174 (3): 827–34. doi:10.1097/01.ju.0000169490.78443.59. PMID 16093964.
- Pestana IA, Greenfield JM, Walsh M, Donatucci CF, Erdmann D (October 2009). "Management of "buried" penis in adulthood: an overview". Plast. Reconstr. Surg. (Review) 124 (4): 1186–95. doi:10.1097/PRS.0b013e3181b5a37f. PMID 19935302.
- Schmidt DS, Salahudeen AK (2007). "Obesity-survival paradox-still a controversy?". Semin Dial (Review) 20 (6): 486–92. doi:10.1111/j.1525-139X.2007.00349.x. PMID 17991192.
- U.S. Preventive Services Task Force (June 2003). "Behavioral counseling in primary care to promote a healthy diet: recommendations and rationale". Am Fam Physician (Review) 67 (12): 2573–6. PMID 12825847.
- Habbu A, Lakkis NM, Dokainish H (October 2006). "The obesity paradox: Fact or fiction?". Am. J. Cardiol. (Review) 98 (7): 944–8. doi:10.1016/j.amjcard.2006.04.039. PMID 16996880.
- Romero-Corral A, Montori VM, Somers VK, Korinek J, Thomas RJ, Allison TG, Mookadam F, Lopez-Jimenez F (2006). "Association of bodyweight with total mortality and with cardiovascular events in coronary artery disease: A systematic review of cohort studies". Lancet (Review) 368 (9536): 666–78. doi:10.1016/S0140-6736(06)69251-9. PMID 16920472.
- Oreopoulos A, Padwal R, Kalantar-Zadeh K, Fonarow GC, Norris CM, McAlister FA (July 2008). "Body mass index and mortality in heart failure: A meta-analysis". Am. Heart J. (Meta-analysis, Review) 156 (1): 13–22. doi:10.1016/j.ahj.2008.02.014. PMID 18585492.
- Oreopoulos A, Padwal R, Norris CM, Mullen JC, Pretorius V, Kalantar-Zadeh K (February 2008). "Effect of obesity on short- and long-term mortality postcoronary revascularization: A meta-analysis". Obesity (Silver Spring) (Meta-analysis) 16 (2): 442–50. doi:10.1038/oby.2007.36. PMID 18239657.
- Diercks DB, Roe MT, Mulgund J, Pollack CV, Kirk JD, Gibler WB, Ohman EM, Smith SC, Boden WE, Peterson ED (July 2006). "The obesity paradox in non-ST-segment elevation acute coronary syndromes: Results from the Can Rapid risk stratification of Unstable angina patients Suppress ADverse outcomes with Early implementation of the American College of Cardiology/American Heart Association Guidelines Quality Improvement Initiative". Am Heart J (Research Support) 152 (1): 140–8. doi:10.1016/j.ahj.2005.09.024. PMID 16824844.
- Lau DC, Douketis JD, Morrison KM, Hramiak IM, Sharma AM, Ur E (April 2007). "2006 Canadian clinical practice guidelines on the management and prevention of obesity in adults and children summary". CMAJ (Practice Guideline, Review) 176 (8): S1–13. doi:10.1503/cmaj.061409. PMC 1839777. PMID 17420481.
- Bleich S, Cutler D, Murray C, Adams A (2008). "Why is the developed world obese?". Annu Rev Public Health (Research Support) 29: 273–95. doi:10.1146/annurev.publhealth.29.020907.090954. PMID 18173389.
- Drewnowski A, Specter SE (January 2004). "Poverty and obesity: the role of energy density and energy costs". Am. J. Clin. Nutr. (Review) 79 (1): 6–16. PMID 14684391.
- Nestle M, Jacobson MF (2000). "Halting the obesity epidemic: a public health policy approach". Public Health Rep (Research Support) 115 (1): 12–24. doi:10.1093/phr/115.1.12. PMC 1308552. PMID 10968581.
- James WP (March 2008). "The fundamental drivers of the obesity epidemic". Obes Rev (Review) 9 (Suppl 1): 6–13. doi:10.1111/j.1467-789X.2007.00432.x. PMID 18307693.
- Keith SW, Redden DT, Katzmarzyk PT, Boggiano MM, Hanlon EC, Benca RM, Ruden D, Pietrobelli A, Barger JL, Fontaine KR, Wang C, Aronne LJ, Wright SM, Baskin M, Dhurandhar NV, Lijoi MC, Grilo CM, DeLuca M, Westfall AO, Allison DB (2006). "Putative contributors to the secular increase in obesity: Exploring the roads less traveled". Int J Obes (Lond) (Review) 30 (11): 1585–94. doi:10.1038/sj.ijo.0803326. PMID 16801930.
- "EarthTrends: Nutrition: Calorie supply per capita". World Resources Institute. Archived from the original on 2011-06-11. Retrieved Oct 18, 2009.
- "USDA: frsept99b". United States Department of Agriculture. Retrieved January 10, 2009.
- "Diet composition and obesity among Canadian adults". Statistics Canada.
- National Center for Health Statistics. "Nutrition For Everyone". Centers for Disease Control and Prevention. Retrieved 2008-07-09.
- Marantz PR, Bird ED, Alderman MH (March 2008). "A call for higher standards of evidence for dietary guidelines". Am J Prev Med 34 (3): 234–40. doi:10.1016/j.amepre.2007.11.017. PMID 18312812.
- Flegal KM, Carroll MD, Ogden CL, Johnson CL (October 2002). "Prevalence and trends in obesity among US adults, 1999–2000". JAMA 288 (14): 1723–1727. doi:10.1001/jama.288.14.1723. PMID 12365955.
- Wright JD, Kennedy-Stephenson J, Wang CY, McDowell MA, Johnson CL (February 2004). "Trends in intake of energy and macronutrients—United States, 1971–2000". MMWR Morb Mortal Wkly Rep 53 (4): 80–2. PMID 14762332.
- Caballero B (2007). "The global epidemic of obesity: An overview". Epidemiol Rev 29: 1–5. doi:10.1093/epirev/mxm012. PMID 17569676.
- Mozaffarian D, Hao T, Rimm EB, Willett WC, Hu FB (23 June 2011). "Changes in Diet and Lifestyle and Long-Term Weight Gain in Women and Men". The New England Journal of Medicine (Meta-analysis) 364 (25): 2392–404. doi:10.1056/NEJMoa1014296. PMC 3151731. PMID 21696306.
- Malik VS, Schulze MB, Hu FB (August 2006). "Intake of sugar-sweetened beverages and weight gain: a systematic review". Am. J. Clin. Nutr. (Review) 84 (2): 274–88. PMC 3210834. PMID 16895873.
- Olsen NJ, Heitmann BL (January 2009). "Intake of calorically sweetened beverages and obesity". Obes Rev (Review) 10 (1): 68–75. doi:10.1111/j.1467-789X.2008.00523.x. PMID 18764885.
- Malik VS, Popkin BM, Bray GA, Després JP, Willett WC, Hu FB (November 2010). "Sugar-sweetened beverages and risk of metabolic syndrome and type 2 diabetes: a meta-analysis". Diabetes Care (Meta-analysis, Review) 33 (11): 2477–83. doi:10.2337/dc10-1079. PMC 2963518. PMID 20693348.
- Rosenheck R (November 2008). "Fast food consumption and increased caloric intake: a systematic review of a trajectory towards weight gain and obesity risk". Obes Rev (Review) 9 (6): 535–47. doi:10.1111/j.1467-789X.2008.00477.x. PMID 18346099.
- Lin BH, Guthrie J and Frazao E (1999). "Nutrient contribution of food away from home". In Frazão E. Agriculture Information Bulletin No. 750: America's Eating Habits: Changes and Consequences. Washington, DC: US Department of Agriculture, Economic Research Service. pp. 213–239.
- Pollan, Michael (22 April 2007). "You Are What You Grow". New York Times. Retrieved 2007-07-30.
- Kopelman and Caterson 2005:324.
- Schieszer, John. "Metabolism alone doesn't explain how thin people stay thin". The Medical Post.
- Seidell 2005 p.10
- "WHO: Obesity and overweight". World Health Organization. Archived from the original on December 18, 2008. Retrieved January 10, 2009.
- "WHO | Physical Inactivity: A Global Public Health Problem". World Health Organization. Retrieved February 22, 2009.
- Ness-Abramof R, Apovian CM (February 2006). "Diet modification for treatment and prevention of obesity". Endocrine (Review) 29 (1): 5–9. doi:10.1385/ENDO:29:1:135. PMID 16622287.
- Salmon J, Timperio A (2007). "Prevalence, trends and environmental influences on child and youth physical activity". Med Sport Sci (Review). Medicine and Sport Science 50: 183–99. doi:10.1159/000101391. ISBN 978-3-318-01396-2. PMID 17387258.
- Borodulin K, Laatikainen T, Juolevi A, Jousilahti P (June 2008). "Thirty-year trends of physical activity in relation to age, calendar time and birth cohort in Finnish adults". Eur J Public Health (Research Support) 18 (3): 339–44. doi:10.1093/eurpub/ckm092. PMID 17875578.
- Brownson RC, Boehmer TK, Luke DA (2005). "Declining rates of physical activity in the United States: what are the contributors?". Annu Rev Public Health (Review) 26: 421–43. doi:10.1146/annurev.publhealth.26.021304.144437. PMID 15760296.
- Gortmaker SL, Must A, Sobol AM, Peterson K, Colditz GA, Dietz WH (April 1996). "Television viewing as a cause of increasing obesity among children in the United States, 1986–1990". Arch Pediatr Adolesc Med (Review) 150 (4): 356–62. doi:10.1001/archpedi.1996.02170290022003. PMID 8634729.
- Vioque J, Torres A, Quiles J (December 2000). "Time spent watching television, sleep duration and obesity in adults living in Valencia, Spain". Int. J. Obes. Relat. Metab. Disord. (Research Support) 24 (12): 1683–8. doi:10.1038/sj.ijo.0801434. PMID 11126224.
- Tucker LA, Bagwell M (July 1991). "Television viewing and obesity in adult females" (PDF). Am J Public Health 81 (7): 908–11. doi:10.2105/AJPH.81.7.908. PMC 1405200. PMID 2053671.
- "Media + Child and Adolescent Health: A Systematic Review" (PDF). Ezekiel J. Emanuel. Common Sense Media. 2008. Retrieved April 6, 2009.
- Mary Jones. "Case Study: Cataplexy and SOREMPs Without Excessive Daytime Sleepiness in Prader Willi Syndrome. Is This the Beginning of Narcolepsy in a Five Year Old?". European Society of Sleep Technologists. Retrieved April 6, 2009.
- Poirier P, Giles TD, Bray GA, Hong Y, Stern JS, Pi-Sunyer FX, Eckel RH (May 2006). "Obesity and cardiovascular disease: pathophysiology, evaluation, and effect of weight loss". Arterioscler. Thromb. Vasc. Biol. (Review) 26 (5): 968–76. doi:10.1161/01.ATV.0000216787.85457.f3. PMID 16627822.
- Loos RJ, Bouchard C (May 2008). "FTO: the first gene contributing to common forms of human obesity". Obes Rev (Review) 9 (3): 246–50. doi:10.1111/j.1467-789X.2008.00481.x. PMID 18373508.
- Yang W, Kelly T, He J (2007). "Genetic epidemiology of obesity". Epidemiol Rev (Review) 29: 49–61. doi:10.1093/epirev/mxm004. PMID 17566051.
- Walley AJ, Asher JE, Froguel P (June 2009). "The genetic contribution to non-syndromic human obesity". Nature Reviews Genetics (Review) 10 (7): 431–42. doi:10.1038/nrg2594. PMID 19506576.
- Farooqi S, O'Rahilly S (December 2006). "Genetics of obesity in humans". Endocr. Rev. (Review) 27 (7): 710–18. doi:10.1210/er.2006-0040. PMID 17122358.
- Kolata, Gina (2007). Rethinking thin: The new science of weight loss – and the myths and realities of dieting. Picador. p. 122. ISBN 0-312-42785-9.
- Walley, Andrew J.; Asher, Julian E.; Froguel, Philippe (July 2009). "The genetic contribution to non-syndromic human obesity". Nat Rev Genet. (Review) 10 (7): 431–42. doi:10.1038/nrg2594. PMID 19506576. "However, it is also clear that genetics greatly influences this situation, giving individuals in the same 'obesogenic' environment significantly different risks of becoming obese."
- Chakravarthy MV, Booth FW (2004). "Eating, exercise, and "thrifty" genotypes: Connecting the dots toward an evolutionary understanding of modern chronic diseases". J. Appl. Physiol. (Review) 96 (1): 3–10. doi:10.1152/japplphysiol.00757.2003. PMID 14660491.
- Wells JC (2009). "Thrift: A guide to thrifty genes, thrifty phenotypes and thrifty norms". International Journal of Obesity (Review) 33 (12): 1331–1338. doi:10.1038/ijo.2009.175. PMID 19752875.
- Wells JC (2011). "The thrifty phenotype: An adaptation in growth or metabolism?". American Journal of Human Biology (Review) 23 (1): 65–75. doi:10.1002/ajhb.21100. PMID 21082685.
- Rosén T, Bosaeus I, Tölli J, Lindstedt G, Bengtsson BA (1993). "Increased body fat mass and decreased extracellular fluid volume in adults with growth hormone deficiency". Clin. Endocrinol. (Oxf) 38 (1): 63–71. doi:10.1111/j.1365-2265.1993.tb00974.x. PMID 8435887.
- Zametkin AJ, Zoon CK, Klein HW, Munson S (February 2004). "Psychiatric aspects of child and adolescent obesity: a review of the past 10 years". J Am Acad Child Adolesc Psychiatry (Review) 43 (2): 134–50. doi:10.1097/00004583-200402000-00008. PMID 14726719.
- Chiles C, van Wattum PJ (2010). "Psychiatric aspects of the obesity crisis". Psychiatr Times 27 (4): 47–51.
- Yach D, Stuckler D, Brownell KD (January 2006). "Epidemiologic and economic consequences of the global epidemics of obesity and diabetes". Nat. Med. 12 (1): 62–6. doi:10.1038/nm0106-62. PMID 16397571.
- Sobal J, Stunkard AJ (March 1989). "Socioeconomic status and obesity: A review of the literature". Psychol Bull (Review) 105 (2): 260–75. doi:10.1037/0033-2909.105.2.260. PMID 2648443.
- McLaren L (2007). "Socioeconomic status and obesity". Epidemiol Rev (Review) 29: 29–48. doi:10.1093/epirev/mxm001. PMID 17478442.
- Wilkinson, Richard; Pickett, Kate (2009). The Spirit Level: Why More Equal Societies Almost Always Do Better. London: Allen Lane. pp. 91–101. ISBN 978-1-84614-039-6.
- Christakis NA, Fowler JH (2007). "The Spread of Obesity in a Large Social Network over 32 Years". New England Journal of Medicine (Research Support) 357 (4): 370–379. doi:10.1056/NEJMsa066082. PMID 17652652.
- Björntorp P (2001). "Do stress reactions cause abdominal obesity and comorbidities?". Obesity Reviews 2 (2): 73–86. doi:10.1046/j.1467-789x.2001.00027.x. PMID 12119665.
- Goodman E, Adler NE, Daniels SR, Morrison JA, Slap GB, Dolan LM (2003). "Impact of objective and subjective social status on obesity in a biracial cohort of adolescents". Obesity Reviews (Research Support) 11 (8): 1018–26. doi:10.1038/oby.2003.140. PMID 12917508.
- Flegal KM, Troiano RP, Pamuk ER, Kuczmarski RJ, Campbell SM (November 1995). "The influence of smoking cessation on the prevalence of overweight in the United States". N. Engl. J. Med. 333 (18): 1165–70. doi:10.1056/NEJM199511023331801. PMID 7565970.
- Chiolero A, Faeh D, Paccaud F, Cornuz J (1 April 2008). "Consequences of smoking for body weight, body fat distribution, and insulin resistance". Am. J. Clin. Nutr. (Review) 87 (4): 801–9. PMID 18400700.
- Weng HH, Bastian LA, Taylor DH, Moser BK, Ostbye T (2004). "Number of children associated with obesity in middle-aged women and men: results from the health and retirement study". J Women's Health (Larchmt) (Comparative Study) 13 (1): 85–91. doi:10.1089/154099904322836492. PMID 15006281.
- Bellows-Riecken KH, Rhodes RE (February 2008). "A birth of inactivity? A review of physical activity and parenthood". Prev Med (Review) 46 (2): 99–110. doi:10.1016/j.ypmed.2007.08.003. PMID 17919713.
- "Obesity and Overweight" (PDF). World Health Organization. Retrieved February 22, 2009.
- Caballero B (March 2001). "Introduction. Symposium: Obesity in developing countries: biological and ecological factors". J. Nutr. (Review) 131 (3): 866S–870S. PMID 11238776.
Fourteenth Amendment to the United States Constitution
The Fourteenth Amendment (Amendment XIV) to the United States Constitution was adopted on July 9, 1868, as one of the Reconstruction Amendments. Arguably one of the most consequential amendments to this day, the amendment addresses citizenship rights and equal protection under the law and was proposed in response to issues related to former slaves following the American Civil War. The amendment was bitterly contested, particularly by the states of the defeated Confederacy, which were forced to ratify it in order to regain representation in Congress. The amendment, particularly its first section, is one of the most litigated parts of the Constitution, forming the basis for landmark decisions such as Brown v. Board of Education (1954) regarding racial segregation, Roe v. Wade (1973) regarding abortion, Bush v. Gore (2000) regarding the 2000 presidential election, and Obergefell v. Hodges (2015) regarding same-sex marriage. The amendment limits the actions of all state and local officials, and also those acting on behalf of such officials.
The amendment's first section includes several clauses: the Citizenship Clause, Privileges or Immunities Clause, Due Process Clause, and Equal Protection Clause. The Citizenship Clause provides a broad definition of citizenship, nullifying the Supreme Court's decision in Dred Scott v. Sandford (1857), which had held that Americans descended from African slaves could not be citizens of the United States. Since the Slaughter-House Cases (1873), the Privileges or Immunities Clause has been interpreted to do very little.
The Due Process Clause prohibits state and local governments from depriving persons of life, liberty, or property without a fair procedure. The Supreme Court has ruled that this clause makes most of the Bill of Rights as applicable to the states as it is to the federal government, and that it imposes substantive and procedural requirements that state laws must satisfy. The Equal Protection Clause requires each state to provide equal protection under the law to all people, including all non-citizens, within its jurisdiction. This clause has been the basis for many decisions rejecting irrational or unnecessary discrimination against people belonging to various groups.
The second, third, and fourth sections of the amendment are seldom litigated. However, the second section's reference to "rebellion, or other crime" has been invoked as a constitutional ground for felony disenfranchisement. The fourth section was held, in Perry v. United States (1935), to prohibit a current Congress from abrogating a contract of debt incurred by a prior Congress. The fifth section gives Congress the power to enforce the amendment's provisions by "appropriate legislation"; however, under City of Boerne v. Flores (1997), this power may not be used to contradict a Supreme Court decision interpreting the amendment.
Section 1. All persons born or naturalized in the United States, and subject to the jurisdiction thereof, are citizens of the United States and of the State wherein they reside. No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws.
Section 2. Representatives shall be apportioned among the several States according to their respective numbers, counting the whole number of persons in each State, excluding Indians not taxed. But when the right to vote at any election for the choice of electors for President and Vice President of the United States, Representatives in Congress, the Executive and Judicial officers of a State, or the members of the Legislature thereof, is denied to any of the male inhabitants of such State, being twenty-one years of age, and citizens of the United States, or in any way abridged, except for participation in rebellion, or other crime, the basis of representation therein shall be reduced in the proportion which the number of such male citizens shall bear to the whole number of male citizens twenty-one years of age in such State.
Section 3. No person shall be a Senator or Representative in Congress, or elector of President and Vice President, or hold any office, civil or military, under the United States, or under any State, who, having previously taken an oath, as a member of Congress, or as an officer of the United States, or as a member of any State legislature, or as an executive or judicial officer of any State, to support the Constitution of the United States, shall have engaged in insurrection or rebellion against the same, or given aid or comfort to the enemies thereof. But Congress may, by a vote of two-thirds of each House, remove such disability.
Section 4. The validity of the public debt of the United States, authorized by law, including debts incurred for payment of pensions and bounties for services in suppressing insurrection or rebellion, shall not be questioned. But neither the United States nor any State shall assume or pay any debt or obligation incurred in aid of insurrection or rebellion against the United States, or any claim for the loss or emancipation of any slave; but all such debts, obligations and claims shall be held illegal and void.
Section 5. The Congress shall have power to enforce, by appropriate legislation, the provisions of this article.
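Section 2's penalty can be restated as a simple proportion. As an illustration with hypothetical round numbers (not drawn from any actual census or apportionment): if a state's apportionment basis is $P$, its male citizens aged twenty-one or over number $M$, and $D$ of them are denied the vote for reasons other than rebellion or crime, the reduced basis is

$$P\left(1 - \frac{D}{M}\right), \qquad \text{e.g.}\quad 1{,}000{,}000 \times \left(1 - \frac{50{,}000}{200{,}000}\right) = 750{,}000.$$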
Proposal by Congress
In the final years of the American Civil War and the Reconstruction Era that followed, Congress repeatedly debated the rights of black former slaves freed by the 1863 Emancipation Proclamation and the 1865 Thirteenth Amendment, the latter of which had formally abolished slavery. Following the passage of the Thirteenth Amendment by Congress, however, Republicans grew concerned over the increase it would create in the congressional representation of the Democratic-dominated Southern States. Because the full population of freed slaves would now be counted for determining congressional representation, rather than the three-fifths previously mandated by the Three-Fifths Compromise, the Southern States would dramatically increase their power in the population-based House of Representatives, regardless of whether the former slaves were allowed to vote. Republicans began looking for a way to offset this advantage, either by protecting and attracting votes of former slaves, or at least by discouraging their disenfranchisement.
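To make the arithmetic behind this concern concrete, consider a hypothetical state (the figures are illustrative, not census data) with 400,000 free inhabitants and 300,000 formerly enslaved people. Under the Three-Fifths Compromise its apportionment population was

$$400{,}000 + \tfrac{3}{5} \times 300{,}000 = 580{,}000,$$

whereas after emancipation the full $700{,}000$ would be counted, a gain of $120{,}000$ in the basis for House seats even if no freed person were permitted to vote.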
In 1865, Congress passed what would become the Civil Rights Act of 1866, guaranteeing citizenship without regard to race, color, or previous condition of slavery or involuntary servitude. The bill also guaranteed equal benefits and access to the law, a direct assault on the Black Codes passed by many post-war states. The Black Codes attempted to return ex-slaves to something like their former condition by, among other things, restricting their movement, forcing them to enter into year-long labor contracts, prohibiting them from owning firearms, and preventing them from suing or testifying in court.
Although strongly urged by moderates in Congress to sign the bill, President Andrew Johnson vetoed it on March 27, 1866. In his veto message, he objected to the measure because it conferred citizenship on the freedmen at a time when 11 out of 36 states were unrepresented in the Congress, and that it discriminated in favor of African-Americans and against whites. Three weeks later, Johnson's veto was overridden and the measure became law. Despite this victory, even some Republicans who had supported the goals of the Civil Rights Act began to doubt that Congress really possessed constitutional power to turn those goals into laws. The experience also encouraged both radical and moderate Republicans to seek Constitutional guarantees for black rights, rather than relying on temporary political majorities.
Over 70 proposals for an amendment were drafted. In late 1865, the Joint Committee on Reconstruction proposed an amendment stating that any citizens barred from voting on the basis of race by a state would not be counted for purposes of representation of that state. This amendment passed the House, but was blocked in the Senate by a coalition of Radical Republicans led by Charles Sumner, who considered the proposal a "compromise with wrong", and Democrats opposed to black rights. Consideration then turned to a proposed amendment by Representative John A. Bingham of Ohio, which would enable Congress to safeguard "equal protection of life, liberty, and property" of all citizens; this proposal failed to pass the House. In April 1866, the Joint Committee forwarded a third proposal to Congress, a carefully negotiated compromise that combined elements of the first and second proposals as well as addressing the issues of Confederate debt and voting by ex-Confederates. The House of Representatives passed House Resolution 127 of the 39th Congress several weeks later and sent it to the Senate for action. The resolution was debated, and several amendments to it were proposed. Amendments to Sections 2, 3, and 4 were adopted on June 8, 1866, and the modified resolution passed the Senate by a 33-to-11 vote (5 absent, not voting). The House agreed to the Senate amendments on June 13 by a 138–36 vote (10 not voting). A concurrent resolution requesting the President to transmit the proposal to the executives of the several states was passed by both houses of Congress on June 18.
The Radical Republicans were satisfied that they had secured civil rights for blacks, but were disappointed that the amendment would not also secure political rights for blacks; in particular, the right to vote. For example, Thaddeus Stevens, a leader of the disappointed Radical Republicans, said: "I find that we shall be obliged to be content with patching up the worst portions of the ancient edifice, and leaving it, in many of its parts, to be swept through by the tempests, the frosts, and the storms of despotism." Abolitionist Wendell Phillips called it a "fatal and total surrender". This point would later be addressed by the Fifteenth Amendment.
Ratification by the states
On June 16, 1866, Secretary of State William Seward transmitted the Fourteenth Amendment to the governors of the several states for its ratification. State legislatures in every formerly Confederate state, with the exception of Tennessee, refused to ratify it. This refusal led to the passage of the Reconstruction Acts, under which the existing state governments were set aside and military government was imposed until new civil governments were established and the Fourteenth Amendment was ratified. It also prompted Congress to pass a law on March 2, 1867, requiring that a former Confederate state must ratify the Fourteenth Amendment before "said State shall be declared entitled to representation in Congress".
The first twenty-eight states to ratify the Fourteenth Amendment were:
- Connecticut – June 30, 1866
- New Hampshire – July 6, 1866
- Tennessee – July 18, 1866
- New Jersey – September 11, 1866 (rescinded ratification – February 20, 1868/March 24, 1868; re-ratified – April 23, 2003)
- Oregon – September 19, 1866 (rescinded ratification – October 16, 1868; re-ratified – April 25, 1973)
- Vermont – October 30, 1866
- New York – January 10, 1867
- Ohio – January 11, 1867 (rescinded ratification – January 13, 1868; re-ratified – March 12, 2003)
- Illinois – January 15, 1867
- West Virginia – January 16, 1867
- Michigan – January 16, 1867
- Minnesota – January 16, 1867
- Kansas – January 17, 1867
- Maine – January 19, 1867
- Nevada – January 22, 1867
- Indiana – January 23, 1867
- Missouri – January 25, 1867
- Pennsylvania – February 6, 1867
- Rhode Island – February 7, 1867
- Wisconsin – February 13, 1867
- Massachusetts – March 20, 1867
- Nebraska – June 15, 1867
- Iowa – March 16, 1868
- Arkansas – April 6, 1868
- Florida – June 9, 1868
- North Carolina – July 4, 1868 (after rejection – December 14, 1866)
- Louisiana – July 9, 1868 (after rejection – February 6, 1867)
- South Carolina – July 9, 1868 (after rejection – December 20, 1866)
If the rescissions by Ohio and New Jersey were invalid, South Carolina would have been the 28th state to ratify, enough to make the amendment part of the Constitution (three-fourths of the 37 states then in the Union). Oregon's rescission did not occur until later. These rescissions caused significant controversy; however, ratification by other states continued during the course of the debate:
- Alabama – July 13, 1868
On July 20, 1868, Secretary of State William H. Seward certified that if the withdrawals of ratification by New Jersey and Ohio were ineffective, then the amendment had become part of the Constitution on July 9, 1868, with ratification by South Carolina. The following day, Congress adopted and transmitted to the Department of State a concurrent resolution declaring the Fourteenth Amendment to be a part of the Constitution and directing the Secretary of State to promulgate it as such. Both New Jersey and Ohio were named in the congressional resolution as having ratified the amendment; Alabama was also named, making 29 states in total.
On the same day, one more State ratified:
- Georgia – July 21, 1868 (after rejection – November 9, 1866)
On July 27, Secretary Seward received the formal ratification from Georgia. The following day, July 28, Secretary Seward issued his official proclamation certifying the adoption of the Fourteenth Amendment. Secretary Seward stated that his proclamation was "in conformance" to the resolution by Congress, but his official list of States included both Alabama and Georgia, as well as Ohio and New Jersey.
The inclusion of Ohio and New Jersey has led some to argue that rescission of a ratification is invalid; the inclusion of Alabama and Georgia, whose ratifications would have been needed only if the rescissions were valid, has called that conclusion into question. While there have been Supreme Court cases dealing with ratification issues, this particular question has never been adjudicated.
The Fourteenth Amendment was subsequently ratified:
- Virginia – October 8, 1869 (after rejection – January 9, 1867)
- Mississippi – January 17, 1870
- Texas – February 18, 1870 (after rejection – October 27, 1866)
- Delaware – February 12, 1901 (after rejection – February 8, 1867)
- Maryland – April 4, 1959 (after rejection – March 23, 1867)
- California – May 6, 1959
- Kentucky – March 30, 1976 (after rejection – January 8, 1867)
Since Ohio and New Jersey re-ratified the Fourteenth Amendment in 2003, all U.S. states that existed during Reconstruction have ratified the amendment.
Citizenship and civil rights
Section 1 of the amendment formally defines United States citizenship and also protects various civil rights from being abridged or denied by any state or state actor. Abridgment or denial of those civil rights by private persons is not addressed by this amendment; the Supreme Court held in the Civil Rights Cases (1883) that the amendment was limited to "state action" and, therefore, did not authorize the Congress to outlaw racial discrimination by private individuals or organizations (though Congress can sometimes reach such discrimination via other parts of the Constitution). U.S. Supreme Court Justice Joseph P. Bradley commented in the Civil Rights Cases that "individual invasion of individual rights is not the subject-matter of the [Fourteenth] Amendment. It has a deeper and broader scope. It nullifies and makes void all state legislation, and state action of every kind, which impairs the privileges and immunities of citizens of the United States, or which injures them in life, liberty or property without due process of law, or which denies to any of them the equal protection of the laws."
The Radical Republicans who advanced the Thirteenth Amendment hoped to ensure broad civil and human rights for the newly freed people—but its scope was disputed before it even went into effect. The framers of the Fourteenth Amendment wanted these principles enshrined in the Constitution to protect the new Civil Rights Act from being declared unconstitutional by the Supreme Court and also to prevent a future Congress from altering it by a mere majority vote. This section was also in response to violence against black people within the Southern States. The Joint Committee on Reconstruction found that only a Constitutional amendment could protect black people's rights and welfare within those states.
The Citizenship Clause overruled the Supreme Court's Dred Scott decision that black people were not citizens and could not become citizens, nor enjoy the benefits of citizenship. Some members of Congress voted for the Fourteenth Amendment in order to eliminate doubts about the constitutionality of the Civil Rights Act of 1866, or to ensure that no subsequent Congress could later repeal or alter the main provisions of that Act. The Civil Rights Act of 1866 had granted citizenship to all people born in the United States if they were not subject to a foreign power, and this clause of the Fourteenth Amendment constitutionalized this rule. According to Garrett Epps, Professor of constitutional law at the University of Baltimore, the Citizenship Clause doesn't cover one group: "Only one group is not “subject to the jurisdiction”—accredited foreign diplomats and their families, who can be expelled by the federal government but not arrested or tried."
There are varying interpretations of the original intent of Congress and of the ratifying states, based on statements made during the congressional debate over the amendment, as well as the customs and understandings prevalent at that time. Some of the major issues that have arisen about this clause are the extent to which it included Native Americans, its coverage of non-citizens legally present in the United States when they have a child, whether the clause allows revocation of citizenship, and whether the clause applies to illegal immigrants.
Historian Eric Foner has emphasized the link between birthright citizenship and Reconstruction's commitment to equality:

Many things claimed as uniquely American—a devotion to individual freedom, for example, or social opportunity—exist in other countries. But birthright citizenship does make the United States (along with Canada) unique in the developed world. [...] Birthright citizenship is one expression of the commitment to equality and the expansion of national consciousness that marked Reconstruction. [...] Birthright citizenship is one legacy of the titanic struggle of the Reconstruction era to create a genuine democracy grounded in the principle of equality.
Garrett Epps, professor of constitutional law at the University of Baltimore, also stresses, like Eric Foner, the equality aspect of the Fourteenth Amendment:
Its centerpiece is the idea that citizenship in the United States is universal—that we are one nation, with one class of citizens, and that citizenship extends to everyone born here. Citizens have rights that neither the federal government nor any state can revoke at will; even undocumented immigrants—“persons,” in the language of the amendment—have rights to due process and equal protection of the law.
During the original congressional debate over the amendment, Senator Jacob M. Howard of Michigan—the author of the Citizenship Clause—described the clause as having the same content, despite different wording, as the earlier Civil Rights Act of 1866, namely, that it excludes Native Americans who maintain their tribal ties and "persons born in the United States who are foreigners, aliens, who belong to the families of ambassadors or foreign ministers". According to historian Glenn W. LaFantasie of Western Kentucky University, "A good number of his fellow senators supported his view of the citizenship clause." Others also agreed that the children of ambassadors and foreign ministers were to be excluded.
Senator James Rood Doolittle of Wisconsin asserted that all Native Americans were subject to United States jurisdiction, so that the phrase "Indians not taxed" would be preferable, but Senate Judiciary Committee Chairman Lyman Trumbull and Howard disputed this, arguing that the federal government did not have full jurisdiction over Native American tribes, which govern themselves and make treaties with the United States. In Elk v. Wilkins (1884), the clause's meaning was tested regarding whether birth in the United States automatically extended national citizenship. The Supreme Court held that Native Americans who voluntarily quit their tribes did not automatically gain national citizenship. The issue was resolved with the passage of the Indian Citizenship Act of 1924, which granted full U.S. citizenship to indigenous peoples.
Children born to foreign nationals
The Fourteenth Amendment provides that children born in the United States and subject to its jurisdiction become American citizens at birth. At the time of the amendment's passage, President Andrew Johnson and three senators, including Trumbull, the author of the Civil Rights Act, asserted that both the Civil Rights Act and the Fourteenth Amendment would confer citizenship to children born to foreign nationals in the United States. Senator Edgar Cowan of Pennsylvania had a decidedly different opinion. Some scholars dispute whether the Citizenship Clause should apply to the children of unauthorized immigrants today, as "the problem ... did not exist at the time". In the 21st century, Congress has occasionally discussed passing a statute or a constitutional amendment to reduce the practice of "birth tourism", in which a foreign national gives birth in the United States to gain the child's citizenship.
The clause's meaning with regard to a child of immigrants was tested in United States v. Wong Kim Ark (1898). The Supreme Court held that under the Fourteenth Amendment, a man born within the United States to Chinese citizens who have a permanent domicile and residence in the United States and are carrying out business in the United States—and whose parents were not employed in a diplomatic or other official capacity by a foreign power—was a citizen of the United States. Subsequent decisions have applied the principle to the children of foreign nationals of non-Chinese descent.
According to the Foreign Affairs Manual, which is published by the State Department, "Despite widespread popular belief, U.S. military installations abroad and U.S. diplomatic or consular facilities abroad are not part of the United States within the meaning of the [Fourteenth] Amendment."
Loss of citizenship
Loss of national citizenship is possible only under the following circumstances:
- Fraud in the naturalization process. Technically, this is not a loss of citizenship but rather a voiding of the purported naturalization and a declaration that the immigrant never was a citizen of the United States.
- Affiliation with "anti-American" organizations (e.g., the Communist party, terrorist organizations, etc.) within 5 years of naturalization. The State Department views such affiliations as sufficient evidence that an applicant must have lied or concealed evidence in the naturalization process.
- Other-than-honorable discharge from the U.S. armed forces before 5 years of honorable service, if honorable service was the basis for the naturalization.
- Voluntary relinquishment of citizenship. This may be accomplished either through renunciation procedures specially established by the State Department or through other actions that demonstrate desire to give up national citizenship.
For much of the country's history, voluntary acquisition or exercise of a foreign citizenship was considered sufficient cause for revocation of national citizenship. This concept was enshrined in a series of treaties between the United States and other countries (the Bancroft Treaties). However, the Supreme Court repudiated this concept in Afroyim v. Rusk (1967) and in Vance v. Terrazas (1980), holding that the Citizenship Clause of the Fourteenth Amendment barred Congress from revoking citizenship. It has been argued, though, that Congress can revoke citizenship that it has previously granted to a person not born in the United States.
Privileges or Immunities Clause
The Privileges or Immunities Clause, which protects the privileges and immunities of national citizenship from interference by the states, was patterned after the Privileges and Immunities Clause of Article IV, which protects the privileges and immunities of state citizenship from interference by other states. In the Slaughter-House Cases (1873), the Supreme Court concluded that the Constitution recognized two separate types of citizenship—"national citizenship" and "state citizenship"—and the Court held that the Privileges or Immunities Clause prohibits states from interfering only with privileges and immunities possessed by virtue of national citizenship. The Court concluded that the privileges and immunities of national citizenship included only those rights that "owe their existence to the Federal government, its National character, its Constitution, or its laws". The Court recognized few such rights, including access to seaports and navigable waterways, the right to run for federal office, the protection of the federal government while on the high seas or in the jurisdiction of a foreign country, the right to travel to the seat of government, the right to peaceably assemble and petition the government, the privilege of the writ of habeas corpus, and the right to participate in the government's administration. This decision has not been overruled and has been specifically reaffirmed several times. Largely as a result of the narrowness of the Slaughter-House opinion, this clause subsequently lay dormant for well over a century.
Despite fundamentally differing views concerning the coverage of the Privileges or Immunities Clause of the Fourteenth Amendment, most notably expressed in the majority and dissenting opinions in the Slaughter-House Cases (1873), it has always been common ground that this Clause protects the third component of the right to travel. Writing for the majority in the Slaughter-House Cases, Justice Miller explained that one of the privileges conferred by this Clause "is that a citizen of the United States can, of his own volition, become a citizen of any State of the Union by a bona fide residence therein, with the same rights as other citizens of that State". (emphasis added)
Justice Miller actually wrote in the Slaughter-House Cases that the right to become a citizen of a state (by residing in that state) "is conferred by the very article under consideration" (emphasis added), rather than by the "clause" under consideration.
In McDonald v. Chicago (2010), Justice Clarence Thomas, while concurring with the majority in incorporating the Second Amendment against the states, declared that he reached this conclusion through the Privileges or Immunities Clause instead of the Due Process Clause. Randy Barnett has referred to Justice Thomas's concurring opinion as a "complete restoration" of the Privileges or Immunities Clause.
Due Process Clause
In Hurtado v. California (1884), the Supreme Court described the clause's meaning as follows:

Due process of law in the [Fourteenth Amendment] refers to that law of the land in each state which derives its authority from the inherent and reserved powers of the state, exerted within the limits of those fundamental principles of liberty and justice which lie at the base of all our civil and political institutions, and the greatest security for which resides in the right of the people to make their own laws, and alter them at their pleasure.
The Due Process Clause of the Fourteenth Amendment applies only against the states, but it is otherwise textually identical to the Due Process Clause of the Fifth Amendment, which applies against the federal government; both clauses have been interpreted to encompass identical doctrines of procedural due process and substantive due process. Procedural due process is the guarantee of a fair legal process when the government tries to interfere with a person's protected interests in life, liberty, or property, and substantive due process is the guarantee that the fundamental rights of citizens will not be encroached on by government. The Due Process Clause of the Fourteenth Amendment also incorporates most of the provisions in the Bill of Rights, which were originally applied against only the federal government, and applies them against the states. The Due Process Clause applies regardless of whether one is a citizen of the United States.
Substantive due process
Beginning with Allgeyer v. Louisiana (1897), the Court interpreted the Due Process Clause as providing substantive protection to private contracts, thus prohibiting a variety of social and economic regulation; this principle was referred to as "freedom of contract". Thus, the Court struck down a law decreeing maximum hours for workers in a bakery in Lochner v. New York (1905) and struck down a minimum wage law in Adkins v. Children's Hospital (1923). In Meyer v. Nebraska (1923), the Court stated that the "liberty" protected by the Due Process Clause
[w]ithout doubt ... denotes not merely freedom from bodily restraint but also the right of the individual to contract, to engage in any of the common occupations of life, to acquire useful knowledge, to marry, establish a home and bring up children, to worship God according to the dictates of his own conscience, and generally to enjoy those privileges long recognized at common law as essential to the orderly pursuit of happiness by free men.
However, the Court did uphold some economic regulation, such as state Prohibition laws (Mugler v. Kansas, 1887), laws declaring maximum hours for mine workers (Holden v. Hardy, 1898), laws declaring maximum hours for female workers (Muller v. Oregon, 1908), and President Woodrow Wilson's intervention in a railroad strike (Wilson v. New, 1917), as well as federal laws regulating narcotics (United States v. Doremus, 1919). The Court repudiated, but did not explicitly overrule, the "freedom of contract" line of cases in West Coast Hotel v. Parrish (1937).
Dissenting in Poe v. Ullman (1961), Justice John Marshall Harlan II described the liberty protected by the clause in broad terms:

[T]he full scope of the liberty guaranteed by the Due Process Clause cannot be found in or limited by the precise terms of the specific guarantees elsewhere provided in the Constitution. This 'liberty' is not a series of isolated points pricked out in terms of the taking of property; the freedom of speech, press, and religion; the right to keep and bear arms; the freedom from unreasonable searches and seizures; and so on. It is a rational continuum which, broadly speaking, includes a freedom from all substantial arbitrary impositions and purposeless restraints, ... and which also recognizes, what a reasonable and sensitive judgment must, that certain interests require particularly careful scrutiny of the state needs asserted to justify their abridgment.
This broad view of liberty was adopted by the Supreme Court in Griswold v. Connecticut (for further information see below). Although the "freedom of contract" described above has fallen into disfavor, by the 1960s, the Court had extended its interpretation of substantive due process to include other rights and freedoms that are not enumerated in the Constitution but that, according to the Court, extend or derive from existing rights. For example, the Due Process Clause is also the foundation of a constitutional right to privacy. The Court first ruled that privacy was protected by the Constitution in Griswold v. Connecticut (1965), which overturned a Connecticut law criminalizing birth control. While Justice William O. Douglas wrote for the majority that the right to privacy was found in the "penumbras" of various provisions in the Bill of Rights, Justices Arthur Goldberg and John Marshall Harlan II wrote in concurring opinions that the "liberty" protected by the Due Process Clause included individual privacy.
The right to privacy was the basis for Roe v. Wade (1973), in which the Court invalidated a Texas law forbidding abortion except to save the mother's life. Like Goldberg's and Harlan's concurring opinions in Griswold, the majority opinion authored by Justice Harry Blackmun located the right to privacy in the Due Process Clause's protection of liberty. The decision disallowed many state and federal abortion restrictions, and it became one of the most controversial in the Court's history. In Planned Parenthood v. Casey (1992), the Court decided that "the essential holding of Roe v. Wade should be retained and once again reaffirmed".
In Lawrence v. Texas (2003), the Court found that a Texas law against same-sex sexual intercourse violated the right to privacy. In Obergefell v. Hodges (2015), the Court ruled that the fundamental right to marriage included same-sex couples being able to marry.
Procedural due process
When the government seeks to burden a person's protected liberty interest or property interest, the Supreme Court has held that procedural due process requires that, at a minimum, the government provide the person notice, an opportunity to be heard at an oral hearing, and a decision by a neutral decision maker. For example, such process is due when a government agency seeks to terminate civil service employees, expel a student from public school, or cut off a welfare recipient's benefits. The Court has also ruled that the Due Process Clause requires judges to recuse themselves in cases where the judge has a conflict of interest. For example, in Caperton v. A.T. Massey Coal Co. (2009), the Court ruled that a justice of the Supreme Court of Appeals of West Virginia had to recuse himself from a case involving a major contributor to his campaign for election to that court.
While many state constitutions are modeled after the United States Constitution and federal laws, those state constitutions did not necessarily include provisions comparable to the Bill of Rights. In Barron v. Baltimore (1833), the Supreme Court unanimously ruled that the Bill of Rights restrained only the federal government, not the states. However, the Supreme Court has subsequently held that most provisions of the Bill of Rights apply to the states through the Due Process Clause of the Fourteenth Amendment under a doctrine called "incorporation".
Whether incorporation was intended by the amendment's framers, such as John Bingham, has been debated by legal historians. According to legal scholar Akhil Reed Amar, the framers and early supporters of the Fourteenth Amendment believed that it would ensure that the states would be required to recognize the same individual rights as the federal government; all of these rights were likely understood as falling within the "privileges or immunities" safeguarded by the amendment.
By the latter half of the 20th century, nearly all of the rights in the Bill of Rights had been applied to the states. The Supreme Court has held that the amendment's Due Process Clause incorporates all of the substantive protections of the First, Second, Fourth, Fifth (except for its Grand Jury Clause) and Sixth Amendments, along with the Excessive Fines Clause and Cruel and Unusual Punishment Clause of the Eighth Amendment. While the Third Amendment has not been applied to the states by the Supreme Court, the Second Circuit ruled that it did apply to the states within that circuit's jurisdiction in Engblom v. Carey. The Seventh Amendment right to jury trial in civil cases has been held not to be applicable to the states, but the amendment's Re-Examination Clause does apply to "a case tried before a jury in a state court and brought to the Supreme Court on appeal".
Equal Protection Clause
The Equal Protection Clause was created largely in response to the lack of equal protection provided by law in states with Black Codes. Under Black Codes, blacks could not sue, give evidence, or be witnesses. They also were punished more harshly than whites. In Strauder v. West Virginia (1880), the Supreme Court said that the Fourteenth Amendment not only gave citizenship and the privileges of citizenship to persons of color, but also denied to any State the power to withhold from them the equal protection of the laws, and authorized Congress to enforce its provisions by appropriate legislation. The Court stated specifically that the Equal Protection Clause was
designed to assure to the colored race the enjoyment of all the civil rights that under the law are enjoyed by white persons, and to give to that race the protection of the general government, in that enjoyment, whenever it should be denied by the States.
The Equal Protection Clause applies to citizens and non-citizens alike. The clause mandates that individuals in similar situations be treated equally by the law. Although the text of the Fourteenth Amendment applies the Equal Protection Clause only against the states, the Supreme Court, since Bolling v. Sharpe (1954), has applied the clause against the federal government through the Due Process Clause of the Fifth Amendment under a doctrine called "reverse incorporation".
In Yick Wo v. Hopkins (1886), the Supreme Court clarified that the terms "person" and "within its jurisdiction" in the Equal Protection Clause are not limited to discrimination against African Americans but extend to other races, colors, and nationalities, such as (in this case) legal aliens in the United States who are Chinese citizens:
These provisions are universal in their application to all persons within the territorial jurisdiction, without regard to any differences of race, of color, or of nationality, and the equal protection of the laws is a pledge of the protection of equal laws.
Persons "within its jurisdiction" are entitled to equal protection from a state. Largely because the Privileges and Immunities Clause of Article IV has from the beginning guaranteed the privileges and immunities of citizens in the several states, the Supreme Court has rarely construed the phrase "within its jurisdiction" in relation to natural persons. In Plyler v. Doe (1982), where the Court held that aliens illegally present in a state are within its jurisdiction and may thus raise equal protection claims the Court explicated the meaning of the phrase "within its jurisdiction" as follows: "[U]se of the phrase 'within its jurisdiction' confirms the understanding that the Fourteenth Amendment's protection extends to anyone, citizen or stranger, who is subject to the laws of a State, and reaches into every corner of a State's territory." The Court reached this understanding among other things from Senator Howard, a member of the Joint Committee of Fifteen, and the floor manager of the amendment in the Senate. Senator Howard was explicit about the broad objectives of the Fourteenth Amendment and the intention to make its provisions applicable to all who "may happen to be" within the jurisdiction of a state:
The last two clauses of the first section of the amendment disable a State from depriving not merely a citizen of the United States, but any person, whoever he may be, of life, liberty, or property without due process of law, or from denying to him the equal protection of the laws of the State. This abolishes all class legislation in the States and does away with the injustice of subjecting one caste of persons to a code not applicable to another. ... It will, if adopted by the States, forever disable every one of them from passing laws trenching upon those fundamental rights and privileges which pertain to citizens of the United States, and to all persons who may happen to be within their jurisdiction. [emphasis added by the U.S. Supreme Court]
The relationship between the Fifth and Fourteenth Amendments was addressed by Justice Field in Wong Wing v. United States (1896). He observed with respect to the phrase "within its jurisdiction": "The term 'person', used in the Fifth Amendment, is broad enough to include any and every human being within the jurisdiction of the republic. A resident, alien born, is entitled to the same protection under the laws that a citizen is entitled to. He owes obedience to the laws of the country in which he is domiciled, and, as a consequence, he is entitled to the equal protection of those laws. ... The contention that persons within the territorial jurisdiction of this republic might be beyond the protection of the law was heard with pain on the argument at the bar—in face of the great constitutional amendment which declares that no State shall deny to any person within its jurisdiction the equal protection of the laws."
The Supreme Court has also decided whether foreign corporations are within the jurisdiction of a state, ruling that a foreign corporation that sued in a state court in which it was not licensed to do business, in order to recover possession of property wrongfully taken from it in another state, was within the jurisdiction and could not be subjected to unequal burdens in the maintenance of the suit. When a state has admitted a foreign corporation to do business within its borders, that corporation is entitled to equal protection of the laws, but not necessarily to identical treatment with domestic corporations.
In Santa Clara County v. Southern Pacific Railroad (1886), Chief Justice Morrison Waite announced from the bench:
The court does not wish to hear argument on the question whether the provision in the Fourteenth Amendment to the Constitution, which forbids a State to deny to any person within its jurisdiction the equal protection of the laws, applies to these corporations. We are all of the opinion that it does.
This dictum, which established that corporations enjoyed personhood under the Equal Protection Clause, was repeatedly reaffirmed by later courts. It remained the predominant view throughout the twentieth century, though it was challenged in dissents by justices such as Hugo Black and William O. Douglas. Between 1890 and 1910, Fourteenth Amendment cases involving corporations vastly outnumbered those involving the rights of blacks, 288 to 19.
In the decades following the adoption of the Fourteenth Amendment, the Supreme Court overturned laws barring blacks from juries (Strauder v. West Virginia, 1880) or discriminating against Chinese Americans in the regulation of laundry businesses (Yick Wo v. Hopkins, 1886) as violations of the Equal Protection Clause. However, in Plessy v. Ferguson (1896), the Supreme Court held that the states could impose racial segregation so long as they provided similar facilities, thereby establishing the "separate but equal" doctrine.
The Court went even further in restricting the Equal Protection Clause in Berea College v. Kentucky (1908), holding that the states could force private actors to discriminate by prohibiting colleges from having both black and white students. By the early 20th century, the Equal Protection Clause had been eclipsed to the point that Justice Oliver Wendell Holmes, Jr. dismissed it as "the usual last resort of constitutional arguments".
The Court held to the "separate but equal" doctrine for more than fifty years, despite numerous cases in which the Court itself had found that the segregated facilities provided by the states were almost never equal, until Brown v. Board of Education (1954) reached the Court. In Brown the Court ruled that even if segregated black and white schools were of equal quality in facilities and teachers, segregation was inherently harmful to black students and so was unconstitutional. Brown met with a campaign of resistance from white Southerners, and for decades the federal courts attempted to enforce Brown's mandate against repeated attempts at circumvention. This resulted in the controversial desegregation busing decrees handed down by federal courts in various parts of the nation. In Parents Involved in Community Schools v. Seattle School District No. 1 (2007), the Court ruled that race could not be the determinative factor in assigning students to public schools.
In Plyler v. Doe (1982) the Supreme Court struck down a Texas statute denying free public education to illegal immigrants as a violation of the Equal Protection Clause of the Fourteenth Amendment because discrimination on the basis of illegal immigration status did not further a substantial state interest. The Court reasoned that illegal aliens and their children, though not citizens of the United States or Texas, are people "in any ordinary sense of the term" and, therefore, are afforded Fourteenth Amendment protections.
In Hernandez v. Texas (1954), the Court held that the Fourteenth Amendment protects those beyond the racial classes of white or "Negro" and extends to other racial and ethnic groups, such as Mexican Americans in this case. In the half-century following Brown, the Court extended the reach of the Equal Protection Clause to other historically disadvantaged groups, such as women and illegitimate children, although it has applied a somewhat less stringent standard than it has applied to governmental discrimination on the basis of race (United States v. Virginia (1996); Levy v. Louisiana (1968)).
The Supreme Court ruled in Regents of the University of California v. Bakke (1978) that affirmative action in the form of racial quotas in public university admissions was a violation of Title VI of the Civil Rights Act of 1964; however, race could be used as one of several factors without violating the Equal Protection Clause or Title VI. In Gratz v. Bollinger (2003) and Grutter v. Bollinger (2003), the Court considered two race-conscious admissions systems at the University of Michigan. The university claimed that its goal in its admissions systems was to achieve racial diversity. In Gratz, the Court struck down a points-based undergraduate admissions system that added points for minority status, finding that its rigidity violated the Equal Protection Clause; in Grutter, the Court upheld a race-conscious admissions process for the university's law school that used race as one of many factors to determine admission. In Fisher v. University of Texas (2013), the Court ruled that before race can be used in a public university's admission policy, there must be no workable race-neutral alternative. In Schuette v. Coalition to Defend Affirmative Action (2014), the Court upheld the constitutionality of a state constitutional prohibition on the state or local use of affirmative action.
Reed v. Reed (1971), which struck down an Idaho probate law favoring men, was the first decision in which the Court ruled that arbitrary gender discrimination violated the Equal Protection Clause. In Craig v. Boren (1976), the Court ruled that statutory or administrative sex classifications had to be subjected to an intermediate standard of judicial review. Reed and Craig later served as precedents to strike down a number of state laws discriminating by gender.
Since Wesberry v. Sanders (1964) and Reynolds v. Sims (1964), the Supreme Court has interpreted the Equal Protection Clause as requiring the states to apportion their congressional districts and state legislative seats according to "one man, one vote". The Court has also struck down redistricting plans in which race was a key consideration. In Shaw v. Reno (1993), the Court prohibited a North Carolina plan aimed at creating majority-black districts to balance historic underrepresentation in the state's congressional delegations.
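The "one man, one vote" rule is at bottom an arithmetic requirement: representational districts must be roughly equal in population. As a minimal, purely illustrative sketch (the district figures below are hypothetical and drawn from no actual case), the following computes how far each district strays from the ideal equal-population size, one simple way of quantifying malapportionment:

```python
# Illustrative only: hypothetical district populations, not data from any case.
def max_deviation(district_pops: list[int]) -> float:
    """Largest relative deviation of any district from the ideal (equal) size."""
    ideal = sum(district_pops) / len(district_pops)  # equal-population target
    return max(abs(pop - ideal) / ideal for pop in district_pops)

districts = [710_000, 698_000, 731_000, 702_000]  # hypothetical
print(f"Largest deviation from ideal district size: {max_deviation(districts):.1%}")
# -> Largest deviation from ideal district size: 2.9%
```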
The Equal Protection Clause served as the basis for the decision in Bush v. Gore (2000), in which the Court ruled that no constitutionally valid recount of Florida's votes in the 2000 presidential election could be held within the needed deadline; the decision effectively secured Bush's victory in the disputed election. In League of United Latin American Citizens v. Perry (2006), the Court ruled that House Majority Leader Tom DeLay's Texas redistricting plan intentionally diluted the votes of Latinos and thus violated the Equal Protection Clause.
State actor doctrine
Before United States v. Cruikshank, 92 U.S. 542 (1876), was decided by the United States Supreme Court, the case was heard as a circuit case (Federal Cases No. 14897). Presiding over the circuit case was Justice Joseph P. Bradley, who wrote at page 710 of Federal Cases No. 14897 regarding the Fourteenth Amendment to the United States Constitution:
It is a guarantee of protection against the acts of the state government itself. It is a guarantee against the exertion of arbitrary and tyrannical power on the part of the government and legislature of the state, not a guarantee against the commission of individual offenses, and the power of Congress, whether express or implied, to legislate for the enforcement of such a guarantee does not extend to the passage of laws for the suppression of crime within the states. The enforcement of the guarantee does not require or authorize Congress to perform 'the duty that the guarantee itself supposes it to be the duty of the state to perform, and which it requires the state to perform.'
The above passage was quoted by the United States Supreme Court in United States v. Harris, 106 U.S. 629 (1883), and was supplemented by a quotation from the majority opinion in United States v. Cruikshank, 92 U.S. 542 (1876), written by Chief Justice Morrison Waite:
The Fourteenth Amendment prohibits a State from depriving any person of life, liberty, or property without due process of law, and from denying to any person within its jurisdiction the equal protection of the laws, but it adds nothing to the rights of one citizen as against another. It simply furnishes an additional guaranty against any encroachment by the States upon the fundamental rights which belong to every citizen as a member of society. The duty of protecting all its citizens in the enjoyment of an equality of rights was originally assumed by the States, and it still remains there. The only obligation resting upon the United States is to see that the States do not deny the right. This the Amendment guarantees, but no more. The power of the National Government is limited to the enforcement of this guaranty.
Individual liberties guaranteed by the United States Constitution, other than the Thirteenth Amendment's ban on slavery, protect not against actions by private persons or entities but only against actions by government officials. Regarding the Fourteenth Amendment, the Supreme Court ruled in Shelley v. Kraemer (1948): "[T]he action inhibited by the first section of the Fourteenth Amendment is only such action as may fairly be said to be that of the States. That Amendment erects no shield against merely private conduct, however discriminatory or wrongful." The Court had earlier held in the Civil Rights Cases (1883): "It is State action of a particular character that is prohibited. Individual invasion of individual rights is not the subject matter of the amendment. It has a deeper and broader scope. It nullifies and makes void all State legislation, and State action of every kind, which impairs the privileges and immunities of citizens of the United States, or which injures them in life, liberty, or property without due process of law, or which denies to any of them the equal protection of the laws."
Vindication of federal constitutional rights is limited to those situations where there is "state action", that is, action by government officials who are exercising their governmental power. In Ex parte Virginia (1880), the Supreme Court found that the prohibitions of the Fourteenth Amendment "have reference to actions of the political body denominated a State, by whatever instruments or in whatever modes that action may be taken. A State acts by its legislative, its executive, or its judicial authorities. It can act in no other way. The constitutional provision, therefore, must mean that no agency of the State, or of the officers or agents by whom its powers are exerted, shall deny to any person within its jurisdiction the equal protection of the laws. Whoever, by virtue of public position under a State government, deprives another of property, life, or liberty, without due process of law, or denies or takes away the equal protection of the laws, violates the constitutional inhibition; and as he acts in the name and for the State, and is clothed with the State's power, his act is that of the State."
There are, however, instances where people are the victims of civil-rights violations that occur in circumstances involving both government officials and private actors. In the 1960s, the United States Supreme Court adopted an expansive view of state action, opening the door to wide-ranging civil-rights litigation against private actors when they act as state actors (i.e., when their acts are done or otherwise "sanctioned in some way" by the state). The Court found that the state action doctrine is equally applicable to denials of privileges or immunities, due process, and equal protection of the laws.
The critical factor in determining the existence of state action is not governmental involvement with private persons or private corporations, but "the inquiry must be whether there is a sufficiently close nexus between the State and the challenged action of the regulated entity so that the action of the latter may be fairly treated as that of the State itself". "Only by sifting facts and weighing circumstances can the nonobvious involvement of the State in private conduct be attributed its true significance."
The Supreme Court asserted that plaintiffs must establish not only that a private party "acted under color of the challenged statute, but also that its actions are properly attributable to the State". "And the actions are to be attributable to the State apparently only if the State compelled the actions and not if the State merely established the process through statute or regulation under which the private party acted."
The rules developed by the Supreme Court for business regulation are that (1) the "mere fact that a business is subject to state regulation does not by itself convert its action into that of the State for purposes of the Fourteenth Amendment",[a] and (2) "a State normally can be held responsible for a private decision only when it has exercised coercive power or has provided such significant encouragement, either overt or covert, that the choice must be deemed to be that of the State".[b]
Apportionment of representation in the House of Representatives
Under Article I, Section 2, Clause 3, the basis of representation of each state in the House of Representatives was determined by adding three-fifths of each state's slave population to its free population. Because slavery (except as punishment for crime) had been abolished by the Thirteenth Amendment, the freed slaves would henceforth be given full weight for purposes of apportionment. This situation was a concern to the Republican leadership of Congress, who worried that it would increase the political power of the former slave states, even as they continued to deny freed slaves the right to vote.
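To make the apportionment arithmetic concrete, here is a minimal sketch (the population figures are hypothetical, used only for illustration) of how a state's representation basis changes once the freed slaves are counted in full rather than at three-fifths:

```python
# Illustrative only: hypothetical population figures, not census data.
def apportionment_basis(free_pop: int, enslaved_pop: int, three_fifths: bool) -> int:
    """Population counted toward a state's House representation."""
    if three_fifths:
        # Article I, Section 2, Clause 3: free persons plus three-fifths of slaves
        return free_pop + (3 * enslaved_pop) // 5
    # After abolition, all persons count fully toward apportionment
    return free_pop + enslaved_pop

state = {"free_pop": 400_000, "enslaved_pop": 300_000}  # hypothetical
before = apportionment_basis(**state, three_fifths=True)   # 580,000
after = apportionment_basis(**state, three_fifths=False)   # 700,000
print(f"Basis rises from {before:,} to {after:,} (+{after - before:,})")
```

In this hypothetical, the state's apportionment basis grows by roughly a fifth, which is precisely the kind of increase in former-slave-state representation that worried the Republican leadership.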
Two solutions were considered:
- reduce the Congressional representation of the former slave states (for example, by basing representation on the number of legal voters rather than the number of inhabitants)
- guarantee freed slaves the right to vote
On January 31, 1866, the House of Representatives voted in favor of a proposed constitutional amendment that would reduce a state's representation in the House in proportion to the extent to which that state used "race or color" as a basis to deny the right to vote. The amendment failed in the Senate, partly because radical Republicans foresaw that states would be able to use ostensibly race-neutral criteria, such as educational and property qualifications, to disenfranchise the freed slaves without negative consequence. So the amendment was changed to penalize states in which the vote was denied to male citizens over twenty-one for any reason other than participation in crime. Later, the Fifteenth Amendment was adopted to guarantee that the right to vote could not be denied based on race or color.
The effect of Section 2 was twofold:
- Although the three-fifths clause was not formally repealed, it was effectively removed from the Constitution. In the words of the Supreme Court in Elk v. Wilkins, Section 2 "abrogated so much of the corresponding clause of the original Constitution as counted only three-fifths of such persons [slaves]".
- It was intended to penalize, by means of reduced Congressional representation, states that withheld the franchise from adult male citizens for any reason other than participation in crime. This, it was hoped, would induce the former slave states to recognize the political rights of the former slaves, without directly forcing them to do so—something that it was thought the states would not accept.
The first reapportionment after the enactment of the Fourteenth Amendment occurred in 1873, based on the 1870 census. Congress appears to have attempted to enforce the provisions of Section 2, but was unable to identify enough disenfranchised voters to make a difference to any state's representation. In the implementing statute, Congress added a provision stating that
should any state, after the passage of this Act, deny or abridge the right of any of the male inhabitants of such State, being twenty-one years of age, and citizens of the United States, to vote at any election named in the amendments to the Constitution, article fourteen, section two, except for participation in rebellion or other crime, the number of Representatives apportioned in this act to such State shall be reduced in the proportion which the number of such male citizens shall have to the whole number of male citizens twenty-one years of age in such State.
A nearly identical provision remains in federal law to this day.
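The penalty the statute prescribes is straightforward proportional arithmetic. As a rough sketch (the figures are hypothetical, and the statute specifies only the proportion, not a rounding rule, so rounding down here is an assumption), the reduction would work like this:

```python
import math

# Illustrative only: hypothetical figures; flooring is an assumption, since the
# 1872 statute states the proportion but not how fractional seats are handled.
def penalized_seats(seats: int, denied: int, eligible_males_21: int) -> int:
    """Reduce House seats in the proportion that disenfranchised adult male
    citizens bear to all adult male citizens, per the statute's terms."""
    fraction_denied = denied / eligible_males_21
    return math.floor(seats * (1 - fraction_denied))

# A hypothetical state with 10 seats denying the vote to 40% of adult male citizens
print(penalized_seats(seats=10, denied=200_000, eligible_males_21=500_000))  # 6
```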
Despite this legislation, in subsequent reapportionments, no change has ever been made to any state's Congressional representation on the basis of the Amendment. Bonfield, writing in 1960, suggested that "[t]he hot political nature of such proposals has doomed them to failure". Aided by this lack of enforcement, southern states continued to use pretexts to prevent many blacks from voting until the passage of the Voting Rights Act of 1965.
In the Fourth Circuit case of Saunders v. Wilkins (1945), Saunders claimed that Virginia should have its Congressional representation reduced because of its use of a poll tax and other voting restrictions. The plaintiff sued for the right to run for Congress at large in the state, rather than in one of its designated Congressional districts. The lawsuit was dismissed as a political question.
Influence on voting rights
In Minor v. Happersett (1875), the Supreme Court cited Section 2 as supporting its conclusion that the right to vote was not among the "privileges and immunities of citizenship" protected by Section 1. Women would not achieve equal voting rights throughout the United States until the adoption of the Nineteenth Amendment in 1920.
In Hunter v. Underwood (1985), a case involving the disenfranchisement of black misdemeanants, the Supreme Court concluded that the Tenth Amendment cannot save legislation prohibited by the subsequently enacted Fourteenth Amendment. More specifically, the Court concluded that laws passed with a discriminatory purpose are not excepted from the operation of the Equal Protection Clause by the "other crime" provision of Section 2. The Court held that Section 2 "was not designed to permit the purposeful racial discrimination [...] which otherwise violates [Section] 1 of the Fourteenth Amendment."
Abolitionist leaders criticized the amendment's failure to specifically prohibit the states from denying people the right to vote on the basis of race.
Section 2 protects the right to vote only of adult males, not adult females, making it the only provision of the Constitution to explicitly discriminate on the basis of sex. Section 2 was condemned by women's suffragists, such as Elizabeth Cady Stanton and Susan B. Anthony, who had long seen their cause as linked to that of black rights. The separation of black civil rights from women's civil rights split the two movements for decades.
Participants in rebellion
Section 3 prohibits the election or appointment to any federal or state office of any person who had held any of certain offices and then engaged in insurrection, rebellion, or treason. However, a two-thirds vote by each House of the Congress can override this limitation. In 1898, the Congress enacted a general removal of Section 3's limitation. In 1975, the citizenship of Confederate general Robert E. Lee was restored by a joint congressional resolution, retroactive to June 13, 1865. In 1978, pursuant to Section 3, the Congress posthumously removed the service ban from Confederate president Jefferson Davis.
Section 3 was used to prevent Socialist Party of America member Victor L. Berger, convicted of violating the Espionage Act for his anti-militarist views, from taking his seat in the House of Representatives in 1919 and 1920.
Validity of public debt
Section 4 confirmed the legitimacy of all public debt appropriated by the Congress. It also confirmed that neither the United States nor any state would pay for the loss of slaves or debts that had been incurred by the Confederacy. For example, during the Civil War several British and French banks had lent large sums of money to the Confederacy to support its war against the Union. In Perry v. United States (1935), the Supreme Court ruled that under Section 4 voiding a United States bond "went beyond the congressional power."
The debt-ceiling crises of 2011 and 2013 raised the question of the scope of the President's authority under Section 4. Some, such as legal scholar Garrett Epps, fiscal expert Bruce Bartlett, and Treasury Secretary Timothy Geithner, have argued that a debt ceiling may be unconstitutional, and therefore void, insofar as it interferes with the duty of the government to pay interest on outstanding bonds and to make payments owed to pensioners (that is, Social Security and Railroad Retirement Act recipients). Legal analyst Jeffrey Rosen has argued that Section 4 gives the President unilateral authority to raise or ignore the national debt ceiling, and that if challenged the Supreme Court would likely rule in favor of expanded executive power or dismiss the case altogether for lack of standing. Erwin Chemerinsky, professor and dean at University of California, Irvine School of Law, has argued that not even in a "dire financial emergency" could the President raise the debt ceiling as "there is no reasonable way to interpret the Constitution that [allows him to do so]". Jack Balkin, Knight Professor of Constitutional Law at Yale University, opined that like Congress the President is bound by the Fourteenth Amendment, for otherwise, he could violate any part of the amendment at will. Because the President must obey the Section 4 requirement not to put the validity of the public debt into question, Balkin argued that President Obama is obliged "to prioritize incoming revenues to pay the public debt: interest on government bonds and any other 'vested' obligations. What falls into the latter category is not entirely clear, but a large number of other government obligations—and certainly payments for future services—would not count and would have to be sacrificed. This might include, for example, Social Security payments."
Power of enforcement
The opinion of the Supreme Court in the Slaughter-House Cases, 83 U.S. (16 Wall.) 36 (1873), discussed the Reconstruction Amendments and the Fourteenth Amendment's Section 5 Enforcement Clause in light of that amendment's Equal Protection Clause:
In the light of the history of these amendments, and the pervading purpose of them, which we have already discussed, it is not difficult to give a meaning to this clause. The existence of laws in the States where the newly emancipated negroes resided, which discriminated with gross injustice and hardship against them as a class, was the evil to be remedied by this clause, and by it such laws are forbidden. If, however, the States did not conform their laws to its requirements, then by the fifth section of the article of amendment Congress was authorized to enforce it by suitable legislation.
Section 5, also known as the Enforcement Clause of the Fourteenth Amendment, enables Congress to pass laws enforcing the amendment's other provisions. In the Civil Rights Cases (1883), the Supreme Court interpreted Section 5 narrowly, stating that "the legislation which Congress is authorized to adopt in this behalf is not general legislation upon the rights of the citizen, but corrective legislation". In other words, the amendment authorizes Congress to pass laws only to combat violations of the rights protected in other sections.
In Katzenbach v. Morgan (1966), the Court upheld Section 4(e) of the Voting Rights Act of 1965, which prohibits certain forms of literacy requirements as a condition to vote, as a valid exercise of Congressional power under Section 5 to enforce the Equal Protection Clause. The Court ruled that Section 5 enabled Congress to act both remedially and prophylactically to protect the rights guaranteed by the amendment. However, in City of Boerne v. Flores (1997), the Court narrowed Congress's enforcement power, holding that Congress may not enact legislation under Section 5 that substantively defines or interprets Fourteenth Amendment rights. The Court ruled that legislation is valid under Section 5 only if there is a "congruence and proportionality" between the injury to a person's Fourteenth Amendment right and the means Congress adopted to prevent or remedy that injury.
Selected Supreme Court cases
Privileges or immunities
- 1833: Barron v. Baltimore
- 1873: Slaughter-House Cases
- 1883: Civil Rights Cases
- 1884: Hurtado v. California
- 1897: Chicago, Burlington & Quincy Railroad v. Chicago
- 1900: Maxwell v. Dow
- 1908: Twining v. New Jersey
- 1925: Gitlow v. New York
- 1932: Powell v. Alabama
- 1937: Palko v. Connecticut
- 1947: Adamson v. California
- 1952: Rochin v. California
- 1961: Mapp v. Ohio
- 1962: Robinson v. California
- 1963: Gideon v. Wainwright
- 1964: Malloy v. Hogan
- 1967: Reitman v. Mulkey
- 1968: Duncan v. Louisiana
- 1969: Benton v. Maryland
- 1970: Goldberg v. Kelly
- 1972: Furman v. Georgia
- 1974: Goss v. Lopez
- 1975: O'Connor v. Donaldson
- 1976: Gregg v. Georgia
- 2010: McDonald v. Chicago
- 2019: Timbs v. Indiana
Substantive due process
- 1876: Munn v. Illinois
- 1887: Mugler v. Kansas
- 1897: Allgeyer v. Louisiana
- 1905: Lochner v. New York
- 1908: Muller v. Oregon
- 1923: Adkins v. Children's Hospital
- 1923: Meyer v. Nebraska
- 1925: Pierce v. Society of Sisters
- 1934: Nebbia v. New York
- 1937: West Coast Hotel Co. v. Parrish
- 1965: Griswold v. Connecticut
- 1973: Roe v. Wade
- 1992: Planned Parenthood v. Casey
- 1996: BMW of North America, Inc. v. Gore
- 1997: Washington v. Glucksberg
- 2003: State Farm v. Campbell
- 2003: Lawrence v. Texas
- 2015: Obergefell v. Hodges
Equal protection
- 1880: Strauder v. West Virginia
- 1886: Yick Wo v. Hopkins
- 1886: Santa Clara County v. Southern Pacific Railroad
- 1896: Plessy v. Ferguson
- 1908: Berea College v. Kentucky
- 1917: Buchanan v. Warley
- 1942: Skinner v. Oklahoma
- 1944: Korematsu v. United States
- 1948: Shelley v. Kraemer
- 1954: Hernandez v. Texas
- 1954: Brown v. Board of Education
- 1962: Baker v. Carr
- 1967: Loving v. Virginia
- 1971: Reed v. Reed
- 1971: Palmer v. Thompson
- 1972: Eisenstadt v. Baird
- 1973: San Antonio Independent School District v. Rodriguez
- 1976: Examining Board v. Flores de Otero
- 1978: Regents of the University of California v. Bakke
- 1982: Mississippi University for Women v. Hogan
- 1986: Posadas de Puerto Rico Associates v. Tourism Company of Puerto Rico
- 1996: United States v. Virginia
- 1996: Romer v. Evans
- 2000: Bush v. Gore
Power of enforcement
- 1883: Civil Rights Cases
- 1966: Katzenbach v. Morgan
- 1997: City of Boerne v. Flores
- 1999: Florida Prepaid Postsecondary Education Expense Board v. College Savings Bank
- 2000: United States v. Morrison
- 2000: Kimel v. Florida Board of Regents
- 2001: Board of Trustees of the University of Alabama v. Garrett
- 2003: Nevada Department of Human Resources v. Hibbs
- 2004: Tennessee v. Lane
- 2013: Shelby County v. Holder
- "Constitution of the United States: Amendments 11–27". National Archives and Records Administration. Archived from the original on May 26, 2013. Retrieved June 11, 2013.
- Goldstone 2011, p. 22.
- Stromberg, "A Plain Folk Perspective" (2002), p. 111.
- Nelson, William E. (1988). The Fourteenth Amendment: From Political Principle to Judicial Doctrine. Harvard University Press. p. 47. ISBN 9780674041424. Retrieved June 6, 2013.
- Stromberg, "A Plain Folk Perspective" (2002), p. 112.
- Foner, Eric (June 1, 1997). Reconstruction. pp. 199–200. ISBN 978-0-8071-2234-1.
- Foner 1988, pp. 250–251.
- Castel, Albert E. (1979). The Presidency of Andrew Johnson. American Presidency. Lawrence, Kan.: The Regents Press of Kansas. p. 70. ISBN 978-0-7006-0190-5.
- Castel, Albert E. (1979). The Presidency of Andrew Johnson. American Presidency. Lawrence, Kan.: The Regents Press of Kansas. p. 71. ISBN 978-0-7006-0190-5.
- Rosen, Jeffrey. The Supreme Court: The Personalities and Rivalries That Defined America, p. 79 (MacMillan 2007).
- Newman, Roger. The Constitution and its Amendments, Vol. 4, p. 8 (Macmillan 1999).
- Goldstone 2011, pp. 22–23.
- Soifer, "Prohibition of Voluntary Peonage" (2012), p. 1614.
- Foner 1988, p. 252.
- Foner 1988, p. 253.
- James J. Kilpatrick, ed. (1961). The Constitution of the United States and Amendments Thereto. Virginia Commission on Constitutional Government. p. 44.
- McPherson, Edward (Clerk of the House of Representatives of the United States). A Handbook of Politics for 1868, Part I: Political Manual for 1866, VI: Votes on Proposed Constitutional Amendments. Washington City: Philp & Solomons, 1868, p. 102.
- Carter, Dan. When the War Was Over: The Failure of Self-Reconstruction in the South, 1865–1867, pp. 242–243 (LSU Press 1985).
- Graber, "Subtraction by Addition?" (2012), pp. 1501–1502.
- "The Civil War And Reconstruction". Retrieved January 8, 2016.
- An Act to provide for the more efficient Government of the Rebel States, enacted March 2, 1867, 14 Stat. 428, 429
- "Amendment XIV". US Government Printing Office. Archived from the original on February 2, 2014. Retrieved June 23, 2013.
- A Century of Lawmaking for a New Nation: U.S. Congressional Documents and Debates, 1774–1875. Library of Congress. p. 707.
- Killian, Johnny H.; et al. (2004). The Constitution of the United States of America: Analysis and Interpretation: Analysis of Cases Decided by the Supreme Court of the United States to June 28, 2002. Government Printing Office. p. 31. ISBN 9780160723797.
- A Century of Lawmaking for a New Nation: U.S. Congressional Documents and Debates, 1774–1875. Library of Congress. p. 709.
- A Century of Lawmaking for a New Nation: U.S. Congressional Documents and Debates, 1774–1875. Library of Congress. p. 710.
- A Century of Lawmaking for a New Nation: U.S. Congressional Documents and Debates, 1774–1875. Library of Congress. p. 708.
- A Century of Lawmaking for a New Nation: U.S. Congressional Documents and Debates, 1774–1875. Library of Congress. p. 711.
- "Amendment of 1868 Ratified by Maryland". The New York Times. April 5, 1959. p. 71. ProQuest 114922297.
- Civil Rights Cases, 109 U.S. 3 (1883).
- "Civil Rights Cases (1883)". Pearson Education, Inc., publishing as Pearson Prentice Hall. Pearson Education. 2005. Retrieved October 23, 2013.
- Graber, "Subtraction by Addition?" (2012), p. 1523.
- Goldstone 2011, pp. 23–24.
- Eric Foner, "The Second American Revolution", In These Times, September 1987; reprinted in Civil Rights Since 1787, ed. Jonathan Birnbaum & Clarence Taylor, NYU Press, 2000. ISBN 0814782493
- Finkelman, Paul (2003). "John Bingham and the Background to the Fourteenth Amendment" (PDF). Akron Law Review. 36 (671). Retrieved April 2, 2009.
- Harrell, David and Gaustad, Edwin. Unto A Good Land: A History Of The American People, Volume 1, p. 520 (Eerdmans Publishing, 2005): "The most important, and the one that has occasioned the most litigation over time as to its meaning and application, was Section One."
- Stephenson, D. The Waite Court: Justices, Rulings, and Legacy, p. 147 (ABC-CLIO, 2003).
- Tsesis, Alexander (2008). "The Inalienable Core of Citizenship: From Dred Scott to the Rehnquist Court". Arizona State Law Journal. 39. SSRN 1023809.
- McDonald v. Chicago, 130 S. Ct. 3020, 3060 (2010) ("This [clause] unambiguously overruled this Court's contrary holding in Dred Scott.")
- "The Atlantic Argument: Trump Is Trying to Change 'What it Means to Be American'". The Atlantic. November 8, 2018. Retrieved March 18, 2020.
- Garrett Epps (Professor of constitutional law at the University of Baltimore) (October 30, 2018). "Ideas: The Citizenship Clause Means What It Says". The Atlantic. Archived from the original on March 7, 2020. Retrieved March 18, 2020.
- Jones v. Mayer, 392 U.S. 409 (1968).
- Yen, Chin-Yung. Rights of citizens and persons under the Fourteenth amendment, page 7 (New Era Printing Company 1905).
- Messner, Emily. "Born in the U.S.A. (Part I)", The Debate, The Washington Post (March 30, 2006). Archived November 6, 2011, at the Wayback Machine
- Pear, Robert (August 7, 1996). "Citizenship Proposal Faces Obstacle in the Constitution". The New York Times.
- Magliocca, Gerard N. (2007). "Indians and Invaders: The Citizenship Clause and Illegal Aliens". University of Pennsylvania Journal of Constitutional Law. 10: 499–526. SSRN 965268.
- Foner, Eric (August 27, 2015). "Birthright Citizenship Is the Good Kind of American Exceptionalism". The Nation. The Nation. Retrieved November 12, 2015.
- LaFantasie, Glenn (March 20, 2011) The erosion of the Civil War consensus, Salon Archived March 23, 2011, at the Wayback Machine
- Congressional Globe, 1st Session, 39th Congress, pt. 4, p. 2893 Senator Reverdy Johnson said in the debate: "Now, all this amendment provides is, that all persons born in the United States and not subject to some foreign Power—for that, no doubt, is the meaning of the committee who have brought the matter before us—shall be considered as citizens of the United States ... If there are to be citizens of the United States entitled everywhere to the character of citizens of the United States, there should be some certain definition of what citizenship is, what has created the character of citizen as between himself and the United States, and the amendment says citizenship may depend upon birth, and I know of no better way to give rise to citizenship than the fact of birth within the territory of the United States, born of parents who at the time were subject to the authority of the United States."
- Congressional Globe, 1st Session, 39th Congress, pt. 4, p. 2897.
- Congressional Globe, 1st Session, 39th Congress, pt. 1, p. 572.
- Congressional Globe, 1st Session, 39th Congress, pt. 4, pp. 2890,2892–4,2896.
- Congressional Globe, 1st Session, 39th Congress, pt. 4, p. 2893. Trumbull, during the debate, said, "What do we [the committee reporting the clause] mean by 'subject to the jurisdiction of the United States'? Not owing allegiance to anybody else. That is what it means." He then proceeded to expound upon what he meant by "complete jurisdiction": "Can you sue a Navajoe Indian in court? ... We make treaties with them, and therefore they are not subject to our jurisdiction.... If we want to control the Navajoes or any other Indians of which the Senator from Wisconsin has spoken, how do we do it? Do we pass a law to control them? Are they subject to our jurisdiction in that sense? ... Would he [Sen. Doolittle] think of punishing them for instituting among themselves their own tribal regulations? Does the Government of the United States pretend to take jurisdiction of murders and robberies and other crimes committed by one Indian upon another? ... It is only those persons who come completely within our jurisdiction, who are subject to our laws, that we think of making citizens."
- Congressional Globe, 1st Session, 39th Congress, pt. 4, p. 2895. Howard additionally stated the word jurisdiction meant "the same jurisdiction in extent and quality as applies to every citizen of the United States now" and that the U.S. possessed a "full and complete jurisdiction" over the person described in the amendment.
- Elk v. Wilkins, 112 U.S. 94 (1884).
- Urofsky, Melvin I.; Finkelman, Paul (2002). A March of Liberty: A Constitutional History of the United States. 1 (2nd ed.). New York, NY: Oxford University Press. ISBN 978-0-19-512635-8.
- Reid, Kay (September 22, 2012). "Multilayered loyalties: Oregon Indian women as citizens of the land, their tribal nations, and the United States". Oregon Historical Quarterly. Archived from the original on September 4, 2013. Retrieved July 18, 2013.
- Congressional Globe, 1st Session, 39th Congress, pt. 1, p. 2893. From the debate on the Civil Rights Act:
Mr. Johnson: "... Who is a citizen of the United States is an open question. The decision of the courts and doctrine of the commentators is, that every man who is a citizen of the State becomes ipso facto a citizen of the United States; but there is no definition as to how citizenship can exist in the United States except through the medium of a citizenship in a State ..."
- Congressional Globe, 1st Session, 39th Congress, pt. 1, p. 498. The debate on the Civil Rights Act contained the following exchange:
Mr. Cowan: "I will ask whether it will not have the effect of naturalizing the children of Chinese and Gypsies born in this country?"
Mr. Trumbull: "Undoubtedly."
Mr. Trumbull: "I understand that under the naturalization laws the children who are born here of parents who have not been naturalized are citizens. This is the law, as I understand it, at the present time. Is not the child born in this country of German parents a citizen? I am afraid we have got very few citizens in some of the counties of good old Pennsylvania if the children born of German parents are not citizens."
Mr. Cowan: "The honorable Senator assumes that which is not the fact. The children of German parents are citizens; but Germans are not Chinese; Germans are not Australians, nor Hottentots, nor anything of the kind. That is the fallacy of his argument."
Mr. Trumbull: "If the Senator from Pennsylvania will show me in the law any distinction made between the children of German parents and the children of Asiatic parents, I may be able to appreciate the point which he makes; but the law makes no such distinction; and the child of an Asiatic is just as much of a citizen as the child of a European."
- Congressional Globe, 1st Session, 39th Congress, pt. 4, pp. 2891–2892 During the debate on the Amendment, Senator John Conness of California declared, "The proposition before us, I will say, Mr. President, relates simply in that respect to the children begotten of Chinese parents in California, and it is proposed to declare that they shall be citizens. We have declared that by law [the Civil Rights Act]; now it is proposed to incorporate that same provision in the fundamental instrument of the nation. I am in favor of doing so. I voted for the proposition to declare that the children of all parentage, whatever, born in California, should be regarded and treated as citizens of the United States, entitled to equal Civil Rights with other citizens."
- "Veto of the Civil Rights Bill | Teaching American History".
- Congressional Globe, 1st Session, 39th Congress, pt. 1, p. 2891. From the debate on the Civil Rights Act:
Mr. Cowan: "Therefore I think, before we assert broadly that everybody who shall be born in the United States shall be taken to be citizen of the United States, we ought to exclude others besides Indians not taxed, because I look upon Indians not taxed as being much less dangerous and much less pestiferous to a society than I look upon Gypsies. I do not know how many my honorable friend from California looks upon Chinese, but I do know how some of his fellow citizens regard them. I have no doubt that now they are useful, and I have no doubt that within proper restraints, allowing that State and the other Pacific States to manage them as they may see fit, they may be useful; but I would not tie their hands by the Constitution of the United States so as to prevent them hereafter from dealing with them as in their wisdom they see fit ..."
- Lee, Margaret. "Birthright Citizenship Under the 14th Amendment of Persons Born in the United States to Alien Parents", Congressional Research Service (August 12, 2010): "Over the last decade or so, concern about illegal immigration has sporadically led to a re-examination of a long-established tenet of U.S. citizenship, codified in the Citizenship Clause of the Fourteenth Amendment of the U.S. Constitution and §301(a) of the Immigration and Nationality Act (INA) (8 U.S.C. §1401(a)), that a person who is born in the United States, subject to its jurisdiction, is a citizen of the United States regardless of the race, ethnicity, or alienage of the parents. ... some scholars argue that the Citizenship Clause of the Fourteenth Amendment should not apply to the children of unauthorized aliens because the problem of unauthorized aliens did not exist at the time the Fourteenth Amendment was considered in Congress and ratified by the states."
- Peter Grier (August 10, 2010). "14th Amendment: why birthright citizenship change 'can't be done'". Christian Science Monitor. Archived from the original on December 28, 2012. Retrieved June 12, 2013.
- United States v. Wong Kim Ark, 169 U.S. 649 (1898).
- Rodriguez, C.M. (2009). "The Second Founding: The Citizenship Clause, Original Meaning, and the Egalitarian Unity of the Fourteenth Amendment [PDF]" (PDF). U. Pa. J. Const. L. 11: 1363–1475. Archived from the original (PDF) on July 15, 2011. Retrieved January 20, 2011.
- "8 FAM 301.1-3 Not Included in the Meaning of 'In the United States'". United States Department of State. Retrieved July 18, 2018.
- U.S. Department of State (February 1, 2008). "Advice about Possible Loss of U.S. Citizenship and Dual Nationality". Archived from the original on April 16, 2009. Retrieved April 17, 2009.
- For example, see Perez v. Brownell, 356 U.S. 44 (1958), overruled by Afroyim v. Rusk, 387 U.S. 253 (1967).
- Afroyim v. Rusk, 387 U.S. 253 (1967).
- Vance v. Terrazas, 444 U.S. 252 (1980).
- Yoo, John. Survey of the Law of Expatriation, Memorandum Opinion for the Solicitor General (June 12, 2002). Archived June 6, 2013, at the Wayback Machine
- Slaughter-House Cases, 83 U.S. 36 (1873).
- Beatty, Jack (April 8, 2008). Age of Betrayal: The Triumph of Money in America, 1865–1900. New York: Vintage Books. p. 135. ISBN 978-1400032426. Retrieved July 19, 2013.
- e.g., United States v. Morrison, 529 U.S. 598 (2000).
- Shaman, Jeffrey. Constitutional Interpretation: Illusion and Reality, p. 248 (Greenwood Publishing 2001).
- Saenz v. Roe, 526 U.S. 489 (1999).
- Bogen, David. Privileges and Immunities: A Reference Guide to the United States Constitution, p. 104 (Greenwood Publishing 2003).
- Barnett, Randy. Privileges or Immunities Clause alive again.
- Hurtado v. California, 110 U.S. 516 (1884).
- Curry, James A.; Riley, Richard B.; Battiston, Richard M. (2003). "6". Constitutional Government: The American Experience. Kendall/Hunt Publishing Company. p. 210. ISBN 978-0-7872-9870-8. Retrieved July 14, 2013.
- Gupta, Gayatri (2009). "Due process". In Folsom, W. Davis; Boulware, Rick (eds.). Encyclopedia of American Business. Infobase. p. 134.
- Cord, Robert L. (1987). "The Incorporation Doctrine and Procedural Due Process Under the Fourteenth Amendment: An Overview". Brigham Young University Law Review (3): 868. Retrieved July 14, 2013.
- Allgeyer v. Louisiana, 165 U.S. 578 (1897).
- "Due Process of Law – Substantive Due Process". West's Encyclopedia of American Law. Thomson Gale. 1998.
- Lochner v. New York, 198 U.S. 45 (1905).
- Adkins v. Children's Hospital, 261 U.S. 525 (1923).
- Meyer v. Nebraska, 262 U.S. 390 (1923).
- "CRS Annotated Constitution". Cornell University Law School Legal Information Institute. Archived from the original on November 10, 2013. Retrieved June 12, 2013.
- Mugler v. Kansas, 123 U.S. 623 (1887).
- Holden v. Hardy, 169 U.S. 366 (1898).
- Muller v. Oregon, 208 U.S. 412 (1908).
- Wilson v. New, 243 U.S. 332 (1917).
- United States v. Doremus, 249 U.S. 86 (1919).
- West Coast Hotel v. Parrish, 300 U.S. 379 (1937).
- Poe v. Ullman, 367 U.S. 497 (1961), at 543
- Planned Parenthood of Southeastern Pa. v. Casey, 505 U.S. 833, at 849
- Griswold v. Connecticut, 381 U.S. 479 (1965)
- Griswold v. Connecticut. Encyclopedia of the American Constitution. – via HighBeam Research (subscription required). January 1, 2000. Archived from the original on September 5, 2013. Retrieved June 16, 2013.
- Roe v. Wade, 410 U.S. 113 (1973).
- Roe v. Wade 410 U.S. 113 (1973) Doe v. Bolton 410 U.S. 179 (1973). Encyclopedia of the American Constitution. – via HighBeam Research (subscription required). January 1, 2000. Archived from the original on June 10, 2014. Retrieved June 16, 2013.
- Planned Parenthood v. Casey, 505 U.S. 833 (1992).
- Casey, 505 U.S. at 845–846.
- Lawrence v. Texas, 539 U.S. 558 (2003).
- Spindelman, Marc (June 1, 2004). "Surviving Lawrence v. Texas". Michigan Law Review. Archived from the original on June 10, 2014. Retrieved June 16, 2013.
- Howe, Amy (June 26, 2015). "In historic decision, Court strikes down state bans on same-sex marriage: In Plain English". SCOTUSblog. Retrieved July 8, 2015.
- White, Bradford (2008). Procedural Due Process in Plain English. National Trust for Historic Preservation. ISBN 978-0-89133-573-3.
- See also Mathews v. Eldridge (1976).
- Caperton v. A.T. Massey Coal Co., 556 U.S. 868 (2009).
- Jess Bravin; Kris Maher (June 8, 2009). "Justices Set New Standard for Recusals". The Wall Street Journal. Retrieved June 9, 2009.
- Barron v. Baltimore, 32 U.S. 243 (1833).
- Levy, Leonard W. (January 2000). Barron v. City of Baltimore 7 Peters 243 (1833). Encyclopedia of the American Constitution. – via HighBeam Research (subscription required). Archived from the original on March 29, 2015. Retrieved June 13, 2013.
- Foster, James C. (2006). "Bingham, John Armor". In Finkelman, Paul (ed.). Encyclopedia of American Civil Liberties. CRC Press. p. 145. ISBN 9780415943420.
- Amar, Akhil Reed (1992). "The Bill of Rights and the Fourteenth Amendment". Yale Law Journal. 101 (6): 1193–1284. doi:10.2307/796923. JSTOR 796923. Archived from the original on October 19, 2008.
- "Duncan v. Louisiana (Mr. Justice Black, joined by Mr. Justice Douglas, concurring)". Cornell Law School – Legal Information Institute. May 20, 1968. Retrieved April 26, 2009.
- Levy, Leonard (1970). Fourteenth Amendment and the Bill of Rights: The Incorporation Theory (American Constitutional and Legal History Series). Da Capo Press. ISBN 978-0-306-70029-3.
- 677 F.2d 957 (1982)
- "Minneapolis & St. Louis R. Co. v. Bombolis (1916)". Justia. May 22, 1916. Retrieved August 1, 2010.
- "The Constitution of the United States of America: Analysis, and Interpretation – 1992 Edition --> Amendments to the Constitution --> Seventh Amendment – Civil Trials". U.S. Government Printing Office. U.S. Government Printing Office. 1992. p. 1464. Retrieved July 4, 2013.
- Amy Howe (February 20, 2019). "Opinion analysis: Eighth Amendment's ban on excessive fines applies to the states". SCOTUSblog. Retrieved February 20, 2019.
- Goldstone 2011, pp. 20, 23–24.
- "Strauder v. West Virginia, 100 U.S. 303 (1880) at page 306-307". Justia US Supreme Court Center. March 1, 1880. Retrieved April 3, 2020.
- Failinger, Marie (2009). "Equal protection of the laws". In Schultz, David Andrew (ed.). The Encyclopedia of American Law. Infobase. pp. 152–53. ISBN 9781438109916.
- Primus, Richard (May 2004). "Bolling Alone". Columbia Law Review. SSRN 464847.
- Bolling v. Sharpe, 347 U.S. 497 (1954)
- Yick Wo v. Hopkins, 118 U.S. 356 (1886).
- "Annotation 18 – Fourteenth Amendment: Section 1 – Rights Guaranteed: Equal Protection of the Laws: Scope and application state action". FindLaw for Legal Professionals – Law & Legal Information by FindLaw, a Thomson Reuters business. Retrieved November 23, 2013.
- Plyler v. Doe, 457 U.S. 202, 210–16 (1982).
- Congressional Globe, 39th Congress, 1st Session, 1033 (1866), page 2766
- Wong Wing v. United States, 163 U.S. 228 (1896).
- Wong Wing, 163 U.S. at 242–243 (Justice Field, concurring in part and dissenting in part).
- Johnson, John W. (January 1, 2001). Historic U.S. Court Cases: An Encyclopedia. Routledge. pp. 446–47. ISBN 978-0-415-93755-9. Retrieved June 13, 2013.
- Vile, John R., ed. (2003). "Corporations". Encyclopedia of Constitutional Amendments, Proposed Amendments, and Amending Issues: 1789 – 2002. ABC-CLIO. p. 116.
- Logan, Rayford Whittingham (1965). The betrayal of the Negro, from Rutherford B. Hayes to Woodrow Wilson. New York: Collier Books. p. 100.
- Strauder v. West Virginia, 100 U.S. 303 (1880).
- Plessy v. Ferguson, 163 U.S. 537 (1896).
- Abrams, Eve (February 12, 2009). "Plessy/Ferguson plaque dedicated". WWNO (University New Orleans Public Radio). Retrieved April 17, 2009.
- Berea College v. Kentucky, 211 U.S. 45 (1908).
- Holmes, Oliver Wendell, Jr. "274 U.S. 200: Buck v. Bell". Cornell University Law School Legal Information Institute. Archived from the original on May 30, 2013. Retrieved June 12, 2013.
- Brown v. Board of Education, 347 U.S. 483 (1954).
- Patterson, James (2002). Brown v. Board of Education: A Civil Rights Milestone and Its Troubled Legacy (Pivotal Moments in American History). Oxford University Press. ISBN 978-0-19-515632-4.
- "Forced Busing and White Flight". Time. September 25, 1978. Retrieved June 17, 2009.
- Parents Involved in Community Schools v. Seattle School District No. 1, 551 U.S. 701 (2007).
- Greenhouse, Linda (June 29, 2007). "Justices Limit the Use of Race in School Plans for Integration". The New York Times. Retrieved June 30, 2013.
- "Plyler v. Doe". The Oyez Project at IIT Chicago-Kent College of Law. The Oyez Project at IIT Chicago-Kent College of Law. Retrieved November 23, 2013.
- Hernandez v. Texas, 347 U.S. 475 (1954).
- United States v. Virginia, 518 U.S. 515 (1996).
- Levy v. Louisiana, 391 U.S. 68 (1968).
- Gerstmann, Evan (1999). The Constitutional Underclass: Gays, Lesbians, and the Failure of Class-Based Equal Protection. University Of Chicago Press. ISBN 978-0-226-28860-4.
- Regents of the University of California v. Bakke, 438 U.S. 265 (1978).
- Daniel E. Brannen; Richard Hanes (2001). Regents of the University of California v. Bakke 1978. Supreme Court Drama: Cases that Changed America. – via HighBeam Research (subscription required). Archived from the original on February 6, 2016. Retrieved June 27, 2013.
- Gratz v. Bollinger, 539 U.S. 244 (2003).
- Grutter v. Bollinger, 539 U.S. 306 (2003).
- Alger, Jonathan (October 11, 2003). "Gratz/Grutter and Beyond: the Diversity Leadership Challenge". University of Michigan. Archived from the original on August 13, 2011. Retrieved June 30, 2013.
- Eckes, Susan B. (January 1, 2004). "Race-Conscious Admissions Programs: Where Do Universities Go From Gratz and Grutter?". Journal of Law and Education. Archived from the original on February 6, 2016. Retrieved June 27, 2013.
- Fisher v. University of Texas, No. 11-345, 570 U.S. ___ (2013).
- Howe, Amy (June 24, 2013). "Finally! The Fisher decision in Plain English". SCOTUSblog. Retrieved June 30, 2013.
- Schuette v. Coalition to Defend Affirmative Action, No. 12-682, 572 U.S. ___ (2014).
- Denniston, Lyle (April 22, 2014). "Opinion analysis: Affirmative action — up to the voters". SCOTUSblog. Retrieved April 22, 2014.
- Reed v. Reed, 404 U.S. 71 (1971).
- Reed v. Reed 1971. Supreme Court Drama: Cases that Changed America. – via HighBeam Research (subscription required). January 1, 2001. Archived from the original on February 6, 2016. Retrieved June 12, 2013.
- Craig v. Boren, 429 U.S. 190 (1976).
- Karst, Kenneth L. (January 1, 2000). Craig v. Boren, 429 U.S. 190 (1976). Encyclopedia of the American Constitution. – via HighBeam Research (subscription required). Archived from the original on February 6, 2016. Retrieved June 16, 2013.
- Wesberry v. Sanders, 376 U.S. 1 (1964).
- Reynolds v. Sims, 377 U.S. 533 (1964).
- Epstein, Lee; Walker, Thomas G. (2007). Constitutional Law for a Changing America: Rights, Liberties, and Justice (6th ed.). Washington, D.C.: CQ Press. p. 775. ISBN 978-0-87187-613-3.
Wesberry and Reynolds made it clear that the Constitution demanded population-based representational units for the U.S. House of Representatives and both houses of state legislatures.
- Shaw v. Reno, 509 U.S. 630 (1993).
- Aleinikoff, T. Alexander; Samuel Issacharoff (1993). "Race and Redistricting: Drawing Constitutional Lines after Shaw v. Reno". Michigan Law Review. 92 (3): 588–651. doi:10.2307/1289796. JSTOR 1289796.
- Bush v. Gore, 531 U.S. 98 (2000).
- "Bush v. Gore". Encyclopædia Britannica. Retrieved June 12, 2013.
- League of United Latin American Citizens v. Perry, 548 U.S. 399 (2006).
- Daniels, Gilda R. (March 22, 2012). "Fred Gray: life, legacy, lessons". Faulkner Law Review. Archived from the original on February 6, 2016. Retrieved June 12, 2013.
- Congressional Record: Proceedings and Debates of the 88th Congress, Second Session, Volume 110, Part 5, March 19, 1964 to April 6, 1964 (pages 5655 to 7044), here page 5943. United States Congress. Archived from the original on April 14, 2020. Retrieved April 14, 2020.
- "United States v. Harris, 106 U.S. 629 (1883)". US Supreme Court Center. Retrieved April 14, 2020.
- "United States v. Cruikshank, 92 U.S. 542 (1875)". US Supreme Court Center. Retrieved April 14, 2020.
- Dunn, Christopher (April 28, 2009). "Column: Applying the Constitution to Private Actors (New York Law Journal)". New York Civil Liberties Union (NYCLU) - American Civil Liberties Union of New York State. Archived from the original on February 29, 2020. Retrieved November 23, 2013.
- Shelley v. Kraemer, 334 U.S. 1 (1948).
- Ex Parte Virginia, 100 U.S. 339 (1880).
- Jackson v. Metropolitan Edison Co, 419 U.S. 345 (1974).
- Burton v. Wilmington Parking Authority, 365 U.S. 715 (1961).
- Flagg Bros., Inc. v. Brooks, 436 U.S. 149 (1978).
- Bonfield, Arthur Earl (1960). "The Right to Vote and Judicial Enforcement of Section Two of the Fourteenth Amendment". Cornell Law Review. 46 (1).
- ""An Act for the Apportionment of Representatives to Congress among the States according to the ninth Census", Forty-Second Congress, Sess. ii, Ch. xi, section 6. February 2, 1872".
- "2 U.S. Code § 6 - Reduction of representation". LII / Legal Information Institute.
- Friedman, Walter (January 1, 2006). Fourteenth Amendment. Encyclopedia of African-American Culture and History. – via HighBeam Research (subscription required). Archived from the original on July 14, 2014. Retrieved June 12, 2013.
- "Casetext". casetext.com.
- Chin, Gabriel J. (2004). "Reconstruction, Felon Disenfranchisement, and the Right to Vote: Did the Fifteenth Amendment Repeal Section 2 of the Fourteenth?". Georgetown Law Journal. 92: 259.
Why this if it was not in the power of the legislature to deny the right of suffrage to some male inhabitants? And if suffrage was necessarily one of the absolute rights of citizenship, why confine the operation of the limitation to male inhabitants? Women and children are, as we have seen, "persons." They are counted in the enumeration upon which the apportionment is to be made, but if they were necessarily voters because of their citizenship unless clearly excluded, why inflict the penalty for the exclusion of males alone? Clearly, no such form of words would have been selected to express the idea here indicated if suffrage was the absolute right of all citizens.
- Richardson v. Ramirez, 418 U.S. 24 (1974).
- Hunter v. Underwood, 471 U.S. 222 (1985).
- Foner 1988, p. 255.
- Foner 1988, pp. 255–256.
- "Sections 3 and 4: Disqualification and Public Debt". Caselaw.lp.findlaw.com. June 5, 1933. Retrieved August 1, 2010.
- "Pieces of History: General Robert E. Lee's Parole and Citizenship". Prologue Magazine. 37 (1). 2005.
- Goodman, Bonnie K. (2006). "History Buzz: October 16, 2006: This Week in History". History News Network. Archived from the original on October 19, 2007. Retrieved June 18, 2009.
- "Chapter 157: The Oath As Related To Qualifications", Cannon's Precedents of the U.S. House of Representatives, 6, January 1, 1936
- "Annotation 37 – Fourteenth Amendment Sections 3 and 4 Disqualification and Public Debt". FindLaw. Retrieved October 17, 2013.
- "Perry v. United States 294 U.S. 330 (1935) at 354". Findlaw.com. Archived from the original on January 23, 2013. Retrieved August 1, 2010.
- Liptak, Adam (July 24, 2011). "The 14th Amendment, the Debt Ceiling and a Way Out". The New York Times. Retrieved July 30, 2011.
In recent weeks, law professors have been trying to puzzle out the meaning and relevance of the provision. Some have joined Mr. Clinton in saying it allows Mr. Obama to ignore the debt ceiling. Others say it applies only to Congress and only to outright default on existing debts. Still others say the President may do what he wants in an emergency, with or without the authority of the 14th Amendment.
- Balkin, Jack M. "3 ways Obama could bypass Congress". CNN. Retrieved October 16, 2013.
- "Our National Debt 'Shall Not Be Questioned,' the Constitution Says". The Atlantic. May 4, 2011.
- Sahadi, Jeanne. "Is the debt ceiling unconstitutional?". CNN Money. Retrieved January 2, 2013.
- Rosen, Jeffrey (July 29, 2011). "How Would the Supreme Court Rule on Obama Raising the Debt Ceiling Himself?". The New Republic. Retrieved July 29, 2011.
- Chemerinsky, Erwin (July 29, 2011). "The Constitution, Obama and raising the debt ceiling". Los Angeles Times. Retrieved July 30, 2011.
- "Slaughterhouse Cases, 83 U.S. 36 (1872)". US Supreme Court Center. Retrieved April 14, 2020.
- Engel, Steven A. (October 1, 1999). "The McCulloch theory of the Fourteenth Amendment: City of Boerne v. Flores and the original understanding of section 5". Yale Law Journal. – via HighBeam Research (subscription required). Archived from the original on December 18, 2006. Retrieved June 12, 2013.
- Kovalchick, Anthony (February 15, 2007). "Judicial Usurpation of Legislative Power: Why Congress Must Reassert its Power to Determine What is Appropriate Legislation to Enforce the Fourteenth Amendment". Chapman Law Review. 10 (1). Retrieved July 19, 2013.
- "FindLaw: U.S. Constitution: Fourteenth Amendment, p. 40". Caselaw.lp.findlaw.com. Retrieved August 1, 2010.
- Katzenbach v. Morgan, 384 U.S. 641 (1966).
- Eisenberg, Theodore (January 1, 2000). Katzenbach v. Morgan 384 U.S. 641 (1966). Encyclopedia of the American Constitution. – via HighBeam Research (subscription required). Archived from the original on September 24, 2015. Retrieved June 12, 2013.
- City of Boerne v. Flores, 521 U.S. 507 (1997).
- Flores, 521 U.S., at 507.
- Foner, Eric (1988). Reconstruction: America's Unfinished Revolution, 1863–1877. HarperCollins. ISBN 9780062035868.CS1 maint: ref=harv (link) Preview.
- Goldstone, Lawrence (2011). Inherently Unequal: The Betrayal of Equal Rights by the Supreme Court, 1865–1903. Walker & Company. ISBN 9780802717924.CS1 maint: ref=harv (link) Preview.
- Graber, Mark A. (November 2012). "Subtraction by addition?: The Thirteenth and Fourteenth Amendments". Columbia Law Review. 112 (7): 1501–1549. JSTOR 41708157. Archived from the original on November 17, 2015.CS1 maint: ref=harv (link) Pdf.
- Soifer, Aviam (November 2012). "Federal protection, paternalism, and the virtually forgotten prohibition of voluntary peonage". Columbia Law Review. 112 (7): 1607–1639. JSTOR 41708160. Archived from the original on November 17, 2015.CS1 maint: ref=harv (link) Pdf.
- Bogen, David S. (April 30, 2003). Privileges and Immunities: A Reference Guide to the United States Constitution. Greenwood Publishing Group. ISBN 9780313313479. Retrieved March 19, 2013.
- Garber, Mark A. (2011). "Foreword: Plus or minus one: the Thirteenth and Fourteenth Amendments". Maryland Law Review. 71 (1): 12–20.CS1 maint: ref=harv (link) Pdf.
- See also: Symposium: the Maryland Constitutional Law Schmooze special issue of the Maryland Law Review.
- Halbrook, Stephen P. (1998). Freedmen, the 14th Amendment, and the Right to Bear Arms, 1866–1876. Greenwood Publishing Group. ISBN 9780275963316. Retrieved March 29, 2013. at Questia
- tenBroek, Jacobus (June 1951). "Thirteenth Amendment to the Constitution of the United States: Consummation to Abolition and Key to the Fourteenth Amendment". California Law Review. 39 (2): 171–203. doi:10.2307/3478033. JSTOR 3478033.CS1 maint: ref=harv (link) Pdf.
- McConnell, Michael W. (May 1995). "Originalism and the desegregation decisions". Virginia Law Review. 81 (4): 947–1140. doi:10.2307/1073539. JSTOR 1073539.CS1 maint: ref=harv (link)
- Response to McConnell: Klarman, Michael J. (October 1995). "Response: Brown, originalism, and constitutional theory: a response to Professor Mcconnell". Virginia Law Review. 81 (7): 1881–1936. doi:10.2307/1073643. JSTOR 1073643.CS1 maint: ref=harv (link)
|Wikimedia Commons has media related to Fourteenth Amendment to the United States Constitution.|
|Wikisource has original text related to this article:|
- "Amendments to the Constitution of the United States" (PDF). GPO Access. Archived from the original (PDF) on September 18, 2005. Retrieved September 11, 2005. (PDF, providing text of amendment and dates of ratification)
- CRS Annotated Constitution: Fourteenth Amendment
- Fourteenth Amendment and related resources at the Library of Congress |
Calibration : Up-tests and Down-tests
Instruments are calibrated to measure process variables over a fixed range of scale. An instrument is assumed to be linear if the physical quantity (the process variable) and the instrument's resulting output readings have a linear relationship.

The relation between input and output may be linear, square-root, angular, or defined by a custom algorithm. In every case, the instrument has to display its readings in terms of the process-variable units.
The manufacturer calibrates the instrument by comparing its output against a standard input. Having obtained such an instrument, marked and calibrated by the manufacturer, the user has to recalibrate it periodically to check whether it is still working within the prescribed limits.

In order to calibrate an instrument, we need a standard input that is known roughly ten times more accurately than the instrument under calibration can measure. The standard input is varied within the range of measurement of the instrument to be calibrated.
Based on the standard input and the values obtained from the instrument, one can calibrate the instrument.
Purpose of instrument calibration
Calibration refers to the act of evaluating and adjusting the precision and accuracy of measurement equipment.
Instrument calibration is intended to eliminate or reduce bias in an instrument's readings over its entire measurement range.
- Precision is the degree to which repeated measurements under unchanged conditions show the same result
- Accuracy is the degree of closeness of measurements of a quantity to its actual true value.
For this purpose, reference standards with known values for selected points covering the range of interest are measured with the instrument in question.
Then a functional relationship is established between the values of the standards and the corresponding measurements. There are two basic situations:
- Instruments which require correction for bias: The instrument reads in the same units as the reference standards. The purpose of the calibration is to identify and eliminate any bias in the instrument relative to the defined unit of measurement.
- Instruments whose measurements act as surrogates for other measurements: The instrument reads in different units than the reference standards.
When do instruments need to be calibrated?
- Indicated by manufacturer
- Every instrument needs to be calibrated periodically to make sure it functions properly and safely. Manufacturers indicate how often the instrument needs to be calibrated.
- Before major critical measurements
- Before any measurement that requires highly accurate data, send the instrument out for calibration and keep it unused until the test.
- After major critical measurements
- Sending the instrument for calibration after the test helps the user decide whether the data obtained were reliable. Also, when an instrument is used for a long time, its condition will change.
- After an event
- The event here refers to anything that happens to the instrument, for example, something striking it, or any kind of accident that might affect the instrument's accuracy. A safety check is also recommended.
- When observations appear questionable
- When you suspect that the data are inaccurate due to instrument error, send the instrument out for calibration.
- Per requirements
- Some experiments require calibration certificates, and plant procedures may mandate calibration at set intervals.
Basic steps for correcting the instrument for bias
The calibration method is the same for both situations stated above and requires the following basic steps:
- Selection of reference standards with known values to cover the range of interest.
- Measurements on the reference standards with the instrument to be calibrated.
- Establishment of a functional relationship between the measured and known values of the reference standards (usually a least-squares fit to the data), called a calibration curve.
- Correction of all measurements by the inverse of the calibration curve.
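As an illustration of the last two steps, here is a minimal sketch in Python; the reference values, the instrument readings, and the assumption of a straight-line response are all hypothetical:

```python
import numpy as np

# Hypothetical reference standards (known true values) and the readings
# obtained from the instrument under calibration at those points.
true_values = np.array([0.0, 25.0, 50.0, 75.0, 100.0])  # engineering units
readings    = np.array([1.2, 26.0, 51.1, 76.3, 101.4])  # instrument output

# Step 3: least-squares fit of the calibration curve,
# assuming a linear instrument: reading = slope * true_value + offset.
slope, offset = np.polyfit(true_values, readings, 1)

# Step 4: correct raw measurements by the inverse of the calibration curve.
def corrected(reading):
    return (reading - offset) / slope

print(corrected(51.1))  # roughly 50.0 once the bias is removed
```

If the instrument is known to respond nonlinearly (a square-root flow element, for example), the same procedure applies with a different functional form for the fit.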
Some people mix up a field check with calibration. In a field check, two instruments are compared against each other; if they give the same reading, that does not mean they are calibrated, because both instruments may be wrong.

Let's use a thermometer as an example: if a thermometer always reads 0.25 degrees high, this error cannot be eliminated by taking averages, because the error is constant (a bias).

The easiest way to determine whether the thermometer is accurate, and to fix it, is to send it to a calibration laboratory. Another way to reveal constant errors is to use one or more similar thermometers.

One thermometer is used and then replaced by another. If readings are divided among two or more thermometers, inconsistencies among them will eventually be revealed.
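To see numerically why averaging cannot remove a constant error, consider this minimal sketch; the true temperature, the 0.25-degree bias, and the noise level are hypothetical values chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
true_temp = 20.0

# A thermometer that always reads 0.25 degrees high, plus random noise.
readings = true_temp + 0.25 + rng.normal(0.0, 0.1, size=1000)

# Averaging suppresses the random noise, but the constant bias remains.
print(readings.mean() - true_temp)  # close to 0.25, not to 0.0
```

Only comparison against a more accurate reference (or against other thermometers) can expose this kind of systematic error.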
Also Read : Smart Transmitter Calibration
Calibration : Up-tests and Down-tests
It is not uncommon for calibration tables to show multiple calibration points going up as well as going down, for the purpose of documenting hysteresis and deadband errors.
Note the following example, showing a transmitter with a maximum hysteresis of 0.313 % (the offending data points are shown in bold-faced type):
Note again how error is expressed as either a positive or a negative quantity, depending on whether the instrument's measured response is above or below what it should be under each condition. The values of error appearing in this calibration table, expressed in percent of span, are all calculated by the following formula:

Error (% of span) = (Measured value − Ideal value) / Span × 100 %
In the course of performing such a directional calibration test, it is important not to overshoot any of the test points. If you do happen to overshoot a test point in setting up one of the input conditions for the instrument, simply “back up” the test stimulus and re-approach the test point from the same direction as before.
Unless each test point’s value is approached from the proper direction, the data cannot be used to determine hysteresis/deadband error.
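As a rough illustration of how the percent-of-span errors and the hysteresis figure can be computed from up-test and down-test data, here is a minimal sketch; the range, test points, and readings below are hypothetical and do not reproduce the table referenced above:

```python
# Hypothetical transmitter ranged 0 to 300 units, so span = 300.
lrv, urv = 0.0, 300.0
span = urv - lrv

test_points   = [0.0, 75.0, 150.0, 225.0, 300.0]   # ideal values
up_readings   = [0.1, 75.3, 150.6, 225.4, 299.9]   # approached from below
down_readings = [0.5, 76.2, 151.2, 225.9, 299.9]   # approached from above

# Error in percent of span: positive above the ideal value, negative below.
def error_pct(measured, ideal):
    return (measured - ideal) / span * 100.0

# Hysteresis/deadband: worst-case spread between up- and down-test errors
# at the same test point.
hysteresis = max(abs(error_pct(u, i) - error_pct(d, i))
                 for u, d, i in zip(up_readings, down_readings, test_points))
print(f"maximum hysteresis: {hysteresis:.3f} % of span")
```

Because each reading is taken while approaching the test point from one consistent direction, the up/down difference isolates hysteresis from simple zero or span errors.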
Reference : chem.libretexts.org
Credits : by Tony R. Kuphaldt – under Creative Commons Attribution 4.0 License |
Astronomers from the University of Bonn in Germany have discovered a vast structure of satellite galaxies and clusters of stars surrounding our Galaxy, stretching out across a million light years. The work challenges the existence of dark matter, part of the standard model for the evolution of the universe.
PhD student and lead author Marcel Pawlowski reports the team's findings in a paper in the journal Monthly Notices of the Royal Astronomical Society.
The Milky Way, the galaxy we live in, consists of around three hundred thousand million stars, as well as large amounts of gas and dust, arranged in a flat disk with arms that wind out from a central bar. The diameter of the main part of the Milky Way is about 100,000 light years, meaning that a beam of light takes 100,000 years to travel across it. A number of smaller satellite galaxies and spherical clusters of stars (so-called globular clusters) orbit at various distances from the main Galaxy.
Conventional models for the origin and evolution of the universe (cosmology) are based on the presence of 'dark matter', invisible material thought to make up about 23% of the content of the cosmos that has never been detected directly. In this model, the Milky Way is predicted to have far more satellite galaxies than are actually seen.
In their effort to understand exactly what surrounds our Galaxy, the scientists used a range of sources, from twentieth-century photographic plates to images from the robotic telescope of the Sloan Digital Sky Survey. Using all these data they assembled a picture that includes the bright 'classical' satellite galaxies, the more recently detected fainter satellites, and the younger globular clusters.
"Once we had completed our analysis, a new picture of our cosmic neighbourhood emerged," says Pawlowski. The astronomers found that all the different objects are distributed in a plane at right angles to the galactic disk. The newly-discovered structure is huge, extending from as close as 33,000 light years to as far away as one million light years from the centre of the Galaxy.
Team member Pavel Kroupa, professor for astronomy at the University of Bonn, adds "We were baffled by how well the distributions of the different types of objects agreed with each other." As the different companions move around the Milky Way, they lose material, stars and sometimes gas, which forms long streams along their paths. The new results show that this lost material is aligned with the plane of galaxies and clusters too. "This illustrates that the objects are not only situated within this plane right now, but that they move within it," says Pawlowski. "The structure is stable."
The various dark matter models struggle to explain this arrangement. "In the standard theories, the satellite galaxies would have formed as individual objects before being captured by the Milky Way," explains Kroupa. "As they would have come from many directions, it is next to impossible for them to end up distributed in such a thin plane structure."
Postdoctoral researcher and team member Jan Pflamm-Altenburg suggests an alternative explanation. "The satellite galaxies and clusters must have formed together in one major event, a collision of two galaxies." Such collisions are relatively common and lead to large chunks of galaxies being torn out due to gravitational and tidal forces acting on the stars, gas and dust they contain, forming tails that are the birthplaces of new objects like star clusters and dwarf galaxies.
Pawlowski adds, "We think that the Milky Way collided with another galaxy in the distant past. The other galaxy lost part of its material, material that then formed our Galaxy's satellite galaxies and the younger globular clusters and the bulge at the galactic centre. The companions we see today are the debris of this 11-billion-year-old collision."
Kroupa concludes by highlighting the wider significance of the new work. "Our model appears to rule out the presence of dark matter in the universe, threatening a central pillar of current cosmological theory. We see this as the beginning of a paradigm shift, one that will ultimately lead us to a new understanding of the universe we inhabit."
Disenfranchisement after the Reconstruction Era
Disenfranchisement after the Reconstruction Era in the United States of America was based on a series of laws, new constitutions, and practices in the South that were deliberately used to prevent black citizens from registering to vote and voting. These measures were enacted by the former Confederate states at the turn of the 20th century, and by Oklahoma when it gained statehood in 1907, although not by the former border slave states. Their actions defied the intent of the Fifteenth Amendment to the United States Constitution, ratified in 1870, which was intended to protect the suffrage of freedmen after the American Civil War.
Considerable violence and fraud had accompanied the later elections during Reconstruction, as the white Democrats used paramilitary groups, beginning in the 1870s, to suppress black Republican voting and turn Republicans out of office. After regaining control of the state legislatures, Democrats were alarmed by a late 19th-century alliance between Republicans and Populists that cost them some elections. In North Carolina's Wilmington Insurrection of 1898 (long called a race riot by whites), white Democrats launched a coup d'etat and overthrew the city government, the only coup of its kind in United States history. They overturned a duly elected biracial government headed by a white mayor, and widely attacked the black community, destroying lives and property. As a result, many blacks left the city permanently.
Ultimately, white Democrats added to previous efforts and achieved widespread disenfranchisement by law: from 1890 to 1908, Southern state legislatures passed new constitutions, constitutional amendments, and laws that made voter registration and voting more difficult, especially when administered by white staff in a discriminatory way. They succeeded in disenfranchising most of the black citizens, as well as many poor whites in the South, and voter rolls dropped dramatically in each state. The Republican Party was nearly eliminated in the region for decades, and the Democrats established one-party control throughout the southern states.
From the mid to the late 20th century, a wholesale party realignment took place, with white conservatives joining the Republican Party. Until then Southern Democrats controlled the politics of southern states and established white supremacy. As Congressional apportionment of seats was based on the total population, the Southern white Democrats, the Southern bloc, had tremendous legislative power for decades. Section 2 of the Fourteenth Amendment could have been used to reduce Congressional representation for states that denied suffrage on racial grounds, but this provision was not enforced. Opponents of the Southern bloc could not overcome their political power.
In 1912, the Republican Party was split when Theodore Roosevelt ran as a third-party candidate against the regular Republican nominee, William Howard Taft. In the South by this time, the Republican Party had been hollowed out by the disenfranchisement of African Americans, who were largely excluded from voting. Democrat Woodrow Wilson was elected as the first southern President since 1856. He was re-elected in 1916 in a much closer presidential contest. During his first term, Wilson satisfied the request of the Southerners in his cabinet and instituted overt racial segregation throughout federal government workplaces, as well as racial discrimination in hiring. During World War I, American military forces were segregated, with black soldiers poorly trained and equipped.
Disenfranchisement had far-reaching effects in Congress, where the Democratic Solid South enjoyed "about 25 extra seats in Congress for each decade between 1903 and 1953."[nb 1] Also, the Democratic dominance in the South meant that southern Senators and Representatives became entrenched in Congress. They favored seniority privileges in Congress, which became the standard by 1920, and Southerners controlled chairmanships of important committees, as well as leadership of the national Democratic Party. During the Great Depression, legislation establishing numerous national social programs was passed without the representation of African Americans, leading to gaps in program coverage and discrimination against them in operations. In addition, because black Southerners were not listed on local voter rolls, they were automatically excluded from serving in local courts. Juries were all white across the South.
Racial segregation in the U.S. military was ended in 1948 by Executive Order of President Harry S. Truman after World War II. Legal racial segregation in the South did not end until after passage of the Civil Rights Act of 1964. Political disenfranchisement did not end until after passage of the Voting Rights Act of 1965, which authorized the federal government to monitor voter registration practices and elections where populations were historically underrepresented, and to enforce constitutional voting rights. The challenge to maintain voting rights has continued into the 21st century, as shown by numerous court cases in 2016 alone.
The American Civil War ended in 1865, marking the start of the Reconstruction era in the eleven former Confederate states. Congress passed the Reconstruction Acts, starting in 1867, establishing military districts to oversee the affairs of these states pending reconstruction.
During the Reconstruction era, blacks constituted absolute majorities of the populations in Mississippi and South Carolina, were equal to the white population in Louisiana, and represented more than 40 percent of the population in four other former Confederate states. Southern whites, fearing black domination, resisted the freedmen's exercise of political power. In 1867, black men voted for the first time. By the 1868 presidential election, Texas, Mississippi, and Virginia had still not been re-admitted to the Union. General Ulysses S. Grant was elected as president thanks in part to 700,000 black voters. In February 1870, the Fifteenth Amendment was ratified; it was designed to protect blacks' right to vote from infringement by the states.
White supremacist paramilitary organizations, allied with Southern Democrats, used intimidation, violence and even committed assassinations in order to repress blacks and prevent them from exercising their civil and political rights in elections from 1868 until the mid-1870s. The insurgent Ku Klux Klan (KKK) was formed in 1865 in Tennessee (as a backlash to defeat in the war) and it quickly became a powerful secret vigilante group, with chapters across the South. The Klan initiated a campaign of intimidation directed against blacks and sympathetic whites. Their violence included vandalism and destruction of property, physical attacks and assassinations, and lynchings. Teachers who came from the North to teach freedmen were sometimes attacked or intimidated as well.
The toll of Klan murders and attacks led Congress to pass laws to end the violence. In 1870, the strongly Republican Congress passed the Enforcement Acts, imposing penalties for conspiracy to deny black suffrage. The Acts empowered the President to deploy the armed forces to suppress organizations that deprived people of rights guaranteed by the Fourteenth Amendment. Organizations whose members appeared in arms were considered in rebellion against the United States. The President could suspend habeas corpus under those circumstances. President Grant used these provisions in parts of the Carolinas in late 1871. United States marshals supervised state voter registrations and elections and could summon the help of military or naval forces if needed. These measures led to the demise of the first Klan by the early 1870s.
New paramilitary groups quickly sprang up, as tens of thousands of veterans belonged to gun clubs and similar groups. A second wave of violence began, resulting in over 1,000 deaths, usually black or Republican. The Supreme Court ruled in 1876 in United States v. Cruikshank, arising from trials related to the Colfax Massacre, that protections of the Fourteenth Amendment, which the Enforcement Acts were intended to support, did not apply to the actions of individuals, but only to the actions of state governments. They recommended that persons seek relief from state courts, which had not been supportive of freedmen's rights.
The paramilitary organizations that arose in the mid to late 1870s were part of continuing insurgency in the South after the Civil War, as armed veterans in the South resisted social changes, and worked to prevent black Americans and other Republicans from voting and running for office. Such groups included the White League, formed in Louisiana in 1874 from white militias, with chapters forming in other Southern states; the Red Shirts, formed in 1875 in Mississippi but also active in North Carolina and South Carolina; and other "White Liners," such as rifle clubs and the Knights of the White Camellia. Compared to the Klan, they were open societies, better organized and devoted to the political goal of regaining control of the state legislatures and suppressing Republicans, including most blacks. They often solicited newspaper coverage for publicity to increase their threat. The scale of operations was such that in 1876, North Carolina had 20,000 men in rifle clubs. Made up of well-armed Confederate veterans, a class that covered most adult men who could have fought in the war, the paramilitary groups worked for political aims: to turn Republicans out of office, disrupt their organizing, and use force to intimidate and terrorize freedmen to keep them away from the polls. Such groups have been described as “the military arm of the Democratic Party.”
They were instrumental in many Southern states in driving blacks away from the polls and ensuring a white Democratic takeover of legislatures and governorships in most Southern states in the 1870s, most notoriously during the controversial 1876 elections. As a result of a national Compromise of 1877 arising from the 1876 presidential election, the federal government withdrew its military forces from the South, formally ending the Reconstruction era. By that time, Southern Democrats had effectively regained control in Louisiana, South Carolina, and Florida – they identified as the Redeemers. In the South, the process of white Democrats regaining control of state governments has been called “the Redemption”. African-American historians sometimes call the Compromise of 1877 “The Great Betrayal.”
Following continuing violence around elections as insurgents worked to suppress black voting, the Democratic-dominated Southern states passed legislation to create barriers to voter registrations by blacks and poor whites, starting with the Georgia poll tax in 1877. Other measures followed, particularly near the end of the century, after a Republican-Populist alliance caused the Democrats to temporarily lose some Congressional seats and control of some gubernatorial positions.
To secure their power, the Democrats worked to exclude blacks (and most Republicans) from politics. The results could be seen across the South. After Reconstruction, Tennessee initially had the most “consistently competitive political system in the South”. A bitter election battle in 1888, marked by unmatched corruption and violence, resulted in white Democrats taking over the state legislature. To consolidate their power, they worked to suppress the black vote and sharply reduced it through changes in voter registration, requiring poll taxes, as well as changing election procedures to make voting more complex.
In 1890 Mississippi adopted a new constitution, which contained provisions for voter registration which required voters pay poll taxes and pass a literacy test. The literacy test was subjectively applied by white administrators, and the two provisions effectively disenfranchised most blacks and many poor whites. The constitutional provisions survived a Supreme Court challenge in Williams v. Mississippi (1898). Other southern states quickly adopted new constitutions and what they called the "Mississippi plan." By 1908, all states of the former Confederacy had passed new constitutions or suffrage amendments, sometimes bypassing general elections to achieve this. Legislators created a variety of barriers, including longer residency requirements, rule variations, literacy and understanding tests, which were subjectively applied against minorities, or were particularly hard for the poor to fulfill. Such constitutional provisions were unsuccessfully challenged at the Supreme Court in Giles v. Harris (1903). In practice, these provisions, including white primaries, created a maze that blocked most blacks and many poor whites from voting in Southern states until after passage of federal civil rights legislation in the mid-1960s. Voter registration and turnout dropped sharply across the South, as most blacks and many poor whites were excluded from the political system.
The disenfranchisement of a large proportion of voters attracted the attention of Congress, and as early as 1900 some members proposed stripping the South of seats, related to the number of people who were barred from voting. Apportionment of seats was still based on total population (with the assumption of the usual number of voting males in relation to the residents); as a result white Southerners commanded a number of seats far out of proportion to the voters they represented. In the end, Congress did not act on this issue, as the Southern bloc of Democrats had sufficient power to reject or stall such action. For decades, white Southern Democrats exercised Congressional representation derived from a full count of the population, but they disfranchised several million black and white citizens. Southern white Democrats comprised the “Solid South”, a powerful voting bloc in Congress until the mid-20th century. Their representatives, re-elected repeatedly by one-party states, exercised the power of seniority, controlling numerous chairmanships of important committees in both houses. Their power allowed them to have control over rules, budgets and important patronage projects, among other issues, as well as to defeat bills to make lynching a federal crime.
New state constitutions, 1890 to 1908
Despite white Southerners’ complaints about Reconstruction, several Southern states kept most provisions of their Reconstruction constitutions for more than two decades, until late in the 19th century. In some states, the number of blacks elected to local offices reached a peak in the 1880s although Reconstruction had ended. They had an influence at the local level, where much of government took place, although they did not win many statewide or national seats. Subsequently, state legislatures passed restrictive laws or constitutions that made voter registration and election rules more complicated. As literacy tests and other restrictions could be applied subjectively, these changes sharply limited the vote by most blacks and, often, many poor whites; voter rolls dropped across the South into the new century.
Florida approved a new constitution in 1885 that included provisions for poll taxes as a prerequisite for voter registration and voting. From 1890 to 1908, ten of the eleven Southern states rewrote their constitutions. All included provisions that effectively restricted voter registration and suffrage, including requirements for poll taxes, increased residency, and subjective literacy tests.
With educational improvements, blacks had markedly increased their rate of literacy. By 1891, their illiteracy had declined to 58 percent, whilst the rate of white illiteracy in the South at that time was 31 percent. Some states used grandfather clauses to exempt white voters from literacy tests altogether. Other states required otherwise eligible black voters to meet literacy and knowledge requirements to the satisfaction of white registrars, who applied subjective judgment and, in the process, rejected most black voters. By 1900, the majority of blacks were literate, but even many of the best-educated of these men continued to “fail” the literacy tests administered by white registrars.
The historian J. Morgan Kousser noted, “Within the Democratic party, the chief impetus for restriction came from the black belt members,” whom he identified as “always socioeconomically privileged.” In addition to wanting to affirm white supremacy, the planter and business elite were concerned about voting by lower-class and uneducated whites. Kousser found, “They disfranchised these whites as willingly as they deprived blacks of the vote.” Perman noted the goals of disenfranchisement resulted from several factors. Competition between white elites and white lower classes, for example, and a desire to prevent alliances between lower-class white and black Americans, as had been seen in Populist-Republican alliances, led white Democratic legislators to restrict voter rolls.
With the passage of new constitutions, Southern states adopted provisions that caused disenfranchisement of large portions of their populations by skirting US constitutional protections of the Fourteenth and Fifteenth Amendments. While their voter registration requirements applied to all citizens, in practice they disenfranchised most blacks. As in Alabama, they also “would remove [from voter registration rolls] the less educated, less organized, more impoverished whites as well – and that would ensure one-party Democratic rules through most of the 20th century in the South.”
The new provisions of the state constitutions almost entirely eliminated black voting. Although nothing approaching precise data exists, it is estimated that in the late 1930s less than one percent of blacks in the Deep South and around five percent in the Rim South were registered to vote, and that the proportion actually voting even in general elections, which were of no consequence due to complete Democratic dominance, was much smaller still. In addition, the Democratic legislatures passed Jim Crow laws to assert white supremacy, establish racial segregation in public facilities, and treat blacks as second-class citizens. The landmark court decision in Plessy v. Ferguson (1896) held that "separate but equal" facilities, as on railroad cars, were constitutional. The new constitutions withstood numerous Supreme Court challenges. In cases where a particular restriction was overruled by the Supreme Court in the early 20th century, states quickly devised new methods of excluding most blacks from voting, such as the white primary. Democratic Party primaries became the only competitive contests in southern states.
For the national Democratic Party, the alignment after Reconstruction resulted in a powerful Southern region that was useful for congressional clout. Nevertheless, prior to President Franklin D. Roosevelt, the "Solid South" inhibited the national party from fulfilling the center-left initiatives desired since the days of William Jennings Bryan. Woodrow Wilson, one of two Democrats elected to the presidency between Abraham Lincoln and Franklin D. Roosevelt, was the first Southerner elected after 1856.[nb 2] He benefited by the disenfranchisement of blacks and the crippling of the Republican Party in the South. Soon after taking office, Wilson directed the segregation of federal facilities in the District of Columbia, which had been integrated during Reconstruction.
Southern black populations in 1900
|Population of African Americans in Southern states, 1900|
|State|No. of African Americans|% of Population|Year of law or constitution|
|Texas|622,041|20.40|1901 / 1923 laws|
With a population evenly divided between races, in 1896 there were 130,334 black voters on the Louisiana registration rolls and about the same number of whites. Louisiana State legislators passed a new constitution in 1898 that included requirements for applicants to pass a literacy test in English or his native language in order to register to vote, or to certify owning $300 worth of property, known as a property requirement. The literacy test was administered by the voting registrar; in practice, they were white Democrats. Provisions in the constitution also included a grandfather clause, which provided a loophole to enable illiterate whites to register to vote. It said that “Any citizen who was a voter on January 1, 1867, or his son or grandson, or any person naturalized prior to January 1, 1898, if applying for registration before September 1, 1898, might vote, notwithstanding illiteracy or poverty.” Separate registration lists were kept for whites and blacks, making it easy for white registrars to discriminate against blacks in literacy tests. The constitution of 1898 also required a person to satisfy a longer residency requirement in the state, county, parish, and precinct before voting than did the constitution of 1879. This worked against the lower classes, who were more likely to move frequently for work, especially in agricultural areas where there were many migrant workers and sharecroppers.
The effect of these changes on the population of black voters in Louisiana was devastating; by 1900 black voters were reduced from 130,334 to 5,320 on the rolls. By 1910, only 730 blacks were registered, less than 0.5% of eligible black men. “In 27 of the state’s sixty parishes, not a single black voter was registered any longer; in nine more parishes, only one black voter was.”
In 1894, a coalition of Republicans and the Populist Party won control of the North Carolina state legislature (and with it, the ability to elect two US Senators) and succeeded in electing several US Representatives through electoral fusion. The fusion coalition made impressive gains in the 1896 election, when its legislative majority expanded. Republican Daniel Lindsay Russell won the 1896 gubernatorial race, becoming the first Republican governor of the state since the end of Reconstruction in 1877. The elections also resulted in more than 1,000 elected or appointed black officials, including George Henry White, who entered Congress in 1897 as a member of the House of Representatives.
At the 1898 election, the Democrats ran on White Supremacy and disenfranchisement in a bitter race-baiting campaign led by Furnifold McLendel Simmons and Josephus Daniels, editor and publisher of The Raleigh News & Observer. The Republican/Populist coalition disintegrated, and the Democrats won the North Carolina 1898 election and the following 1900 election. Simmons was elected as the state's US senator in 1900, holding office until 1931 through multiple re-elections by the state legislature and by popular vote after 1920.
The Democrats used their power in the state legislature to disenfranchise minorities, primarily blacks, and ensure that Democratic Party and white power would not be threatened again. They passed laws restricting voter registration. In 1900 the Democrats adopted a constitutional suffrage amendment which lengthened the residence period required before registration, and enacted both an educational qualification (to be assessed by a registrar, which meant that it could be subjectively applied) and prepayment of a poll tax. A grandfather clause exempted from the poll tax those entitled to vote on January 1, 1867. The legislature also passed Jim Crow laws establishing racial segregation in public facilities and transportation.
The effect in North Carolina was the complete elimination of black voters from voter rolls by 1904. Contemporary accounts estimated that seventy-five thousand black male citizens lost the vote. In 1900 blacks numbered 630,207 citizens, about 33% of the state's total population. The growth of the thriving black middle class was slowed. In North Carolina and other Southern states, there were also the insidious effects of invisibility: "[W]ithin a decade of disenfranchisement (sic), the white supremacy campaign had erased the image of the black middle class from the minds of white North Carolinians."
In Virginia, Democrats sought disenfranchisement in the late 19th century after a coalition of white and black Republicans with populist Democrats had come to power; the coalition had been formalized as the Readjuster Party. The Readjuster Party held control from 1881 to 1883, electing a governor and controlling the legislature, which also elected a US Senator from the state. As in North Carolina, state Democrats were able to divide Readjuster supporters through appeals to White Supremacy. After regaining power, Democrats changed state laws and the constitution in 1902 to disenfranchise blacks. They ratified the new constitution in the legislature and did not submit it to popular vote. Voting in Virginia fell by nearly half as a result of the disenfranchisement of blacks. The eighty-year stretch of white Democratic control ended only in the late 1960s after passage and enforcement of the federal Voting Rights Act of 1965 and the collapse of the Byrd Organization machine.
Border States: failed disenfranchisement
The five border states of Delaware, Maryland, West Virginia, Kentucky, and Missouri had legacies from the Civil War similar to those of the Confederate slave states. The Border States, all formerly slave states, also established laws requiring racial segregation between the 1880s and 1900s, and most of them attempted to disenfranchise blacks during the 1900s; however, disenfranchisement was never achieved there to any significant degree.
The causes of the failure to disenfranchise blacks and poor whites in the Border States, compared with the success of such efforts for well over half a century in the former Confederate states, were complicated. During the 1900s Maryland was vigorously divided between supporters and opponents of disenfranchisement, but it had a large and increasingly educated black community concentrated in Baltimore, a city that had many free blacks before the Civil War who had established both economic and political power. The state legislature passed a poll tax in 1904 but, after vigorous opposition, repealed it in 1911. Despite support among conservative whites on the Eastern Shore, referenda on bills to disenfranchise blacks failed three times, in 1905, 1908, and 1910, with the last vote being the most decisive. Maryland's substantial Italian immigrant population, something absent from the former Confederacy, meant that these immigrants were also exposed to the possibility of disenfranchisement; much more critically, it allowed for much stronger resistance among the white population.
In Kentucky, Lexington’s city government had passed a poll tax in 1901, but it was declared invalid in state circuit courts. Six years later, a new state legislative effort to disenfranchise blacks failed because of the strong organization of the Republican Party in pro-Union regions of the state.
Methods of disenfranchisement
Proof of payment of a poll tax was a prerequisite to voter registration in Florida, Alabama, Tennessee, Arkansas, Louisiana, Mississippi, Georgia (1877), North and South Carolina, Virginia (until 1882 and again from 1902 with its new constitution), Texas (1902) and in some northern and western states. The Texas poll tax “required otherwise eligible voters to pay between $1.50 and $1.75 to register to vote – a lot of money at the time, and a big barrier to the working classes and poor.” Georgia created a cumulative poll tax requirement in 1877: men of any race 21 to 60 years of age had to pay a sum of money for every year from the time they had turned 21, or from the time that the law took effect.
The poll tax requirements applied to whites as well as blacks, and also adversely affected poor citizens. Many states required payment of the tax at a time separate from the election, and then required voters to bring receipts with them to the polls. If they could not locate such receipts, they could not vote. In addition, many states surrounded registration and voting with other complex record-keeping requirements. These were particularly difficult for sharecropper and tenant farmers to comply with, as they moved frequently.
The poll tax was sometimes used alone or together with a literacy qualification. In a kind of grandfather clause, North Carolina in 1900 exempted from the poll tax those men entitled to vote as of January 1, 1867. This excluded all blacks in the State, who did not have suffrage before that date.
Educational and character requirements
Alabama, Arkansas, Mississippi, Tennessee, and South Carolina created an educational requirement, with review by a local registrar of a voter’s qualifications. In 1898 Georgia rejected such a device.
Alabama delegates at first hesitated, out of concern that illiterate whites would lose their votes. After the legislature stated that the new constitution would not disenfranchise any white voters and that it would be submitted to the people for ratification, Alabama passed an educational requirement. It was ratified at the polls in November 1901. Its distinctive feature was the "good character clause" (also known as the "grandfather clause"). An appointment board in each county could register "all voters under the present [previous] law" who were veterans or the lawful descendants of such, and "all who are of good character and understand the duties and obligations of citizenship." This gave the board discretion to approve voters on a case-by-case basis. In practice, the boards enfranchised many whites but rejected both poor whites and blacks. Most of the latter had been slaves and so had been unable to perform military service.
South Carolina, Louisiana (1889), and later, Virginia incorporated an educational requirement in their new constitutions. In 1902 Virginia adopted a constitution with the "understanding" clause as a literacy test to use until 1904. In addition, the application for registration had to be in the applicant's handwriting and written in the presence of the registrar. Thus, someone who could not write could not vote.
Eight Box Law
By 1882, the Democrats were firmly in power in South Carolina. Republican voters were mostly limited to the majority-black counties of Beaufort and Georgetown. Because the state had a large black-majority population (nearly sixty percent in 1890), white Democrats had narrow margins in many counties and feared a possible resurgence of black Republican voters at the polls. To remove the black threat, the General Assembly created an indirect literacy test, called the “Eight Box Law.”
The law required a separate box for ballots for each office; a voter had to insert the ballot into the corresponding box or it would not count. The ballots could not have party symbols on them. They had to be of a correct size and type of paper. Many ballots were arbitrarily rejected because they slightly deviated from the requirements. Ballots could also randomly be rejected if there were more ballots in a box than registered voters.
The multiple-ballot box law was challenged in court. On May 8, 1895, Judge Goff of the United States Circuit Court declared the provision unconstitutional and enjoined the state from taking further action under it. But in June 1895, the US Circuit Court of Appeals reversed Judge Goff and dissolved the injunction, leaving the way open for a convention.
The constitutional convention met on September 10 and adjourned on December 4, 1895. By the new constitution, South Carolina adopted the Mississippi Plan until January 1, 1898. Any male citizen could be registered who was able to read a section of the constitution or to satisfy the election officer that he understood it when read to him. Those thus registered were to remain voters for life. Under the new constitution and application of literacy practices, black voters were dropped in great number from the registration rolls: by 1896, in a state where according to the 1890 census blacks numbered 728,934 and comprised nearly sixty percent of the total population, only 5,500 black voters had succeeded in registering.
States also used grandfather clauses to enable illiterate whites who could not pass a literacy test to vote. It allowed a man to vote if his grandfather or father had voted prior to January 1, 1867; at that time, most African Americans had been slaves, while free people of color, even if property owners, and freedmen were ineligible to vote until 1870.[nb 3]
Justice Benjamin Curtis' dissent in Dred Scott v. Sandford, 60 U.S. 393 (1857) had noted that free people of color in numerous states had the right to vote at the time of the Articles of Confederation (as part of the argument about whether people of African descent could be citizens of the new United States):
Of this there can be no doubt. At the time of the ratification of the Articles of Confederation, all free native-born inhabitants of the States of New Hampshire, Massachusetts, New York, New Jersey, and North Carolina, though descended from African slaves, were not only citizens of those States, but such of them as had the other necessary qualifications possessed the franchise of electors, on equal terms with other citizens.
North Carolina’s constitutional amendment of 1900 exempted from the poll tax those men entitled to vote as of January 1, 1867, another type of use of a grandfather clause. Virginia also used a type of grandfather clause.
In Guinn v. United States (1915), the Supreme Court invalidated the Oklahoma Constitution's "old soldier" and "grandfather clause" exemptions from literacy tests. In practice, these had disenfranchised blacks, as had occurred in numerous Southern states. This decision affected similar provisions in the constitutions of Alabama, Georgia, Louisiana, North Carolina, and Virginia election rules. Oklahoma and other states quickly reacted by passing laws that created other rules for voter registration that worked against blacks and minorities. Guinn was the first of many cases in which the NAACP filed a brief challenging discriminatory electoral rules.
In Lane v. Wilson (1939), the Supreme Court invalidated an Oklahoma provision designed to disenfranchise blacks. It had replaced the clause struck down in Guinn. This clause permanently disenfranchised everyone qualified to vote who had not registered to vote in a twelve-day window between April 30 and May 11, 1916, except for those who had voted in 1914. While designed to be more resistant to challenges based on discrimination, as the law did not specifically mention race, the Court struck it down partially because it relied on the 1914 election, when voters had been discriminated against under the rule invalidated in Guinn.
About the turn of the 20th century, white members of the Democratic Party in some Southern states devised rules that excluded blacks and other minorities from participating in party primaries. These became common for all elections. As the Democratic Party was dominant and the only competitive voting was in the primaries, barring minority voters from the primaries was another means of excluding them from politics. Court challenges overturned the white primary system, but many states then passed laws that authorized political parties to set up the rules for their own systems, such as the white primary. Texas, for instance, passed such a state law in 1923. It was used to bar Mexican Americans as well as black Americans from voting; it survived challenges to the US Supreme Court until the 1940s.
The North had heard the South’s version of Reconstruction abuses, such as financial corruption, high taxes, and incompetent freedmen. Industry wanted to invest in the South and not worry about political problems. In addition, reconciliation between white veterans of the North and South reached a peak in the early 20th century. As historian David Blight demonstrated in Race and Reunion: The Civil War in American Memory, reconciliation meant the pushing aside by whites of the major issues of race and suffrage. Southern whites were effective for many years at having their version of history accepted, especially as it was confirmed in ensuing decades by influential historians of the Dunning School at Columbia University and other institutions.
Disfranchisement of black Americans in the South was covered by national newspapers and magazines as new laws and constitutions were created, and many Northerners were outraged and alarmed. The Lodge Bill or Federal Elections Bill or Lodge Force Bill of 1890 was a bill drafted by Representative Henry Cabot Lodge (R) of Massachusetts, and sponsored in the Senate by George Frisbie Hoar. It would have authorized federal electors to supervise elections under certain conditions. Due to a Senate filibuster, as well as trade-off of support with Democrats by western Silver Republicans, the bill failed to pass.
In 1900 the Committee on the Census of Congress considered proposals for adding more seats to the House of Representatives because of increased population. Proposals for the total number of seats ranged from 357 to 386. Edgar D. Crumpacker (R-IN) filed an independent report urging that the Southern states be stripped of seats due to the large numbers of voters they had disfranchised. He noted this was provided for in Section 2 of the Fourteenth Amendment, which provided for stripping representation from states that reduced suffrage due to race. The Committee and House failed to agree on this proposal. Supporters of black suffrage worked to secure Congressional investigation of disfranchisement, but the concerted opposition of the Southern Democratic bloc was aroused, and the efforts failed.
From 1896 to 1900, the House of Representatives with a Republican majority had acted in more than thirty cases to set aside election results from Southern states where the House Elections Committee had concluded that “black voters had been excluded due to fraud, violence, or intimidation.” Nevertheless, in the early 1900s, it began to back off from its enforcement of the Fifteenth Amendment and suggested that state and federal courts should exercise oversight of this issue. The Southern bloc of Democrats exercised increasing power in the House. They had no interest in protecting suffrage for blacks.
In 1904 Congress administered a coup de grâce to efforts to investigate disfranchisement in its decision in the 1904 South Carolina election challenge of Dantzler v. Lever. The House Committee on Elections upheld Lever’s victory. It suggested that citizens of South Carolina who believed their rights were denied should take their cases to the state courts, and ultimately, the US Supreme Court. Blacks had no recourse through the Southern state courts, which would not uphold their rights. Because they were disfranchised, blacks could not serve on juries, and whites were clearly aligned against them on this and other racial issues.
Despite the Lever decision and domination of Congress by Democrats, some Northern Congressmen continued to raise the issue of black disfranchisement and resulting malapportionment. For instance, on December 6, 1920, Representative George H. Tinkham from Massachusetts offered a resolution for the Committee of Census to investigate alleged disfranchisement of blacks. His intention was to enforce the provisions of the Fourteenth and Fifteenth amendments.
In addition, he believed there should be reapportionment in the House related to the voting population of southern states, rather than the general population as enumerated in the census. Such reapportionment was authorized by the Constitution and would reflect reality, so that the South would not get credit for people and voters it had disfranchised. Tinkham detailed how outsized the South's representation was relative to the total number of voters in each state, compared to other states with the same number of representatives:[nb 4]
- States with four representatives:
- Florida, with a total vote of 31,613.
- Colorado, with a total vote of 208,855.
- Maine, with a total vote of 121,836.
- States with six representatives:
- Nebraska, with a total vote of 216,014.
- West Virginia, with a total vote of 211,643.
- South Carolina, given seven representatives because of its total population (which was majority black), counted only 25,433 voters.
- States with eight representatives:
- Louisiana, with a total vote of 44,794.
- Kansas, with a total vote of 425,641.
- States with ten representatives:
- Alabama, with a total vote of 62,345.
- Minnesota, with a total vote of 299,127.
- Iowa, with a total vote of 316,377.
- California, with eleven representatives, had a total vote of 644,790.
- States with twelve representatives:
- Georgia, with a total vote of 59,196.
- New Jersey, with a total vote of 338,461.
- Indiana, with thirteen representatives, had a total vote of 565,216.
Tinkham was defeated by the Democratic Southern bloc, and also by fears amongst the northern business elites of increasing the voting power of the Northern urban working classes, who, both northern business and Southern planter elites believed, would vote for large-scale income redistribution at the federal level.
After Herbert Hoover was elected in a landslide in 1928, gaining support from five southern states, Tinkham renewed his effort in the spring of 1929 to persuade Congress to penalize southern states under the Fourteenth and Fifteenth amendments for their racial discrimination. He suggested reduction of their congressional delegations in proportion to the populations they had disenfranchised. He was defeated again by the Solid South. Its representatives had rallied in outrage that the First Lady had invited Jessie De Priest for tea to the White House with other congressional wives. She was the wife of Oscar Stanton De Priest from Chicago, the first African-American elected to Congress in the 20th century.
Segregation of the federal service began under President Theodore Roosevelt and continued under President Taft. President Wilson escalated the process, ignoring complaints by the NAACP. The NAACP lobbied for the commissioning of African Americans as officers in World War I. It was arranged for W.E.B. Du Bois to receive an Army commission, but he failed his physical. In 1915 the NAACP organized public education and protests in cities across the nation against D.W. Griffith's Birth of a Nation, a film that glamorized the Ku Klux Klan. Boston and a few other cities refused to allow the film to open.
In 1912, Woodrow Wilson became the first Southerner to win a presidential election since 1856. That year, the extra Southern electoral votes were not a decisive factor: Wilson won the election in a landslide, not only winning every Southern and border state electoral vote but also a large majority of electoral votes outside the South. However, Southern electoral votes did prove decisive in securing Wilson’s re-election in the much closer 1916 presidential election.
Legislative and cultural effects
20th-century Supreme Court decisions
Black Americans and their allies worked hard to regain their ability to exercise the constitutional rights of citizens. Booker T. Washington, widely known for his accommodationist approach as the leader of the Tuskegee Institute, called on northern backers to help finance legal challenges to disenfranchisement and segregation. He raised substantial funds and also arranged for representation on some cases, such as the two for Giles in Alabama. He challenged the state's grandfather clause and a citizenship test required for new voters, which was administered in a discriminatory way against blacks.
In its ruling in Giles v. Harris (1903), the United States Supreme Court, in an opinion by Justice Oliver Wendell Holmes, Jr., effectively upheld such southern voter registration provisions in dealing with a challenge to the Alabama constitution. Its decision said the provisions were not targeted at blacks and thus did not deprive them of rights. This has been characterized as the "most momentous ignored decision" in constitutional history.
Trying to deal with the grounds of the Court's ruling, Giles mounted another challenge. In Giles v. Teasley (1904), the U.S. Supreme Court upheld Alabama's disenfranchising constitution. That same year Congress refused to overturn a disputed election, essentially sending plaintiffs back to the state courts. Even when black plaintiffs gained rulings in their favor from the Supreme Court, states quickly devised alternative ways to exclude them from the political process. It was not until later in the 20th century that legal challenges to disenfranchisement began to meet more success in the courts.
With the founding of the National Association for the Advancement of Colored People (NAACP) in 1909, the interracial group based in New York began to provide financial and strategic support to lawsuits on voting issues. What became the NAACP Legal Defense Fund organized and mounted numerous cases in repeated court and legal challenges to the many barriers of segregation, including disenfranchisement provisions of the states. The NAACP often represented plaintiffs directly, or helped raise funds to support legal challenges. The NAACP also worked at public education, lobbying of Congress, demonstrations, and encouragement of theater and academic writing as other means to reach the public. NAACP chapters were organized in cities across the country, and membership increased rapidly in the South. The American Civil Liberties Union also represented plaintiffs in some disenfranchisement cases.
In Smith v. Allwright (1944), the Supreme Court reviewed a Texas case and ruled against the white primary; the state legislature had authorized the Democratic Party to devise its own rules of operation. The 1944 court ruling was that this was unconstitutional, as the state had failed to protect the constitutional rights of its citizens.
Following the 1944 ruling, civil rights organizations in major cities moved quickly to register black voters. For instance, in Georgia, in 1940 only 20,000 blacks had managed to register to vote. After the Supreme Court decision, the All-Citizens Registration Committee (ACRC) of Atlanta started organizing. By 1947 they and others had succeeded in getting 125,000 black Americans registered, 18.8% of those of eligible age.
Each legal victory was followed by white-dominated legislatures' renewed efforts to control black voting through different exclusionary schemes. In the 1940s, Alabama passed a law to give white registrars more discretion in testing applicants for comprehension and literacy. In 1958 Georgia passed a new voter registration act that required those who were illiterate to satisfy "understanding tests" by correctly answering 20 of 30 questions related to citizenship posed by the voting registrar. Blacks had made substantial advances in education, but the individual white registrars were the sole persons to determine whether individual prospective voters answered correctly. In practice, registrars disqualified most black voters, whether they were educated or not. In Terrell County, for instance, which was 64% black in population, after passage of the act, only 48 black Americans were able to register to vote in 1958.
Civil Rights Movement
The NAACP's steady progress with individual cases was thwarted by southern Democrats' continuing resistance and passage of new statutory barriers to blacks' exercising the franchise. Through the 1950s and 1960s, private citizens enlarged the effort by becoming activists throughout the South, led by many black churches and their leaders, and joined by both young and older activists from northern states. Nonviolent confrontation and demonstrations were mounted in numerous Southern cities, often provoking violent reaction by white bystanders and authorities. The moral crusade of the Civil Rights Movement gained national media coverage, attention across the country, and a growing national demand for change.
Widespread violence against the Freedom Riders in 1961, covered by television and newspapers, and the murders of activists in Alabama in 1963 gained support for the activists' cause at the national level. President John F. Kennedy introduced civil rights legislation to Congress in 1963 before he was assassinated.
President Lyndon B. Johnson took up the charge. In January 1964, Johnson met with civil rights leaders. On January 8, during his first State of the Union address, Johnson asked Congress to "let this session of Congress be known as the session which did more for civil rights than the last hundred sessions combined." On January 23, 1964, the 24th Amendment to the U.S. Constitution, prohibiting the use of poll taxes in national elections, was ratified with the approval of South Dakota, the 38th state to do so.
On June 21, 1964, civil rights workers Michael Schwerner, Andrew Goodman, and James Chaney disappeared in Neshoba County, Mississippi. The three were volunteers aiding in the registration of black voters as part of the Mississippi Freedom Summer Project. Forty-four days later the Federal Bureau of Investigation recovered their bodies from an earthen dam where they were buried. The Neshoba County deputy sheriff Cecil Price and 16 others, all Ku Klux Klan members, were indicted for the murders; seven were convicted. The investigation also uncovered the bodies of several black men, whose deaths had never been reported or prosecuted by white law enforcement officials.
When the Civil Rights Bill came before the full Senate for debate on March 30, 1964, the "Southern Bloc" of 18 southern Democratic Senators and one Republican Senator, led by Richard Russell (D-GA), launched a filibuster to prevent its passage.
After 57 working days of filibuster, and several compromises, the Senate had enough votes (71 to 29) to end the debate and the filibuster. It was the first time that Southern senators had failed to win with such tactics against civil rights bills. On July 2, President Johnson signed into law the Civil Rights Act of 1964. The Act prohibited segregation in public places and barred unequal application of voter registration requirements. It did not explicitly ban literacy tests, which had been used to disqualify blacks and poor white voters.
As the United States Department of Justice has stated:
By 1965 concerted efforts to break the grip of state disenfranchisement (sic) had been under way for some time, but had achieved only modest success overall and in some areas had proved almost entirely ineffectual. The murder of voting-rights activists in Philadelphia, Mississippi, gained national attention, along with numerous other acts of violence and terrorism. Finally, the unprovoked attack on March 7, 1965, by state troopers on peaceful marchers crossing the Edmund Pettus Bridge in Selma, Alabama, en route to the state capitol in Montgomery, persuaded the President and Congress to overcome Southern legislators' resistance to effective voting rights legislation. President Johnson issued a call for a strong voting rights law and hearings began soon thereafter on the bill that would become the Voting Rights Act.
Passed in 1965, this law prohibited the use of literacy tests as a requirement to register to vote. It provided for recourse for local voters to federal oversight and intervention, plus federal monitoring of areas that historically had low voter turnouts to ensure that new measures were not taken against minority voters. It provided for federal enforcement of voting rights. African Americans began to enter the formal political process, most in the South for the first time in their lives. They have since won numerous seats and offices at local, state and federal levels.
- African-American history
- African-American Civil Rights Movement (1865–95)
- African-American Civil Rights Movement (1896–1954)
- List of 19th-century African-American civil rights activists
- Timeline of the African-American Civil Rights Movement
- Felony disenfranchisement
- Jim Crow laws
- Nadir of American race relations
- Judicial aspects of race in the United States
- Voting rights in the United States
- "Disenfranchise vs. disfranchise". Grammarist. Retrieved 2014-09-19.
- Klotter, James C.; Kentucky: Portrait in Paradox, 1900-1950; pp. 196-197 ISBN 0916968243
- Valelly, Richard M.; The Two Reconstructions: The Struggle for Black Enfranchisement University of Chicago Press, 2009, pp. 134-139 ISBN 9780226845302
- Valelly; The Two Reconstructions; pp. 146-147
- "Another Open Letter to Woodrow Wilson W.E.B. DuBois, September, 1913". Teachingamericanhistory.org. Retrieved 2013-02-28.
- "Chronology of Emancipation during the Civil War". University of Maryland: Department of History.
- Gabriel J. Chin & Randy Wagner, "The Tyranny of the Minority: Jim Crow and the Counter-Majoritarian Difficulty," 43 Harvard Civil Rights-Civil Liberties Law Review 65 (2008)
- Andrews, E. Benjamin (1912). History of the United States. New York: Charles Scribner's Sons.
- George C. Rable, But There Was No Peace: The Role of Violence in the Politics of Reconstruction, Athens: University of Georgia Press, 1984, p. 132
- "Key Events in the Presidency of Rutherford B. Hayes". American President: A Reference Resource. Miller Center. Retrieved 8 January 2013.
- J. Morgan Kousser, The Shaping of Southern Politics: Suffrage Restriction and the Establishment of the One-Party South, 1880–1910, p.104
- Richard H. Pildes, 'Democracy, Anti-Democracy, and the Canon', Constitutional Commentary, Vol. 17, 2000, accessed 10 Mar 2008
- Richard H. Pildes, 'Democracy, Anti-Democracy, and the Canon', Constitutional Commentary, Vol. 17, 2000, p. 10, accessed 10 Mar 2008
- 'Committee at Odds on Reapportionment', The New York Times, 20 Dec 1900, accessed 10 Mar 2008
- W.E.B. DuBois, Black Reconstruction in America, 1868–1880, New York: Oxford University Press, 1935; reprint, New York: The Free Press, 1998
- Michael Perman, Struggle for Mastery: Disfranchisement in the South, 1888–1908, Chapel Hill: University of North Carolina Press, 2001, Introduction
- 1878–1895: Disenfranchisement (sic), Southern Education Foundation, accessed 16 Mar 2008
- J. Morgan Kousser, The Shaping of Southern Politics: Suffrage Restriction and the Establishment of the One-Party South, New Haven: Yale University Press, 1974
- Glenn Feldman, The Disfranchisement Myth: Poor Whites and Suffrage Restriction in Alabama, Athens: University of Georgia Press, 2004, pp. 135–136
- Mickey, Robert; Paths Out of Dixie: The Democratization of Authoritarian Enclaves in America’s Deep South, 1944-1972, p. 87 ISBN 1400838789
- Historical Census Browser, 1900 Federal Census, University of Virginia, accessed 15 Mar 2008
- Julien C. Monnet, 'The Latest Phase of Negro Disenfranchisement', Harvard Law Review, Vol. 26, No. 1, Nov. 1912, p. 42, accessed 14 Apr 2008
- Richard H. Pildes, "Democracy, Anti-Democracy, and the Canon", 2000, p.12, accessed 10 Mar 2008
- North Carolina History Project, 'Fusion Politics'
- The North Carolina Collection, UNC Libraries, 'The North Carolina Election of 1898'
- Richard H. Pildes, 'Democracy, Anti-Democracy, and the Canon', 2000, pp.12 and 27 Accessed 10 Mar 2008
- Albert Shaw, The American Monthly Review of Reviews, Vol.XXII, Jul-Dec 1900, p.274
- Richard H. Pildes, 'Democracy, Anti-Democracy, and the Canon', Constitutional Commentary, Vol. 17, 2000, pp. 12-13
- Historical Census Browser, 1900 US Census, University of Virginia Archived August 23, 2007, at the Wayback Machine., accessed 15 Mar 2008
- Richard H. Pildes, 'Democracy, Anti-Democracy, and the Canon', 2000, p.12 and 27, Accessed 10 Mar 2008
- "Virginia's Constitutional Convention of 1901–1902". Virginia Historical Society. Archived from the original on 2006-10-02. Retrieved 2006-09-14.
- Dabney, Virginius (1971). Virginia, The New Dominion. University Press of Virginia. pp. 436–437. ISBN 0-8139-1015-3.
- Smith, C. Fraser; Here Lies Jim Crow: Civil Rights in Maryland; p. 66 ISBN 0801888077
- Shufelt, Gordon H.; 'Jim Crow among strangers: The growth of Baltimore's Little Italy and Maryland's disfranchisement campaigns'; Journal of American Ethnic History; vol. 19, issue 4 (Summer 2000), pp. 49-78
- "Historical Barriers to Voting", in Texas Politics, University of Texas, accessed 4 November 2012.
- 'Atlanta in the Civil Rights Movement', Atlanta Regional Council for Higher Education
- Rogers Jr., George C. and C. James Taylor (1994). A South Carolina Chronology 1497–1992. University of South Carolina Press. ISBN 0-87249-971-5.
- Holt, Thomas (1979). Black over White: Negro Political Leadership in South Carolina during Reconstruction. Urbana: University of Illinois Press.
- Curtis, Benjamin Robbins (Justice). "Dred Scott v. Sandford, Curtis dissent". Legal Information Institute at Cornell Law School. Archived from the original on 8 July 2012. Retrieved 16 April 2008.
- Richard M. Valelly, The Two Reconstructions: The Struggle for Black Enfranchisement, Chicago: University of Chicago Press, 2004, p.141
- Texas Politics: Historical Barriers to Voting, accessed 11 Apr 2008 Archived April 2, 2008, at the Wayback Machine.
- Keyssar, Alexander ; The Right to Vote: The Contested History of Democracy in the United States, Basic Books, 2000/2009, p. 86 ISBN 0465005020
- Wendy Hazard, 'Thomas Brackett Reed, Civil Rights, and the Fight for Fair Elections', Maine History, March 2004, Vol. 42, Issue 1, pp. 1–23
- Richard H. Pildes, 'Democracy, Anti-Democracy, and the Canon', Constitutional Commentary, Vol. 17, 2000, pp.19-20, Accessed 10 Mar 2008
- Richard H. Pildes, 'Democracy, Anti-Democracy, and the Canon', Constitutional Commentary, Vol. 17, 2000, pp. 20-21, accessed 10 Mar 2008
- "DEMANDS INQUIRY ON DISFRANCHISING; Representative Tinkham Aims to Enforce 14th and 15th Articles of Constitution. ASKS REAPPORTIONMENT House Resolution Will Point Out Disparity Between Southern Membership and Votes Cast". The New York Times. December 6, 1920. Retrieved September 4, 2012.
- Smith, J. Douglas; On Democracy's Doorstep: The Inside Story of How the Supreme Court Brought "One Person, One Vote" to the United States; pp. 4-18 ISBN 0809074249
- See Rodden, Jonathan A.; 'The Long Shadow of the Industrial Revolution: Political Geography and the Representation of the Left'
- Day, Davis S. (Winter 1980). "Herbert Hoover and Racial Politics: The De Priest Incident". Journal of Negro History. Association for the Study of African American Life and History, Inc. 65 (1): 6–17. JSTOR 3031544. doi:10.2307/3031544.
- Meier, August, and Elliott Rudwick. 'The Rise of Segregation in the Federal Bureaucracy, 1900–1930.' Phylon 28.2 (1967): 178-184, in JSTOR
- Richard H. Pildes, 'Democracy, Anti-Democracy, and the Canon', Constitutional Commentary, Vol. 17, 2000, p. 21, accessed 10 Mar 2008
- Richard H. Pildes, 'Democracy, Anti-Democracy, and the Canon', Constitutional Commentary, Vol. 17, 2000, p. 32, accessed 10 Mar 2008
- Chandler Davidson and Bernard Grofman, Quiet Revolution in the South: The Impact of the Voting Rights Act, Princeton: Princeton University Press, 1994, p.70
- Davidson and Grofman (1994), Quiet Revolution in the South, p. 71
- "Major Features of the Civil Rights Act of 1964". Congresslink.org. Retrieved 2010-06-06.
- "Civil Rights Act of 1964". Spartacus.schoolnet.co.uk. Retrieved 2010-06-06.
- "Civil Rights during the administration of Lyndon B. Johnson". LBJ Library and Museum. Retrieved 2007-02-25.
- "Introduction To Federal Voting Rights Laws". United States Department of Justice. Retrieved 2007-02-25.
- Despite the South's excessive representation relative to its voting population, the Great Migration resulted in Mississippi losing seats in Congress through reapportionment following the 1930 and 1950 Censuses; South Carolina and Alabama also lost Congressional seats after the 1930 Census, and Arkansas after the 1950 Census.
- Wilson began his political career as Governor of New Jersey in 1910 and remained Governor until he was elected President, but he grew up in a slaveholding family in Virginia.
- Free men of color could vote in North Carolina prior to 1831 if they met property qualifications, but they were barred from voting in 1835 there and elsewhere after fears raised by the Nat Turner slave rebellion of 1831.
- These figures are correct as of the 1900 Presidential election
- Grantham, Dewey W. 'Tennessee and Twentieth-Century American Politics,' Tennessee Historical Quarterly 54, no 3 (Fall 1995): 210+ online
- Perman, Michael. Struggle for Mastery: Disfranchisement in the South, 1888–1908 (2001).
- Rable, George C. 'The South and the Politics of Antilynching Legislation, 1920–1940.' Journal of Southern History 51.2 (1985): 201-220.
- Valelly, Richard M. The two reconstructions: The struggle for black enfranchisement (U of Chicago Press, 2009). |
Need a quick activity to do with your class?
Why not play Humanagrams?
- Students try to combine letters in different ways to make words.
- There is a class set of 31 letters on regular paper (8.5” x 11”): the 26 letters of the English alphabet, plus a second copy of each vowel.
- Practice collaboration and communication
- Practice creativity and tinkering
- Explore diversity and inclusion
- How many words can the class spell?
- What is the longest word that the class can spell?
- What is the shortest word?
- What is the highest point word that you can spell?
- What is the lowest point word that you can spell?
- If this was a metaphor for inclusion, do some letters get included more than others? Why? (Yes – vowels appear in almost every English word.)
- Are some letters not included as often? Why not? (There aren't as many words that use those letters.)
- How could we include these letters more? (Make them worth more points so that people want to use them. In other words, recognize and see the value in others.)
- Take home message – sometimes, we don’t include people, because we don’t see value in them / we look down on them. How can we get to know people, so we see that we have things in common, so we want to include them?
The back story: An example of the Creative Process
Question: Where did the idea for Humanagrams come from?
This is not a new idea. We just put an Educircles spin on this activity and connected the dots between different ideas, but teachers have been doing wordplay activities like this for ages.
I did this activity with my grade 8 class in November 2017. (Actually, I left this activity for my supply teacher to do, but I’ve done this activity in the past.)
I also shared a rough version of this activity with my colleagues. At the time, I think I might’ve called it Human Scrabble, but we’ve gone in a slightly different direction here. You know, trademarks and all that.
Question: Where does the name Humanagrams come from?
That’s a great question! I think I like the name, but I’m not sure.
Recently we shared another back-to-school icebreaker that we use called Human Bingo. I like to do that activity a lot at the start of the school year. When I do it with my students, I use it as an opportunity to teach small talk and do our first lesson on oral communication.
But I don’t love the name Human Bingo. And, as my daughter pointed out, it’s kinda creepy. It is, I guess.
So, in this activity, since we couldn’t use the name Human Scrabble, I thought about other word games.
I really like the game Bananagrams. My friend introduced it to me. It’s like Scrabble, but without the pressure of having to memorize a whole bunch of obscure two- and three-letter words. You play it on the table, and you build words in crossword fashion but you can go in any direction, so there’s a lot of flexibility. You’re not constrained to the Scrabble board!
What I really like about Bananagrams is how you get to play around with words. So for example, if you have a word, but you need a letter somewhere else, you can take the letter and slide it into a different spot. (But then you have to make sure that you spell a proper word with the leftover letters from the old word, or you’re a rotten banana!)
So, the words are constantly changing. It’s a great lesson to teach creativity because you’re playing and trying out different possibilities.
I was stuck for a name, and that’s when my friend suggested Humanagrams as a name for this activity which works because that’s what you get when you mash humans and anagrams together. (I imagine that’s how the name Bananagrams came to be as well… but, bananas are mushier.)
QUESTION: What’s with the points? How did you figure out the scoring?
Wikipedia has a fantastic article about letter frequency in different languages.
I guess people have been studying letter frequencies for code breaking and cryptanalysis for a while now (among other things).
I would’ve thought that the vowels – A, E, I, O, U – would have the highest frequency in English words, but it turns out that the letter T is used more often than the letter A. (And the letter U, well, it’s halfway down the list.)
So, I put the letter frequencies into a Google spreadsheet and started playing around with formulas, trying to figure out a way to turn the letter frequencies into points.
- At first, I thought I could just use the frequency percentages and figure out the opposite (how unlikely each letter was to show up). But that got me a lot of percentages all clustered together up near a hundred percent.
- Then, I looked at the highest frequency letter, which is the letter E – it apparently shows up about 13% of the time in English text.
- So then, I took 13% as a base and subtracted all the frequencies from it, and that gave me a nice range of percentages spread more evenly from zero all the way to 100%. (I wanted to find a way to get points ranging from 1 to 10, but not all at the same point level.)
- I then used that range of percentages to figure out points out of 10. That was nice, but I got a lot of decimals, so I chose to round the numbers.
- But that meant the letter E would’ve rounded down to zero, and no kid’s gonna want to be worth zero points. Spreadsheets have a ROUNDUP function, so all the decimal numbers rounded up to the nearest whole number, and those whole numbers became points. (A code sketch of this calculation follows below.)
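Here is a minimal sketch of the scoring calculation described above. The frequency numbers are illustrative values based on commonly cited English letter-frequency tables; the exact figures and spreadsheet formulas the author used are not given, so treat the details (including the max(1, ...) clamp) as assumptions about the intent.

```python
import math

# Approximate English letter frequencies in percent (illustrative values;
# not the exact numbers the author pulled from Wikipedia).
FREQ = {
    'E': 12.7, 'T': 9.1, 'A': 8.2, 'O': 7.5, 'I': 7.0, 'N': 6.7,
    'S': 6.3, 'H': 6.1, 'R': 6.0, 'D': 4.3, 'L': 4.0, 'C': 2.8,
    'U': 2.8, 'M': 2.4, 'W': 2.4, 'F': 2.2, 'G': 2.0, 'Y': 2.0,
    'P': 1.9, 'B': 1.5, 'V': 1.0, 'K': 0.77, 'J': 0.15, 'X': 0.15,
    'Q': 0.095, 'Z': 0.074,
}

def letter_points(freq, max_points=10):
    """Rarer letters score higher: subtract each frequency from the
    highest one (E's ~13%), rescale onto 0..max_points, then round up
    ROUNDUP-style. The max(1, ...) clamp guarantees no letter is worth
    zero points (an assumption to match the author's intent)."""
    base = max(freq.values())
    spread = {c: base - f for c, f in freq.items()}  # 0 for E, biggest for Z
    top = max(spread.values())
    return {c: max(1, math.ceil(spread[c] / top * max_points))
            for c in freq}

points = letter_points(FREQ)
print(points['E'], points['Z'])  # E -> 1, Z -> 10
```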
QUESTION: How did you figure out the layout for the Humanagrams cards?
Well, when I first did this assignment a year ago, I was thinking like Scrabble tiles, with the big letter in the middle and the point value in the bottom right.
But, since we’re coming up with our own Educircles version of a creative wordplay game, we wanted to change the cards up a little bit.
Having a huge letter in the middle was important so that kids could see the letter from any seat in the class. So, I played around with a bunch of fonts.
- First, I tried the Bangers font, which is the font that we use in most of our Educircles products, but it didn’t work. I like the comic book look of it, but the font is too dark and heavy, and the letters are too narrow, so it’s hard to see clearly from the back.
- So, then I tried the Calibri font which is the text font that we use in our slideshows. (I think at the time, we chose the Calibri font because it was a little bit narrower than the other fonts and that helped us squeeze in more words on the slideshow.)
- But on the card, we wanted to have a nice, visible, wide font that people could see from the back, so Calibri didn’t work for us either.
- Next, I played around with Arial and Comfortaa. I gravitated toward Comfortaa because it’s a nice large font, and the lower case letter a is the regular round a that we teach in kindergarten. I was thinking that eventually I’d like to have a version of this Humanagrams game for little kids with easier math (i.e., points worth 5, 10, 15). So, I went with Comfortaa.
- (Actually, later on, I discovered I really prefer Helvetica because it has a nice, familiar, large, bold look, but at that point, I had done too much work and I was running out of time, and I didn’t want to go back and switch all the fonts from Comfortaa to Helvetica. Meh.)
Since I didn’t want the point number in the bottom right corner (that’s too much like Scrabble), I wondered how it would look if the points were centered.
It didn’t look bad, but initially the letter was too close to the number, so I decided to move the letter up: it’s no longer centered vertically on the card, but it is still centered horizontally.
Then I wanted to figure out a way to draw attention to the points, because they’re a big part of the game as well. Some letters are worth more points than others, and that’s sort of a metaphor later on about diversity and inclusion and how we see value in people (and in less commonly used letters), so I decided to put a circle around the number to subtly highlight the points.
I thought a circle would look good because the title itself has a rounded rectangle around it. I didn’t think a square around the points would look good, because the harsh corners of the square wouldn’t really fit with the rounded corners of the rectangle on the card.
TEACHER TIP: Oh, by the way, all of this was designed using Google Slides, and I just saved it as a PDF. I love Google Slides. It is more than just for slideshows. You can change the page setup so that the slides are 8.5 x 11″ (regular paper), and all of a sudden your slideshow program becomes a graphics program. Google Slides is what I used to design all of my stuff.
I played around with the size of the circles. One of the really cool things about the Humanagram cards that you may not notice right away is that the size of the circle changes based on the size of the points.
- I don’t think people would notice it right away when you’re just looking at your own card.
- But once a few students have lined up together and the class is looking at them, trying to figure out another word they could spell…
- In my head, I pictured one of my old students getting really excited and noticing: “Wait a second! The circles are all different sizes!” Like it would be this great aha moment.
So, once I knew that I wanted the circles to match the point values, I had to play around to figure out the largest circle I could have, and then the smallest circle I could have. And then I used math to figure out the different circle sizes for all 10 point values.
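The "math" here is most likely simple linear interpolation between the smallest and largest circles. A sketch, with hypothetical diameters (the real card dimensions aren't given):

```python
def circle_diameter(points, min_d=0.5, max_d=1.2):
    """Interpolate a circle diameter (inches) for a point value from 1 to 10.
    min_d and max_d are hypothetical sizes for the 1-point and 10-point
    circles; pick whatever fits your card layout."""
    t = (points - 1) / (10 - 1)        # 0.0 for 1 point, 1.0 for 10 points
    return min_d + t * (max_d - min_d)

for p in range(1, 11):
    print(p, round(circle_diameter(p), 2))
```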
QUESTION: This looks like a lot of work!
It was, but I was really excited as I was doing it. I really like making things look pretty on the computer, so I had some tricks I’d figured out before to help me with this.
For example, one of the tricks that I use with Google slideshows is the master slide feature.
- It lets you change the layout and create your own layouts.
- This makes it easier to change things: if you don’t like something, you just change it on the master slide, and it will change all of the slides that use that layout.
So, I played around with a few different possible layouts.
And then, once I found the layout that I really liked – the black rounded rectangle on the outside to identify the tile, the large Comfortaa font for the main letter in the middle, and the idea of a number with points at the bottom middle…
Once I figured that out, I could turn that into a master slide layout to use with all of the different letters.
The thought of having to make the different circles on each letter slide was kind of daunting.
I didn’t want to have to figure out the points and the circle size for each different letter, and I knew that a lot of the letters had the same points, so I decided to make a different master slide layout for each point value. Then, later on, when I was working in the regular slides, I just had to choose the layout for the point value of the letter.
And I knew from previous experience that if I decided I wanted to change something about the way the card looked, I would just have to change the master slide layout instead of having to change multiple letters.
QUESTION: What’s next?
Well, I was watching this TED talk during some background research on our creativity lesson.
It was this guy, Guy Kawasaki, talking about creativity, innovation and business, and he made the point that it’ll never be perfect and if you’re innovating and doing something creative, just get the product out there. And then work on improving it. The idea of the Minimum Viable Product.
So, I have that up on a slide in my workspace to remind me that it doesn’t have to be perfect. Let’s just get something out there and then we can improve upon it later.
As I was working on this project, I had a couple of ideas.
When I was coming up with points from 0 to 10, I was thinking in my head that some of my students are not great at mental math. And the point of this game is really to play with the words in different ways and not get bogged down with the rote math skill. So then I wondered how I could make the math easier, and that would be using 5s and 10s.
But, 0, 5, 10 points is just mean because no one’s gonna want zero points, so I think the point value would have to be 5, 10, 15 or maybe 5, 10, 20. I’m not sure yet.
I was also thinking about one of my friends who teaches French. I thought about sharing this resource with her and I wondered if she would use this because French is a little bit different than English. I guess you would have to ignore the accents to make it simpler.
I then thought about a conversation I once had with my friend who I played Bananagrams with. We were talking about how it was really hard to use some letters in English, and then we were wondering whether it would be hard to use those same letters in French.
So, after thinking about that past conversation, I wondered whether the letter frequencies would be different in English than in French. And on that Wikipedia article, I noticed it also shows the letter frequencies for a bunch of different languages.
So, then I realized that I needed to change the point values for different languages, because otherwise a letter that you use a lot in French might be worth too many points. (The metaphor about inclusion is the idea that we need to find value in things that are different or less common to us.)
So, then I realized at the bottom of the cards, I had to identify whether it was an English version or French version or Spanish version. (And then I went into the master slide and added a little line explaining that it was the English version.)
I’m from Canada, and I know how much my friends who teach French complain about how hard it is to find French resources.
But, most of the teachers who download our Educircles resources are American. (It might be because we use Teachers Pay Teachers to host our resources, or it might just be because there are so many more teachers in America than in Canada.)
But, then I was thinking that in America, a lot of people speak Spanish. So point values in French wouldn’t really help them, but point values in Spanish might.
Hmmm, since French and Spanish are both Romance languages, I wonder if their point values would be relatively similar because the letter frequencies are relatively similar. Not sure.
QUESTION: Wow, you must be really creative to create something like this. I could never come up with something like this.
I know this question seems like a humble brag but stay with me here…
I think I’m pretty creative in some ways – I always thought one of my strengths was finding neat ways to connect real-world ideas to the curriculum.
But, in other ways, I’m not really creative, yet.
Then, as I was putting together the Educircles lesson on creativity, I spent a little bit of time researching what artists and business people were talking about with regards to creativity and innovation.
And the key idea seems to be that creativity is a learnable skill. It ties into that whole growth mindset idea. Through hard work and strategies and effort and a little bit of tinkering and trial and error, we can connect the dots in new ways.
So, I made a lesson plan to help teachers teach creativity. And, it has nothing to do with art.
- It has to do with coming up with lots and lots of ideas (because the more ideas you churn out, the better the chances that you find something good in there).
- It also has to do with having a bunch of different experiences to draw upon. And the way you get different experiences is by doing things differently from how you normally do them.
It’s a great lesson that we’re going to be releasing in a few days on Educircles. But of course, if I am going to teach a week of creativity in my class, I also have to teach the curriculum, so there’s a writing journal assignment built into the lesson. That way, teachers can collect information about writing fluency, organization, grammar and the mechanics of writing… (Deep down, I’ll always be a Grade 8 English Language Arts teacher, I guess.)
Wow, this was a long read.
Yep, it was. I’m intentionally leaving it as sort of a stream-of-consciousness style of writing. I want to walk through the creative process I used in creating Humanagrams.
Some people might look at the Humanagrams cards and say wow that looks pretty – I could never create that on the computer. I could never come up with that idea.
That’s the whole point of what we’re trying to do here in Educircles. We spend the hours and hours of prep time behind the scenes, so you don’t have to.
You have other things to do, like classroom management, differentiating for your students, communicating with parents, that pile of marking that you’re avoiding making eye contact with…
SIDE NOTE: I bet you could be super duper creative. Creating things is a process. The final draft never looks like the first draft. (Uh oh, how many of our students do a first draft and they’re done!)
I wrote this long post to model what the creative process could look like.
I didn’t just sit down with a blank computer screen and boom! A creative spark flashed and I produced the finished Humanagrams cards in one go.
As you can see, from the bold words above, there was a lot of trial and error.
- I was thinking back to other conversations I had
- I thought about other things that I have done in previous projects
- I thought about conversations with friends
- I thought about games I’ve played.
- I tried things and they didn’t work, so I tried something else…
- I was starting to figure out ways I could connect the dots in new ways.
And that’s what creativity is: Connecting the dots in new ways.
And we do that every day…
- when we’re figuring out how to deal with a problem we have in our relationships,
- when we are figuring out what to teach and how to connect it to the curriculum, or
- figuring out how to handle life when the world throws you a curveball.
So, everyone can be creative – but the challenge is finding the time because we’re so busy.
If you’ve read this far, and you like this stuff, you should sign up for email goodness to stay in the loop.
You might also want to check out our lesson on creativity.
How do you use Humanagrams in your class?
Also, please share the joy. If this was useful to you, please share it on your social networks. The world needs more human anagrams… |
- What is normal distribution used for?
- How do you determine normal distribution?
- How do you use a normal distribution table?
- Is a normal distribution positively skewed?
- What does it mean when data is not normally distributed?
- What is a normal distribution in statistics?
- What does it mean when data is normally distributed?
What is normal distribution used for?
The normal distribution is the most widely known and used of all distributions.
Because the normal distribution approximates many natural phenomena so well, it has developed into a standard of reference for many probability problems.
The normal distribution is actually a family of distributions, since µ and σ determine the shape of each particular distribution.
How do you determine normal distribution?
In order to be considered a normal distribution, a data set (when graphed) must follow a bell-shaped symmetrical curve centered around the mean. It must also adhere to the empirical rule that indicates the percentage of the data set that falls within (plus or minus) 1, 2 and 3 standard deviations of the mean.
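If you want to check the empirical rule numerically, Python's standard library can compute it directly. This sketch is an illustration added here, not part of the original answer:

```python
from statistics import NormalDist

nd = NormalDist(mu=0, sigma=1)  # any mu/sigma gives the same percentages
for k in (1, 2, 3):
    share = nd.cdf(k) - nd.cdf(-k)  # probability within k standard deviations
    print(f"within {k} standard deviation(s): {share:.1%}")
# Prints roughly 68.3%, 95.4%, and 99.7% -- the empirical rule.
```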
How do you use a normal distribution table?
To use the z-score table, start on the left side of the table and go down to 1.0; then, at the top of the table, go to 0.00 (this corresponds to the value 1.0 + 0.00 = 1.00). The value in the table is 0.8413, which is the probability.
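The same lookup can be done in code instead of a printed table. As a sanity check on the table value (an illustration, not part of the original answer), Python's statistics module gives the standard normal CDF directly:

```python
from statistics import NormalDist

z = 1.00
p = NormalDist().cdf(z)  # standard normal: mu=0, sigma=1
print(round(p, 4))       # 0.8413, matching the z-table entry for 1.00
```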
Is a normal distribution positively skewed?
For example, the normal distribution is a symmetric distribution with no skew. … Right-skewed distributions are also called positive-skew distributions. That’s because there is a long tail in the positive direction on the number line. The mean is also to the right of the peak.
What does it mean when data is not normally distributed?
Too many extreme values in a data set will result in a skewed distribution. Normality of data can be achieved by cleaning the data. … Never forget: The nature of normally distributed data is that a small percentage of extreme values can be expected; not every outlier is caused by a special reason.
What is a normal distribution in statistics?
The normal distribution is a continuous probability distribution that is symmetrical on both sides of the mean, so the right side of the center is a mirror image of the left side. The area under the normal distribution curve represents probability and the total area under the curve sums to one.
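For reference (not part of the original answer), the curve being described is the normal probability density function, written in the same µ and σ notation used earlier:

```latex
f(x) = \frac{1}{\sigma\sqrt{2\pi}} \, e^{-\frac{(x-\mu)^2}{2\sigma^2}}
```

Here µ is the mean (the center of symmetry) and σ is the standard deviation (the spread); integrating f(x) over all x gives a total area of 1, matching the statement above.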
What does it mean when data is normally distributed?
A normal distribution of data is one in which the majority of data points are relatively similar, meaning they occur within a small range of values, with fewer outliers on the high and low ends of the data range.
- Areas of Science: Aerodynamics & Hydrodynamics
- Time Required: Short (2-5 days)
- Material Availability: Readily available
- Cost: Very Low (under $20)
Abstract
Have you ever noticed how some jet planes have small, vertical projections at the tips of the wings? They're called winglets. What are they there for?
The goal of this project is to measure the effects on flight performance when winglets are added to a paper airplane design.
Andrew Olson, Ph.D., Science Buddies
- NASAexplores.com, date unknown. "Paper Winglets," NASAexplores.com, archived version available at http://www.theonlinepaperairplanemuseum.com/AZMuseum/W/WingletsPlane/WingletsPlanePlan.pdf.
Cite This Page
General citation information is provided here. Be sure to check the formatting, including capitalization, for the method you are using and update your citation, as needed.
Last edit date: 2020-01-12
The Boeing jet in Figure 1 has winglets at the tips of its wings. Why are they there? What do they do?
Figure 1. A Boeing 757 jet with winglets at the tips of the wings.
As an airplane moves through the air, the wings generate lift by creating an area of low pressure above the upper surface of the wing. The higher air pressure beneath the lower surface of the wing lifts the plane. At the tip of the wing, the high and low pressure air meet.
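As background (this equation is not in the original project text), the lift a wing generates is commonly summarized by the standard lift equation:

```latex
L = \tfrac{1}{2}\, \rho\, v^2\, S\, C_L
```

where ρ is the air density, v the airspeed, S the wing area, and C_L the lift coefficient, which depends on the wing's shape and angle of attack.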
The air forms miniature tornadoes, called wing tip vortices, that spread out behind the plane (see Figure 2, below).
Figure 2. Wing tip vortices made visible behind a plane using colored smoke.
Wing tip vortices cause two problems:
- the turbulent airflow they create can be strong enough to flip an airplane that encounters it;
- they also increase the drag forces on the airplane that generates them, decreasing fuel efficiency.
While there is no way to completely eliminate the vortices, winglets help reduce their negative effects.
In this project, you will test paper airplanes built both with and without winglets and measure the effect on flight performance. When doing your background research, you should also study vertical stabilizers. In the simple designs used in this project, winglets will also function as vertical stabilizers.
Terms and Concepts
- Vertical stabilizer
- Horizontal stabilizer
- Center of lift
- Center of gravity
- Wing tip vortices
- What are the three forces acting on a glider in flight?
- What relationship between these forces is needed for stable flight?
- How will the addition of winglets affect these forces?
- How will the addition of winglets affect flight performance?
- You'll definitely want to check out the Gliders section (among others) of NASA's Beginner's Guide to Aeronautics. This site is packed with useful information on the science of flight:
NASA, 2005a. "Guided Tours of the Beginner's Guide to Aeronautics," NASA, Glenn Research Center [accessed June 8, 2006] http://www.grc.nasa.gov/WWW/K-12/airplane/guided.htm.
- Here are two links with alternative designs for folded paper airplanes. The second link (Palmer, 2000) has an excellent plan (PL-1, "Joe's Favorite") for testing with and without winglets:
- NASA, 2005b. "Folding Paper Airplane: How To Build a JET Model," NASA Glenn Research Center [accessed June 8, 2006] http://www.grc.nasa.gov/WWW/K-12/WindTunnel/Activities/foldairplane.html.
- Palmer, J., 2000. "Joseph Palmer's Paper Airplanes," [accessed June 15, 2006] http://www.josephpalmer.com/planes/Airplane.shtml.
- Here are some sources of information on winglets:
- ScienceIQ.com, (n.d.). "Taming Twin Tornadoes," NASA Aerospace Technology Enterprise [accessed June 25, 2013] http://www.scienceiq.com/Facts/TwinTornadoes.cfm.
- Larson, G.C., 2001. "How Things Work: Winglets," originally published in Air & Space/Smithsonian, Aug/Sep 2001, available online: https://www.airspacemag.com/flight-today/how-things-work-winglets-2468375/.
Materials and Equipment
- Paper for making airplanes
- Tape measure to measure flight distance
- An indoor location with open space to test-fly the planes
- Optional: stop watch to measure flight time
- Do your background research so that you are knowledgeable about the terms, concepts, and questions above.
- Start with your favorite paper airplane design. Figure 3, below, shows one popular model (see the first suggestion in the Variations section, below, for ideas on optimizing the design). This NASA link has another design you can try: http://www.grc.nasa.gov/WWW/K-12/WindTunnel/Activities/foldairplane.html.
- Using your chosen design, build several identical paper planes.
- Test-fly each plane at least 5 times, and measure the distance flown. Be careful to launch the planes at the same angle, and with the same amount of force each time. Note any instabilities in the flight characteristics (nose dives, rolling, turning). Optional: you can also use a stop watch to measure the flight duration. Keep track of the data in your lab notebook.
- Fold a small portion of each wing tip up to create equal-sized winglets on each wing, and repeat the test flights.
- Calculate the average flight distance for each plane, both with and without winglets.
- Did flight distance improve with winglets? Were there improvements in other flight characteristics?
- Experiment with the design of the simple folded airplane to optimize the flight characteristics before trying winglets. For example, you can shorten the plane by folding back a portion of the nose before folding up the wings (step 3 in Figure 3, above). (What effect does this have on the center of gravity? What effect does this have on the center of lift?) You can alter the surface area of the wings slightly by experimenting with exactly where to place the fold in step 4 of Figure 3. Test your designs with multiple flight tests and keep track of the results in your lab notebook. Then use your best design to see if winglets improve performance even further.
- Experiment to find the optimal size for winglets.
- Does it matter if you fold the winglets down or up?
- The simple folded airplanes used in this project normally lack vertical stabilizers. Vertical stabilizers resist forces that would tend to make the plane yaw (nose moving from side to side). In this simple type of paper airplane, winglets can function as vertical stabilizers. Another type of paper airplane (made with laminated construction methods) generally does include a vertical stabilizer as part of the design. For more details, see the Science Buddies project What Makes a Good Aerodynamic Design? Test Your Ideas with High-Performance Paper Gliders. Do winglets improve the flight characteristics of high-performance paper gliders?
- For a more advanced project on winglets using a wind tunnel, see the Science Buddies project Winglets in Wind Tunnels.