The El Niño/Southern Oscillation or ENSO is the most important form of variability in the Earth’s climate on time scales greater than a year and less than a decade. It is a quasi-periodic phenomenon that occurs across the tropical Pacific Ocean every 3 to 7 years, and on average every 4 years.
The ENSO can cause extreme weather such as floods, droughts and other weather disturbances in many regions of the world. Developing countries dependent upon agriculture and fishing, particularly those bordering the Pacific Ocean, are the most affected.
This animation produced by the Australian Bureau of Meteorology shows how the ENSO works:
During El Niño years, trade winds in the tropical Pacific weaken, and the eastern part of this ocean warms up. During La Niña years, the trade winds get stronger again, and it’s the western part of the ocean that becomes warmer. Warmer oceans create more clouds and rain.
This cycle is linked to the Southern Oscillation: an oscillation in the difference in air pressure between the eastern and western Pacific:
The top graph shows variations in the water temperature of the tropical eastern Pacific ocean: when it’s hot we have an El Niño. The bottom graph shows the air pressure in Tahiti minus the air pressure in Darwin, Australia — up to a normalization constant, this is called the Southern Oscillation Index, or SOI. If you stare at the graphs a while, you’ll see they’re quite strongly correlated—or more precisely, anticorrelated, since one tends to go up when the other goes down. You’ll also see that the cycles are far from perfectly periodic.
The ENSO has been changing in a number of ways in the last few decades. Some attribute these changes to global warming, but this is still uncertain.
During the last several decades the number of El Niño events has increased, and the number of La Niña events has decreased. The question is whether this is a random fluctuation, a normal instance of variation for this phenomenon, or the result of global warming:
Gabriel A. Vecchi and Andrew T. Wittenberg, El Niño and our future climate: where do we stand?, WIRES Climate Change, (9 February 2010).
Kevin E. Trenberth and Timothy J. Hoar, The 1990-1995 El Niño-Southern Oscillation event: Longest on record, Geophysical Research Letters 23 (January 1996), 57–60.
Some studies of historical data show that the recent El Niño variation may be linked to global warming. For example, one of the most recent results is that even after subtracting the positive influence of decadal variation, shown to be possibly present in the ENSO trend, the amplitude of the ENSO variability in the observed data still increases, by as much as 60% in the last 50 years:
Alexey V. Fedorov and S. George Philander, Is El Niño changing?, Science 288 (2000), 1997–2002.
Qiong Zhang, Yue Guan, Haijun Yang, ENSO Amplitude change in observation and coupled models, Advances in Atmospheric Sciences 25 (2008), 331–336.
It is not certain what exact changes will happen to ENSO in the future: different models make different predictions. It may be that the observed phenomenon of more frequent and stronger El Niño events occurs only in the initial phase of the global warming, and then (e.g., after the lower layers of the ocean get warmer as well), El Niño will become weaker than it was. It may also be that the stabilizing and destabilizing forces influencing the phenomenon will eventually compensate for each other. More research is needed to provide a better answer to that question, but the current results do not exclude the possibility of dramatic changes. See:
William J. Merryfield, Changes to ENSO under CO2 doubling in a multimodel ensemble, Journal of Climate 19 (2006), 4009–4027.
G. A. Meehl, H. Teng and G. Branstator, Future changes of El Niño in two global coupled climate models, Climate Dynamics 26 (2006), 549.
Abstract: The variability of ENSO, the largest interannual climate variation of the Pacific ocean-atmosphere system, and its relation to the Pacific Decadal Oscillation and global warming are documented. Analysis using the Empirical Mode Decomposition method, which is useful for analyzing nonlinear, nonstationary climate records, reveals that ENSO contains strong seasonal, biannual, decadal signals, as well as a monotonic trend that is shown to be tied closely to global warming. The frequencies of the interannual components of ENSO are higher when the decadal components of ENSO are in the warm phase, and are also increasing with the global warming trend. It is also argued that the decadal signals and the trend are connected to the abnormal ENSO events in the 1990s.
Winds called trade winds blow west across the tropical Pacific. During La Niña years, water at the ocean’s surface moves west with the wind, warming up in the sunlight as it travels. So, warm water collects at the ocean’s surface in the western Pacific. This creates more clouds and rainstorms in Asia. Meanwhile, since surface water is being dragged west by the wind, cold water from below gets pulled up to take its place in the eastern Pacific, off the coast of South America.
Furthermore, because the ocean’s surface is warmer in the western Pacific, it heats the air and makes it rise. This actually helps the trade winds blow west: wind blows west to fill the ‘gap’ left by rising air.
So, it’s a kind of feedback loop: the oceans being warmer in the western Pacific helps the trade winds blow west, and that makes the western oceans warmer than the eastern ones.
Of course, one may ask: why do the trade winds blow west?
Without an answer to this, the story so far would work just as well if we switched the words ‘west’ and ‘east’. That wouldn’t mean the story is wrong. It might just mean that there were two stable states of the Earth’s climate: a La Niña state where the trade winds blow west, and another state—say, the El Niño—where they blow east. One could imagine a world permanently stuck in one of these phases. Or perhaps it could flip between these two phases for some reason.
Something roughly like the last choice is actually true. But it’s not so simple: there’s not a complete symmetry between west and east!
Why not? Mainly because the Earth is turning to the east. Air near the equator warms up and rises, so new air from more northern or southern regions moves in to take its place. But because the Earth is fatter at the equator, the equator is moving faster to the east. So, this new air from other places is moving less quickly by comparison… so as seen by someone standing on the equator, it blows west. This is an example of the Coriolis effect.
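To get a feel for the size of this effect, here is a rough back-of-the-envelope calculation; the numbers are only illustrative. The Earth’s equatorial radius is about 6378 km and it rotates once every sidereal day, roughly 86,164 seconds, so a point on the equator moves east at about 2π × 6378 km / 86,164 s ≈ 465 m/s. A point at 30° latitude sits on a circle of radius 6378 km × cos 30° ≈ 5524 km, so it moves east at only about 403 m/s. Air drifting from 30° toward the equator therefore arrives moving about 60 m/s slower to the east than the ground beneath it, which an observer standing on the equator sees as a westward wind. In reality friction and pressure gradients keep the actual trade winds far weaker than this idealized figure, but the sign of the effect is the point.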
Beware: a wind that blows to the west is called an easterly. So the westward-blowing trade winds I’m talking about are called "northeasterly trades" and "southeasterly trades" on this picture. But don’t let that confuse you.
Terminology aside, the story so far should be clear. The trade winds have a good intrinsic reason to blow west, but in the La Niña phase they’re also part of a feedback loop where they make the western Pacific warmer… which in turn helps the trade winds blow west.
But then comes an El Niño. Now for some reason the westward winds weaken. This lets the built-up warm water in the western Pacific slosh back east. And with weaker westward winds, less cold water is pulled up to the surface in the east. So, the eastern Pacific warms up. This makes for more clouds and rain in the eastern Pacific—that’s when we get floods in Southern California. And with the ocean warmer in the eastern Pacific, hot air rises there, which tends to counteract the westward winds even more.
In other words, all the feedbacks reverse themselves. But note: the trade winds never mainly blow east. Even during an El Niño they still blow west, just a bit less. So, the climate is not flip-flopping between two symmetrical alternatives. It’s flip-flopping between two asymmetrical alternatives.
One remaining question is: why do the westward trade winds weaken? We could also ask the same question about the start of the La Niña phase: why do the westward trade winds get stronger then?
The short answer is that nobody is exactly sure. Or at least there’s no one story that everyone agrees on. There are actually several stories… and perhaps more than one of them is true. So, at this point it is worthwhile revisiting the actual data:
The top graph shows variations in the water temperature of the tropical Eastern Pacific ocean. When it’s hot we have El Niños: those are the red hills in the top graph. The blue valleys are La Niñas. Note that it’s possible to have two El Niños in a row without an intervening La Niña, or vice versa!
The bottom graph shows the Southern Oscillation Index or SOI. This is basically the air pressure in Tahiti minus the air pressure in Darwin, Australia.
So, when the SOI is high, the air pressure is higher in the east Pacific than in the west Pacific. This is what we expect in a La Niña: that’s why the westward trade winds are strong then! Conversely, the SOI is low in the El Niño phase. This variation in the SOI is called the Southern Oscillation.
If you look at the graphs above, you’ll see how one looks almost like an upside-down version of the other. So, the El Niño/La Niña cycle is tightly linked to the Southern Oscillation.
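If you want to check this quantitatively rather than by eye, the Pearson correlation coefficient between the two monthly series does the job. Here is a minimal Python sketch; the two series below are synthetic stand-ins (a fake 4-year cycle plus noise), not the real records, but the same numpy call works on the downloaded Niño 3.4 and SOI data.

```python
import numpy as np

# Hypothetical example: nino34 and soi stand in for monthly series of
# the Niño 3.4 SST anomaly and the Southern Oscillation Index.  The
# "upside-down" relationship shows up as a strongly negative Pearson
# correlation coefficient (anticorrelation).
rng = np.random.default_rng(0)
t = np.arange(600)                       # 50 years of monthly values
common = np.sin(2 * np.pi * t / 48)      # a fake ~4-year cycle
nino34 = common + 0.3 * rng.standard_normal(t.size)
soi = -common + 0.3 * rng.standard_normal(t.size)

r = np.corrcoef(nino34, soi)[0, 1]
print(f"correlation between Niño 3.4 and SOI: {r:+.2f}")  # close to -1
```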
Another thing you’ll see from is that ENSO cycle is far from perfectly periodic! Here’s a graph of the Southern Oscillation Index going back a lot further:
This graph was made by William Kessler. His explanations of the ENSO cycle are well worth reading:
• William Kessler, El Niño: How it works, how we observe it.
It is worthwhile seeing his comments on theories about why an El Niño starts, and why it ends. These theories involve three additional concepts:
• The thermocline is the border between the warmer surface water in the ocean and the cold deep water, 100 to 200 meters below the surface. During the La Niña phase, warm water is blown to the western Pacific, and cold water is pulled up to the surface of the eastern Pacific. So, the thermocline becomes deeper in the west than the east. When an El Niño occurs, the thermocline flattens out:
• Oceanic Rossby waves are very low-frequency waves in the ocean’s surface and thermocline. At the ocean’s surface they are only 5 centimeters high, but hundreds of kilometers across. The surface waves are mirrored by waves in the thermocline, which are much larger, 10-50 meters in height. When the surface goes up, the thermocline goes down.
• The Madden-Julian oscillation is a pulse that moves east across the Indian Ocean and Pacific ocean at 4-8 meters/second. It manifests itself as patches of anomalously high rainfall and also anomalously low rainfall. Strong Madden-Julian Oscillations are often seen 6-12 months before an El Niño starts.
With this bit of background, we hope the reader is prepared to understand what Kessler wrote in his El Niño FAQ:
There are two main theories at present. The first is that the event is initiated by the reflection from the western boundary of the Pacific of an oceanic Rossby wave (type of low-frequency planetary wave that moves only west). The reflected wave is supposed to lower the thermocline in the west-central Pacific and thereby warm the SST [sea surface temperature] by reducing the efficiency of upwelling to cool the surface. Then that makes winds blow towards the (slightly) warmer water and really start the event. The nice part about this theory is that the Rossby waves can be observed for months before the reflection, which implies that El Niño is predictable.
The other idea is that the trigger is essentially random. The tropical convection (organized large-scale thunderstorm activity) in the rising air tends to occur in bursts that last for about a month, and these bursts propagate out of the Indian Ocean (known as the Madden-Julian Oscillation). Since the storms are geostrophic (rotating according to the turning of the earth, which means they rotate clockwise in the southern hemisphere and counter-clockwise in the north), storm winds on the equator always blow towards the east. If the storms are strong enough, or last long enough, then those eastward winds may be enough to start the sloshing. But specific Madden-Julian Oscillation events are not predictable much in advance (just as specific weather events are not predictable in advance), and so to the extent that this is the main element, then El Niño will not be predictable.
In my opinion both of these processes can be important in different El Niños. Some models that did not have the MJO storms were successful in predicting the events of 1986-87 and 1991-92. That suggests that the Rossby wave part was a main influence at that time. But those same models have failed to predict the events since then, and the westerlies have appeared to come from nowhere. It is also quite possible that these two general sets of ideas are incomplete, and that there are other causes entirely. The fact that we have very intermittent skill at predicting the major turns of the ENSO cycle (as opposed to the very good forecasts that can be made once an event has begun) suggests that there remain important elements that await explanation.
Apparently the simplest climate model that exhibits somewhat realistic ENSO behavior is the ‘minimal model’ of the equatorial ocean-atmosphere system called the Zebiak–Cane model or ZC model:
S. E. Zebiak and M. A. Cane, A model El Niño–Southern Oscillation, Mon. Weather Review 115 (1987), 2262–2278.
For a quick overview see:
Mark Decker, A model El Niño–Southern Oscillation, slide presentation, 1 February 2006.
S. E. Zebiak and M. A. Cane, Forecasts of tropical Pacific SST using a simple coupled ocean/atmosphere dynamical model.
In the Zebiak–Cane model, when the ocean-atmosphere coupling strength rises above a certain critical value, a Hopf bifurcation occurs. In other words, below this critical coupling there is a stable equilibrium for the behavior of ocean and atmosphere, or in mathematical language, an attractive fixed point. However, above the critical coupling the ocean and atmosphere display periodic behavior, or in mathematical language, a limit cycle.
A picture is probably worth a thousand words, so here is a picture of a Hopf bifurcation, for a system with a parameter β whose critical value is 0:
You can see that the solution spirals in to a stable fixed point for β < 0, but approaches a circle—the ‘limit cycle’—for β > 0. This picture is taken from:
Perhaps the simplest differential equation exhibiting a Hopf bifurcation is:

$$\frac{dx}{dt} = -\omega y + x(\beta - x^2 - y^2), \qquad \frac{dy}{dt} = \omega x + y(\beta - x^2 - y^2) \qquad (1)$$

This is much clearer in polar coordinates:

$$\frac{dr}{dt} = \beta r - r^3, \qquad \frac{d\theta}{dt} = \omega \qquad (2)$$

These say that the solution goes round and round at constant angular velocity ω, while the distance from the origin, r, approaches either 0 if β ≤ 0, or the unique positive solution of

$$\beta r - r^3 = 0$$

if β > 0. Solving this equation, we see in the latter case that we get a limit cycle with radius $r = \sqrt{\beta}$.
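If you want to see this bifurcation without solving anything by hand, a few lines of numerical integration suffice. The following is a crude forward-Euler sketch in Python, purely illustrative: for β < 0 the trajectory spirals into the origin, while for β > 0 it settles onto a circle of radius roughly √β.

```python
import numpy as np

def hopf_step(x, y, beta, omega=1.0, dt=0.01):
    """One Euler step of dx/dt = -omega*y + x*(beta - x^2 - y^2),
                         dy/dt =  omega*x + y*(beta - x^2 - y^2)."""
    r2 = x * x + y * y
    return (x + dt * (-omega * y + x * (beta - r2)),
            y + dt * ( omega * x + y * (beta - r2)))

def final_radius(beta, steps=20000):
    x, y = 1.0, 0.0                  # start away from the origin
    for _ in range(steps):
        x, y = hopf_step(x, y, beta)
    return np.hypot(x, y)            # distance from origin after 200 time units

print(final_radius(beta=-0.25))      # spirals in: prints an essentially zero radius
print(final_radius(beta=+0.25))      # limit cycle: radius settles near sqrt(0.25) = 0.5
```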
For more, see Hopf bifurcation.
A rather different pedagogical toy model of ENSO may be found here:
In its simplest form, this model uses a delay differential equation

$$\frac{dT}{dt} = k T - b T^3 - A\, T(t - \Delta)$$

to model the possible effect of oceanic Rossby waves, where T is the sea surface temperature anomaly in the coupled region and Δ is the delay. By rescaling time appropriately and redefining the constants we get the dimensionless form, equation (2) of the paper:

$$\frac{dT}{dt} = T - T^3 - \alpha\, T(t - \delta)$$
Quoting the paper:
Lastly, the model also considers equatorially-trapped ocean waves propagating across the Pacific, before interacting back with the central Pacific region after a certain time delay. These ocean waves are “hidden” Rossby waves which move westward on the thermocline, reflect off the rigid continental boundary in the West and then return eastward along the equator as Kelvin waves.
The delay term has a negative coefficient representing a negative feedback. To see the reason for this, let us consider a warm SST [sea surface temperature] perturbation in the coupled region. This produces a westerly wind response that deepens the thermocline locally (immediate positive feedback), but at the same time, the wind perturbations produce divergent westward propagating Rossby waves that return after a time Δ to create upwelling and cooling, reducing the original perturbation.
Note that equation (9) from this paper,

$$\frac{dT}{dt} = T - T^3 - \alpha\, T(t - \delta) + \gamma \cos(\omega t)$$

with an annual forcing term $\gamma \cos(\omega t)$, resembles the bistable potential on the page stochastic resonance in the case of a periodic forcing, with an additional linear time-delayed feedback term.
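Delay differential equations like this are easy to play with numerically: the only ingredient beyond an ordinary ODE solver is a stored history. Here is a rough forward-Euler sketch in Python. The values of α and δ below are just illustrative choices picked to land in the oscillatory regime; they are not the calibrated ENSO values from the paper.

```python
import numpy as np

# Forward-Euler integration of the dimensionless delayed action oscillator
#   dT/dt = T - T^3 - alpha * T(t - delta)
# starting from a small constant history.
alpha, delta = 0.75, 6.0
dt = 0.01
n_delay = int(delta / dt)             # how many steps back the delay reaches
n_steps = 20000                       # 200 dimensionless time units

T = np.empty(n_delay + n_steps)
T[:n_delay + 1] = 0.1                 # constant history up to t = 0
for i in range(n_delay, n_delay + n_steps - 1):
    T[i + 1] = T[i] + dt * (T[i] - T[i] ** 3 - alpha * T[i - n_delay])

# Over the last 100 time units the solution still swings between
# clearly different values instead of settling to a fixed point.
print(T[-10000:].min(), T[-10000:].max())
```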
The Hopf bifurcation model above gives perfectly periodic behavior, so it does not explain the irregularity of the ENSO cycle. The peculiar variability in the ‘period’ of the ENSO has been studied with the help of stochastic differential equations:
Abstract: The El Niño/Southern Oscillation (ENSO) phenomenon is the dominant climatic fluctuation on interannual time scales. It is an irregular oscillation with a distinctive broadband spectrum. In this article, we discuss recent theories that seek to explain this irregularity. Particular attention is paid to explanations that involve the stochastic forcing of the slow ocean modes by fast atmospheric transients. We present a theoretical framework for analysing this picture of the irregularity and also discuss the results from a number of coupled ocean–atmosphere models. Finally, we briefly review the implications of the various explanations of ENSO irregularity to attempts to predict this economically significant phenomenon.
Kleeman actually discusses two general theories for the irregularity of the ENSO:
Perhaps the slowly varying modes involved interact with each other in a chaotic way.
Perhaps the slowly varying modes involved interact with each other in a non-chaotic way, but also interact with rapidly-varying chaotic modes, which inject noise into what would otherwise be simple periodic behavior.
Of course there is a third option: Perhaps the slowly varying modes involved interact with each other in a chaotic way, but also interact with rapidly-varying chaotic modes. However, Kleeman concentrates on the second option.
As a toy model, Kleeman describes the behavior of a version of equations (1) and (2) to which white noise has been added. This produces irregular cyclic behavior even for β < 0. Roughly speaking, the noise keeps knocking the solution away from the stable fixed point at the origin, so it keeps going round and round, but in an irregular way.
Note that a transformation from Cartesian coordinates to polar coordinates in the presence of noise cannot use the usual chain rule of calculus, but has to use e.g. the Itô formula for Itô stochastic differential equations instead.
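To see this mechanism in a toy setting, one can add white noise to the Hopf system above and integrate it with the Euler-Maruyama scheme. The sketch below is purely illustrative and the parameter values are arbitrary: even with β < 0, where the noiseless system would sit quietly at the origin, the noisy trajectory keeps circling at a fluctuating radius.

```python
import numpy as np

# Euler-Maruyama integration of the Hopf normal form with additive white noise:
#   dx = [-omega*y + x*(beta - x^2 - y^2)] dt + sigma dW1
#   dy = [ omega*x + y*(beta - x^2 - y^2)] dt + sigma dW2
rng = np.random.default_rng(1)
beta, omega, sigma, dt = -0.2, 1.0, 0.3, 0.01
x, y = 0.0, 0.0
radii = []
for _ in range(50000):
    r2 = x * x + y * y
    dW = np.sqrt(dt) * rng.standard_normal(2)
    x, y = (x + dt * (-omega * y + x * (beta - r2)) + sigma * dW[0],
            y + dt * ( omega * x + y * (beta - r2)) + sigma * dW[1])
    radii.append(np.hypot(x, y))

print(np.mean(radii))   # stays well above zero even though beta < 0
```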
For a talk on this subject, see:
It is also interesting to consider a stochastic version of the delayed action oscillator. For details, see
and also our page on stochastic delay differential equations.
As usual, a good place to start is the Wikipedia article:
Australian Government, Bureau of Meteorology, El Niño Southern Oscillation (ENSO).
US National Weather Service, Climate Prediction Center, The ENSO cycle.
Kevin E. Trenberth, The definition of El Niño, Bulletin of the American Meteorological Society 78 (1997), 2771–2777.
For our attempts to understand and predict El Niños, see:
and the discussions here:
The Southern Oscillation Index or SOI is computed from the air pressure anomaly at sea level in Tahiti minus the air pressure anomaly at sea level in Darwin, Australia. The precise definition is here.
You can download SOI data from here:
You can get the mean monthly air pressure anomalies in Darwin and Tahiti here, and also read more about how the SOI is computed:
• Southern Oscillation Index based upon annual standardization, Climate Analysis Section, NCAR/UCAR. This includes links to monthly sea level pressure anomalies in Darwin and Tahiti, in either ASCII format (click the second two links) or netCDF format (click the first one and read the explanation).
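For readers who want to roll their own index, here is a minimal Python sketch of the "standardized Tahiti minus standardized Darwin" recipe. The exact normalization (annual vs. monthly standardization, the factor of 10 used by some agencies) varies between sources, so treat this as an illustration rather than the official algorithm; the input arrays below are placeholders, not real data.

```python
import numpy as np

def standardized(p, months):
    """Remove each calendar month's mean pressure, then divide by the
    standard deviation of the resulting anomalies."""
    anom = p.astype(float)
    for m in range(12):
        anom[months == m] -= p[months == m].mean()
    return anom / anom.std()

def soi(tahiti, darwin, months):
    """A rough SOI: standardized Tahiti minus standardized Darwin,
    renormalized by the standard deviation of the difference."""
    diff = standardized(tahiti, months) - standardized(darwin, months)
    return diff / diff.std()

# Usage sketch with made-up numbers; real input would be the monthly
# sea level pressure series downloaded from the links above.
months = np.arange(240) % 12
rng = np.random.default_rng(2)
tahiti = 1012 + rng.standard_normal(240)
darwin = 1009 + rng.standard_normal(240)
print(soi(tahiti, darwin, months)[:5])
```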
Niño 3.4 is the area-averaged sea surface temperature anomaly in the region 5°S-5°N and 170°-120°W. The Oceanic Niño Index or ONI is the 3-month running mean of the Niño 3.4 index.
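Since the ONI is nothing more than a 3-month running mean of the Niño 3.4 series, it is easy to compute yourself once you have the monthly data. Here is a one-function Python sketch with made-up anomaly values; note that the official index also defines the underlying anomalies against shifting 30-year base periods, which this sketch does not attempt to reproduce.

```python
import numpy as np

def oni(nino34_monthly):
    """3-month running mean of a monthly Niño 3.4 anomaly series."""
    x = np.asarray(nino34_monthly, dtype=float)
    return np.convolve(x, np.ones(3) / 3, mode="valid")

print(oni([0.4, 0.7, 1.1, 1.3, 0.9]))   # -> [0.733..., 1.033..., 1.1]
```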
You can get Niño 3.4 data here:
• Niño 3.4 data since 1870 calculated from the HadISST1, NOAA. Discussed in N. A. Rayner et al, Global analyses of sea surface temperature, sea ice, and night marine air temperature since the late nineteenth century, J. Geophys. Res. 108 (2003), 4407.
You can also get Niño 3.4 data here:
• Monthly Niño 3.4 index, Climate Prediction Center, National Weather Service.
The actual temperatures in Celsius are close to those at the other website, but the anomalies are rather different, because they’re computed in a way that takes global warming into account. See the website for details.
Niño 3.4 is just one of several official regions in the Pacific:
• Niño 1: 80°W-90°W and 5°S-10°S.
• Niño 2: 80°W-90°W and 0°S-5°S.
• Niño 3: 90°W-150°W and 5°S-5°N.
• Niño 3.4: 120°W-170°W and 5°S-5°N.
• Niño 4: 160°E-150°W and 5°S-5°N.
For more details, read this:
You can get Niño 1+2, 3, 4 and 3.4 here:
It gives the actual temperatures and then the “anomalies”. These do not match those in the previous sources for Niño 3.4.
You can get daily average surface air temperatures from here:
More precisely, there’s a bunch of files here containing worldwide daily average temperatures on a 2.5° latitude × 2.5° longitude grid (144 × 73 grid points), from 1948 to 2010. If you go here the website will help you get data from within a chosen rectangle in a grid, for a chosen time interval. These files are also available at
These are 'NetCDF files'. After downloading this data, you can convert the data in a chosen rectangle and time interval into a format suitable for R software using this program written by Graham Jones:
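If you would rather work in Python than R, something along the following lines does the same job using xarray. The file and variable names ('air.sig995.1950.nc', 'air') are assumptions based on the usual layout of the NCEP/NCAR reanalysis surface air temperature files, not something taken from the links above, so check them against your actual download.

```python
import xarray as xr

# Open one year of daily data and cut out a latitude-longitude box.
# File name, variable name and coordinate conventions (latitude running
# north to south, longitude 0-360) are assumptions; adjust as needed.
ds = xr.open_dataset("air.sig995.1950.nc")
box = ds["air"].sel(lat=slice(5, -5),      # 5°N to 5°S
                    lon=slice(190, 240))   # 170°W-120°W in 0-360 longitudes

# Unweighted area average over the box (fine near the equator),
# written out as a plain CSV for further analysis.
daily_mean = box.mean(dim=["lat", "lon"])
daily_mean.to_dataframe().to_csv("nino34_region_1950.csv")
```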
The TAO array, renamed the TAO/TRITON array on 1 January 2000, consists of approximately 70 moorings in the Tropical Pacific Ocean, telemetering oceanographic and meteorological data to shore in real-time via the Argos satellite system.
You can download data here:
There is an older data set at U.C. Irvine:
The data is stored in ASCII files with one observation per line. Spaces separate fields and periods (.) denote missing values.
Among other things they write:
This data was collected with the Tropical Atmosphere Ocean (TAO) array which was developed by the international Tropical Ocean Global Atmosphere (TOGA) program. The TAO array consists of nearly 70 moored buoys spanning the equatorial Pacific, measuring oceanographic and surface meteorological variables critical for improved detection, understanding and prediction of seasonal-to-interannual climate variations originating in the tropics, most notably those related to the El Nino/Southern Oscillation (ENSO) cycles.
The moorings were developed by National Oceanic and Atmospheric Administration’s (NOAA) Pacific Marine Environmental Laboratory (PMEL). Each mooring measures air temperature, relative humidity, surface winds, sea surface temperatures and subsurface temperatures down to a depth of 500 meters, and a few of the buoys measure currents, rainfall and solar radiation. The data from the array, and current updates, can be viewed on the web at this address.
The data consists of the following variables: date, latitude, longitude, zonal winds (west < 0, east > 0), meridional winds (south < 0, north > 0), relative humidity, air temperature, sea surface temperature and subsurface temperatures down to a depth of 500 meters. Data was taken from the buoys from as early as 1980 for some locations. Other data that was taken in various locations are rainfall, solar radiation, current levels, and subsurface temperatures.
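Here is a guess at how one might read such a file in Python with pandas, going only by the description above (whitespace-separated fields, periods for missing values). The file name and the column list are assumptions; the real files may include extra identifier columns and the subsurface temperature levels, so adjust after looking at a few lines of the actual download.

```python
import pandas as pd

# Column names follow the variable list in the text; both they and the
# file name "tao.dat" are placeholders, not the verified file layout.
cols = ["date", "latitude", "longitude", "zonal_wind", "meridional_wind",
        "humidity", "air_temp", "sst"]

tao = pd.read_csv("tao.dat", sep=r"\s+", names=cols, na_values=".")
print(tao[["zonal_wind", "meridional_wind", "air_temp", "sst"]].describe())
```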
The latitude and longitude in the data showed that the buoys moved around to different locations. The latitude values stayed within a degree of the approximate location. Yet the longitude values were sometimes as far as five degrees off of the approximate location.
Looking at the wind data, both the zonal and meridional winds fluctuated between -10 m/s and 10 m/s. The plot of the two wind variables showed no linear relationship. Also, the plots of each wind variable against the other three meteorological variables showed no linear relationships.
The relative humidity values in the tropical Pacific were typically between 70% and 90%.
Both the air temperature and the sea surface temperature fluctuated between 20 and 30 degrees Celsius. The plot of the two temperature variables shows a positive linear relationship. The two temperatures, when each is plotted against time, also show similar patterns. Plots of the other meteorological variables against the temperature variables showed no linear relationship.
There are missing values in the data. As mentioned earlier, not all buoys are able to measure currents, rainfall, and solar radiation, so these values are missing dependent on the individual buoy. The amount of data available is also dependent on the buoy, as certain buoys were commissioned earlier than others.
All readings were taken at the same time of day.
13 years ago, amateur scientist Willis Eschenbach developed a thought experiment that he hoped would very simply illustrate how the Greenhouse Effect works.
The main claim is that the addition of a steel shell surrounding a planetary surface will cause the inner surface to emit TWICE (235 -> 470 W/m2) the radiation compared to not having a steel shell. This should significantly raise the surface temperature from ~254K to ~302K.
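For reference, the quoted temperatures follow from plain black-body (Stefan-Boltzmann) emission; here is a small Python sketch that reproduces them. It only shows where the ~254K and ~302K figures come from, and says nothing yet about whether the shell can actually double the surface emission; it is not taken from either Willis’ or Zoe’s worksheets.

```python
# A body radiating F W/m^2 as a black body has temperature T = (F / sigma)^0.25.
SIGMA = 5.670374419e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

def bb_temp(flux):
    return (flux / SIGMA) ** 0.25

print(bb_temp(235))             # ~253.7 K  (surface with no shell)
print(bb_temp(470))             # ~301.7 K  (Willis' claimed surface with shell)
```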
Is this true? No.
Willis gives us the freedom to construct any power source with any chosen radius. I will choose a mini nuclear reactor wrapped in a steel housing, with the total surface area being 1 m2. The inner radius of the steel housing is 75% of the total radius.
Let us go through the equations to set up Willis’ initial scenario (A):
The nuclear power reactor is ONLY capable of making its wall 254.041K – to meet Willis’ initial criteria. It is not capable of anything greater, because nuclear reactions are fixed. No varying levels of downstream radiation will enable nuclear fission reactions to generate more joules.
Now let us see what happens when we add a steel shell (B):
I will give Willis credit for doing a good job of demonstrating the real greenhouse effect:
Outgoing radiation is halved and T2 (our “surface”) has increased from 253.726K to 253.884K, a very feeble gain.
The problem with Willis’ approach is that he doesn’t reduce outgoing radiation and relies on his heat source to crank up … when there is no physical way it can do so.
| T# | Willis A | Willis B | Reality A | Reality B |
|---|---|---|---|---|
| T2 | 253.726K (235W) | 301.732K (470W) | 253.726K (235W) | 253.884K (235.588 W) |
| T3 | – | 253.726K (235W) | – | 213.490K (117.393 W) |
So there you have it. The real steel greenhouse effect managed to raise the surface temperature by 0.062%.
Subsequent additions of steel shells will keep raising the surface temperature (T2), asymptotically approaching (but never reaching) the nuclear reactor wall temperature (T1).
Enjoy 🙂 -Zoe
132 thoughts on “Real Steel Greenhouse Effect”
Zoe, I can’t check your maths. Let’s hope Willis shows up to defend his argument. Intuitively, steel is a good conductor. But a steel shell would stop the atmosphere convecting. Convection elevates a molecule to a height where there is less material that is capable of absorbing radiation directly to space. Ozone is a large molecule that efficiently absorbs wave lengths that are most commonly emitted by the Earth system and its presence gives rise to an increase in gas temperature above an altitude of 10,000 metres in the mid latitudes. There is no evidence that the enhanced emission by the molecules warmed by ozone (nitrogen and oxygen) actually warms the layers beneath the point where that temperature increase occurs.
The supposed warming effect of CO2 fails to take into account the efficiency of convective processes.
Willis should spend a little time living in a sea container in an environment where air temperature falls to near zero overnight.
Well, um, there is no atmosphere in this example. No convection at all.
Merely adding conductive matter on top of our “surface” would also raise temperature, but not as much as a detached shell via radiative means.
More useful maths. https://fee.org/articles/41-inconvenient-truths-on-the-new-energy-economy/
Thanks, Zoe, so interesting as always. I plan to check your math soon 😉
Ha ha. Please do. Computer formatting formulas is a pain in the neck.
I don’t think it’s fair to call Willis an amateur scientist,
nor would it be fair to use that title for you.
You both present scientific articles, so are scientists
by my definition of scientist, even if not paid for your work.
There are plenty of people with science degrees
paid to do reckless climate scaremongering. They
are called “scientists”, but always wrong wild guesses
of the future climate are not science.
No need to print the following
portion of my comment:
Your three posts in three days is like a miracle.
Every reader wishes you would post more often.
Once in a week, or two, would be fine
if you had the time. It’s nice to be “in demand”.
I publish a climate science and energy blog
where I present the best articles I’ve read that
day. So far over 319,000 page views.
I have only recommended two other blogs
in the past few years, that have consistently
good articles. Your blog is also consistently
good, and I’d recommend it … but …
you don’t have a regular publishing
schedule, so if people visit on my
recommendation, and see nothing new
for a month or two, they’ll get mad at ME.
This already happened with two friends
who I told about your late 2021
global sea ice area articles
— both of them thought you had
“gone out of business”
after those two articles.
If you had a regular schedule, such as a
a post every week, or every two weeks,
or even once a month, that would allow me
to recommend your blog.
I get 300 to 400 page views a day.
PS: If you are anything like my wife,
you are now mad at me for giving
you advice. If so, please take it out
on Willis E. ha ha
Bingham Farms, Michigan
I’m not mad. I’m a just very busy gal. I’d love to have more posts, but, as you can see, I do original research. Original research is very time consuming. When I don’t have the time, I make errors.
I don’t want to have a blog that just links to other articles. Neither do I want to have boiler plate posts about updates to climate data.
Right now I have just 2 ideas for next posts. After that, I will again have to scour for something original.
Ideally I’d like to share everything I know, but I think that will be boring for most people. My computer skills aren’t interesting to people. Those posts get like no views.
So, let me annoy you even more with
an example. Let’s say you have the time
to do 12 posts in a year. Your readers
would be happy if they knew Zoe would
be publishing something in the first week
of every month. That would mean NOT
publishing three articles in three days,
as you just did. Those three articles could
have been published over three months.
When readers don’t see a post for a long
time, or there is no predictable schedule,
they stop looking. … I know genius can’t
be rushed, but if you publish three articles
in three days, and perhaps nothing else
for the next month, readers disappear.
That’s just my two cents, which is
probably worth one cent after inflation.
This is not a job. I’m not on a schedule. When I have free time and desire, I write articles. Making this a scheduled task is making it like work, of which I have plenty.
Thanks for the suggestion tho. Let’s see what happens.
Sorry, I jumped to the conclusion that
having more readers was your goal.
That’s apparently not a priority.
I also failed to explain my ideas well.
There is no need for you to do more work,
or to work at different times.
Your publishing schedule does not
have to match your writing schedule.
Your work is just as valuable if presented
the day it is finished, or many months later.
I had recommended your blog to two
friends in late 2021 after reading two articles
on Global Sea Ice Area. With no new articles
in the next 2+ months, both friends deleted
your blog from their Bookmarks list, and never
visited again. Which I thought was a shame.
My own blogs serve a different purpose.
I merely share the best articles I’ve read
that day, and rarely write my own articles.
I used to write articles for my for-profit
newsletter ECONOMIC LOGIC, from
1977 to 2020. That was a lot of work.
Now I use my blogs as an antidote
for social and mass media censorship,
and parroting government press releases.
So I want more readers, and assumed that
desire applied to everyone with a blog.
I was just trying to be helpful.
Bingham Farms, Michigan
Nah. Publishing is the instant reward for writing. I have 7 posts I never published. I’m still writing them. lol. Actually they are crap, and I should delete them. And that happens every time I take that approach. If I start writing, it must be finished today or early tomorrow, or never.
Don’t apologize. Thank you for the advice.
I didn’t even think anyone would read my blog.
So the outgoing radiation is halved, while the source output stays the same. Please explain how T2 remains largely unchanged. I follow the math, but to me it is counter intuitive.
What were you expecting? Willis result or no warming at all?
The net flow became ~118W, down from 235W. Less cooling … warmed up.
A net flow of 0W between surface and shell would mean the surface and shell are the same temperature as the nuclear reactor wall. This is impossible but presented for guidance.
Nothing in this system will exceed nuclear wall temperature.
I was expecting no warming. To me it is intuitive that no part of the system can achieve a higher temperature than the nuclear wall temp. It is also intuitive that the source output remains constant. So I guess I was expecting the cooling to remain the same, but after some consideration I agree that it must reduce. Radiation is inefficient.
Just to clear things up … I never doubted the REAL “greenhouse effect” (really: thermal resistance effect) of preventing cooling leading to a warming. I just never believed warming can exceed the SOURCE. I still don’t.
The greenhouse effect depends upon stability in the medium. The atmosphere is anything but stable. It’s highly mobile. Cooling ultimately depends on radiation to space but in the interim, it is very efficiently maintained via conduction and displacement, i.e. convection. The part of the globe that receives the most intense radiation, inside the tropics of Capricorn and Cancer, is temperature limited due to the efficiency of plant transpiration, evaporation from open bodies of water, the movement of ocean currents and, most of all, convection. Accordingly, the tropics radiate much less energy to space than is received directly from the sun due to the temperature decline as gas expands as it is elevated. It is here, at the tropopause, that the air is coldest, due to simple decompression and the impact of convection in displacing ozone upwards; the air there is as cold as over the poles in winter.
As the Hadley cell is energized by convection, the descent and compression of air in the mid latitudes is enhanced. Here, the air is dry, its greenhouse potential reduced, and radiation to space proceeds with relative freedom. Dry air is not conducive to the production of cloud. The temperature of the air in the mid latitudes is dependent, as it is everywhere else, on the partial pressure of ozone that is elevated by convection in extratropical (polar) cyclones to the upper limits of the atmosphere, from where it is transported to the mid latitudes, affecting cloud cover.
This is a complex system, incapable of being mathematically modelled. The most sophisticated minds think in terms of an ‘open system’ whereby the atmosphere is susceptible to change according to change in the wider solar and intergalactic influences. Sophistication is gained via a study of the geography and history of the atmosphere.
Greenhouse effect. Propaganda. A simple idea, easily assimilated, that is maintained in order to secure an economic, social and political objective. A scam.
The greenhouse effect is what it is. The claim that CO2 is a control button for the climate is something different. Most scientists do not dare to challenge this view, in fear of destroying their own reputation. Roy Spencer is known for having said that since the climate system is so complicated, we do not know whether humans have caused 10% or 90% of the observed warming. Of late though, he feels obliged, for some reason, to state that: “I still provisionally side with the view that warming has been mostly human-caused”. He can’t back it up, but it is safe to preserve one’s reputation since there is doubt. And this is how it goes, decade after decade. I hope this is the last decade. The truth must come, be it in favor of the alarmist or the sceptic side.
CO2 causes between 0% and 100%
of the global warming, with both
0% and 100% unlikely, in my opinion.
I gave Roy Spencer a hard time on his blog
about guessing CO2 had a major effect
when the correct answer is no one knows.
I liked his 10% to 90% range.
But then I felt bad when I later found out UAH
had been defunded and he and Christy
were providing UAH data on a voluntary basis
for the past few years. Hard to criticize a
scientist who does that.
Firstly, Willis Eschenbach’s thought experiment does NOT reflect how a real greenhouse works and neither does it reflect how the man-made (fictional) climate change greenhouse effect works – as these both require an atmosphere, whereas the thought experiment requires a vacuum within the infinitely small gap between the inner sphere and the outer shell. However, the thought experiment DOES correctly demonstrate how a colder object (the outer steel shell) can make a hotter object (the inner sphere) even hotter. I admit to once being a Joe Postma supporter in believing that a colder object could never make a hotter object even warmer – but I was wrong.
The thing that everyone who refutes Willis Eschenbach’s thought experiment overlooks is the TIME required to establish the new steady-state temperature of the outer shell, once the outer steel shell is introduced. From the moment that the outer shell is introduced until the outer shell reaches its steady-state temperature, not all of the energy that is constantly being generated by the inner sphere is actually being radiated away from the outer surface of the outer shell. That energy, which is being generated by the inner sphere but which is not being radiated away from the outer surface of the outer shell, does NOT disappear. Rather, it manifests itself as kinetic energy (and thus temperature increases) in BOTH the inner sphere and the outer shell. No ‘additional’ energy is required to be created by the power-source within the inner sphere to raise the temperatures of the inner sphere and outer shell. Rather, it is only the regular energy that the power source constantly generates (but which has not been immediately radiated away from the outer surface of the outer shell) which is responsible for the temperature increases in BOTH the inner sphere and the outer shell.
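A minimal time-stepping sketch (not from the original comment) of the constant-flux sphere-plus-shell relaxation described above; the heat capacities, starting shell temperature and time step are arbitrary placeholders:

```python
# Minimal sketch: a sphere heated internally at a constant 235 W/m^2, wrapped by
# a thin shell radiating from both faces. Heat capacities are arbitrary placeholders.
SIGMA = 5.670374419e-8       # W/m^2/K^4
P_INTERNAL = 235.0           # W/m^2
C_SPHERE = C_SHELL = 1.0e6   # J/m^2/K, placeholder heat capacities
DT = 1000.0                  # s, time step

T_sphere = (P_INTERNAL / SIGMA) ** 0.25   # start at the no-shell equilibrium (~253.7 K)
T_shell = 3.0                             # shell starts cold

for _ in range(200_000):
    # sphere: internal heating plus shell back-radiation in, sigma*T^4 out
    dT_sphere = (P_INTERNAL + SIGMA * T_shell**4 - SIGMA * T_sphere**4) / C_SPHERE
    # shell: absorbs the sphere's emission, radiates from both of its faces
    dT_shell = (SIGMA * T_sphere**4 - 2 * SIGMA * T_shell**4) / C_SHELL
    T_sphere += dT_sphere * DT
    T_shell += dT_shell * DT

print(round(T_sphere, 1), round(T_shell, 1))   # ~301.7 K and ~253.7 K
```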
To help further, let me give you an analogy, using the transfer of MONEY instead of ENERGY. I know that many dislike analogies, but this one is good, so please bear with me and let the (necessarily long) story unfold….
A person has been raised to believe that they should give away 10% of their wealth (which is represented by the funds held in their checking account) to an external charity, each month.
On the first day of the first month, when this person starts their first paid job (say at a monthly rate of $1000), they have zero dollars in their checking account, so they give away nothing to charity. At the end of “month one”, they receive their $1000 salary paid into their checking account.
At the commencement of “month two”, they now have $1000 in their checking account and so give away a $100 check to charity. At the end of “month two”, they again receive their $1000 salary paid into their checking account.
At the commencement of “month three” they now have $1900 in their checking account and so give away a $190 check to charity. At the end of “month three”, they again receive their $1000 salary paid into their checking account.
At the commencement of “month four” they now have $2710 in their checking account and so give away a $271 check to charity. At the end of “month four”, they again receive their $1000 salary paid into their checking account.
You will see that eventually their checking account will reach $10,000 and they will give away $1,000 each month to charity, and they will earn $1,000 for the next month – the state of financial equilibrium has been reached. This is like the equilibrium state of ‘inner sphere without the outer shell in situ’ in the thought experiment.
Many years pass and this hardworking and generous person rears a child and this child is also raised with a very charitable nature. However, unlike their hardworking parent, this child (now entering adulthood) does not become an independent worker in their own right but instead, remains wholly dependent upon their generous parent for their entire income. The parent, believing that “charity begins at home”, continues to give 10% of their wealth away but now, all of their charitable giving goes to their dependent child, rather than the external charity.
On the first day of the first month (donation day), when the dependent child leaves home, the dependent child has no wealth in their own checking account and so gives no charitable donation to any external recipient and similarly gives no charitable donation to their hard-working parent either. However, the parent has $10,000 in their checking account, so the parent now gives away 10% of their wealth (as a $1,000 check) posted to their dependent child – and, on arrival, this check is paid into the dependent child’s own checking account, so the dependent child’s checking account jumps to $1,000, whilst the parent’s checking account drops to $9,000. However, by the end of the first month, the parent is paid their regular $1,000 salary so the parent’s own wealth is again restored to $10,000.
At the commencement of “month two” (donation day), the parent posts another $1,000 check to the dependent child. For the first time, the dependent child similarly posts a check for 10% of its own wealth (the $1,000 gratefully received last month) to an external charity and also posts a check for 10% of its own wealth back to their generous parent (the child, like the shell in the thought experiment, has two directions for giving). So, the child posts a check for $100 to an external charity and posts a $100 check back to their parent (leaving the child $800). The two checks sent between the parent and the dependent child always cross in the post. On their arrival, the dependent child will now have $1,800 whilst the parent will now have $9,100 but, by the end of “month two” the parent also receives their regular $1,000 paycheck, so the parent’s own wealth is restored, but this time to $10,100.
At the commencement of “month three” (donation day), the parent posts a $1,010 check to the dependent child. Similarly, the dependent child posts a check for 10% of its own wealth (now $1,800) to an external charity and also posts a check for 10% of its own wealth back to their generous parent. So, the child posts a check for $180 to an external charity and posts a $180 check back to their parent (leaving the child $1,440). The two checks sent between the parent and the dependent child again cross in the post. On their arrival, the dependent child will now have $2,450 whilst the parent will now have $9,270 but, by the end of “month three” the parent also receives their regular $1,000 paycheck, so the parent’s own wealth is restored, but this time to $10,270.
At the commencement of “month four” (donation day), the parent posts a $1,027 check to the dependent child. Similarly, the dependent child posts a check for 10% of its own wealth (now $2,450) to an external charity and also posts a check for 10% of its own wealth back to their generous parent. So, the child posts a check for $245 to an external charity and posts a $245 check back to their parent (leaving the child $1,960). The two checks sent between the parent and the dependent child again cross in the post. On their arrival, the dependent child will now have $2,987 whilst the parent will now have $9,488 but, by the end of “month four” the parent also receives their regular $1,000 paycheck, so the parent’s own wealth is restored, but this time to $10,488.
You will find that, eventually, the parent’s checking account will grow to reach $20,000 and that the parent (whilst still earning a salary of only $1,000 per month) will be required to give away $2,000 each month to the dependent child. The dependent child’s checking account will grow to reach $10,000 and the dependent child will give a $1,000 check to their ‘external’ charity and will give a $1,000 check to their parent – the new state of financial equilibrium has been reached. This is like the steady state of ‘inner sphere and the outer shell in situ’ in the thought experiment.
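A minimal sketch (not from the original comment) that simply iterates the two checking accounts described above and shows them settling at $20,000 and $10,000:

```python
# Minimal sketch: iterate the parent and child checking accounts described above.
parent, child = 10_000.0, 0.0   # balances at the first "donation day"
SALARY = 1_000.0                # parent's end-of-month pay

for month in range(1, 401):
    gift_to_child = 0.10 * parent    # parent's donation, posted to the child
    child_to_charity = 0.10 * child  # child's donation to an external charity
    child_to_parent = 0.10 * child   # child's donation back to the parent
    parent += -gift_to_child + child_to_parent + SALARY
    child += gift_to_child - child_to_charity - child_to_parent
    if month in (2, 3, 4, 400):
        print(month, round(parent), round(child))

# months 2-4 reproduce the $10,100/$1,800, $10,270/$2,450 and $10,488/$2,987 figures
# above, and by month 400 the balances have settled at ~$20,000 and ~$10,000.
```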
All the ‘additional’ money in the system is accounted for (it only ever came from the gainful employment of the working parent) and yet, by the introduction of a wholly dependent child, the parent has, over time, become wealthier – twice as wealthy in fact – whilst still only earning the same $1,000 salary each month. Neither the parent nor their dependent child has fraudulently created any fake money. The wealth held in the two checking accounts is entirely genuine – but the (wholly dependent) yet generous child’s “back-giving” has allowed the parent’s own wealth to increase.
A dollar, when given by a poorer person to a richer person, must inevitably make the richer person one dollar richer. However, the richer person will always ‘outgive’ to the poorer person – the net flow of dollars is always from the wealthier parent to the poorer child.
The same is true with the energy conveyed by the photons that are emitted from a colder object toward a warmer object – the radiant energy from these photons will be thermalized by the warmer object (like the dollar from a poor person, each Joule has to be accounted for). However, the warmer object will always ‘outgive’ the colder object. The net flow of energy is always from the hotter surface to the colder surface.
The colder object does not prevent the hotter object from emitting all of its radiant exitance at the level prescribed by the S-B law. Similarly, the hotter object does not prevent the colder object from emitting all of its radiant exitance (on both its surfaces) at the level prescribed by the S-B law.
Finally, the steel greenhouse thought-experiment is based upon a shell having a radius that is only just larger than the radius of the sphere – that requirement in the thought experiment is necessary to ensure that all of the back-radiation from the inside surface of the shell falls upon the surface of the sphere – none is ever allowed to reach another area upon the inside surface of the shell. By this stipulation, the energy held in the sphere is now double that which would have been held if the nearby shell was absent and hence the temperature is now greater by a factor of the fourth root of two, i.e. 1.189 times greater than if the nearby shell was absent. If the radius of the shell is not just slightly larger than that of the sphere but significantly larger, then the effect of back-radiation is significantly diminished. A larger radius for the shell can again be explained in the “charitable parent and dependent child” analogy: if the charitable parent had three dependent children (instead of one), then the parent would continue to give 10% of their wealth away each month, which would be shared equally between the three dependent children. Each of the three dependent children would again give 10% of their wealth to external charities and would also give 10% of their wealth as generous giving back to their internal family, but this time the parent would only receive one third of what it would have got back with one child, because each of the three children distributes that internal giving as one third to the parent, one third to sibling #1 and one third to sibling #2 (and as the two other siblings do exactly the same thing, the inter-sibling transfers count as zero net effect). In summary, the parent does get more wealth because of the existence of three dependent children, but not as much as would be the case with just one dependent child, i.e. as the radius of the shell increases in proportion to the radius of the sphere, the temperature-increasing (wealth) effect of the back-radiation from the inside surface of the shell upon the external surface of the sphere becomes less.
As I said earlier, ‘back radiation’ between the sphere and the shell is only significant in a vacuum. As soon as a gas is present in the gap between the sphere and the shell, the standing temperature difference between the sphere and the shell is reduced (the gas molecules act like a ‘resistive’ short-circuit). As the density of gas molecules is increased, the ‘resistive’ short circuit will tend to become a true ‘short circuit’ where the sphere temperature and the shell temperature are exactly the same.
One caveat: There will never be a “same temperature” under any circumstance following a radial line, precisely because of the radius difference.
Regarding the caveat: Yes, if we are splitting hairs, then you are correct: the radius of the inner sphere (and hence surface area) must always be smaller than that of the outer shell and so the W/m^2 radiant emittance (and hence S-B temperature) must always be greater on the surface of the inner sphere. However, in my defence, in the opening paragraph I did state that “…the thought experiment requires a vacuum within the infinitely small gap between the inner sphere and the outer shell”. That ‘infinitely small gap’ allows for the radius of the inner sphere to approximate to the radius of the outer shell. That is important for the mathematics used in the thought experiment.
To assist you to imagine the thought experiment more faithfully, try imagining the inner sphere to comprise a steel ball bearing with a 2mm diameter which is tightly surrounded by 999 steel shells each 1mm thick acting as laminations. You have now created a steel sphere of radius 1000mm. Now, imagine that you have miraculously ‘vanished’ the penultimate lamination from within the sphere and you have created an inner sphere with a radius of 998mm and a vacuum gap of 1mm and an outer steel shell which has an inner radius of 999mm and an outer radius of 1000mm. Once you’ve then miraculously inserted the radioactive power source into the centre of the inner sphere, then you have constructed (in your head of course) a reasonable working model for the thought experiment to operate, with both the inner sphere and the outer shell each having a radius of approximately 1 metre.
If the vacuum gap was instead allowed to be filled with gaseous molecules then the inner sphere would no longer sit at its S-B temperature (being 1.189 times greater than the Kelvin temperature of the outer shell), as radiation would no longer be the only energy transport mechanism. Instead, convection would allow more energy to move across the air gap to the outer shell, thus causing the surface temperature of the inner sphere to be reduced closer to (although not exactly the same as) that of the outer shell. And the more gas molecules placed in that air-gap, the more effective the convection would be. That was the point that I was trying to make. To help further: if that penultimate lamination were to be re-inserted, i.e. miraculously ‘unvanished’, then the energy transport mechanism would comprise only conduction, such that the temperature of the laminations which previously comprised the inner sphere would be identical to the surface temperature of the outer surface (assuming steel has perfect conductance).
As I have said, several times, the mathematics used by Willis does require the radius of the inner sphere to be the ‘same’ as the radius of the outer shell. If the radius of the inner sphere was allowed to be 1 and the radius of the outer shell was allowed to be 2 then simple trigonometry can be used to show that only 1/3 of the ‘back-radiation’ from the inner-surface of the outer shell will strike the surface of the inner sphere such that the S-B temperature of the surface of the inner sphere will only be the fourth root of 1.3333 times greater than that of the Kelvin temperature of the outer shell i.e. the ability of the outer shell to affect the surface temperature of the inner sphere is substantially reduced.
Glad to see you’re back posting.
Zoe, I have found a mistake in your calculations. In the third line of the section with the steel shell, you plug in 254 K for T1. However, this value (254 K) was calculated from the case without the steel shell, so it no longer applies when you add the steel shell. You have to treat T1 as a variable.
As a result of using the old value for T1, your solution with the steel shell no longer obeys energy conservation. The outer shell emits less energy to space than the nuclear reactor is producing. So energy is piling up in the system, and it will get hotter.
Wrong. T1 is not and can not be a variable, for the reason I explained. Nuclear reactions are FIXED! You will not get more energy from the fission of fuel based on anything else down the line.
I’m not an expert on nuclear physics, but that is absolutely true.
As a thought experiment, take a regular 9V battery, and create a heat source by directly connecting the terminals. The max power you get is P=IV= 0.5 A × 9V = 4.5 W.
Let’s assume the battery has infinite “fuel”, voltage never drops, and you have perfect conversion to heat. You can couple the “short” to a metal shell with an inner surface area of 1m^2.
So, you will have the same experiment, but the inner metal shell gets 4.5W/m^2. If we imagine the shell to have zero thickness, it will also emit 4.5W/m^2 to space.
But let’s add some thickness. Now we got 3 W/m^2 emerging to space.
This is where the problem starts. Will adding another shell cause more than 4.5 W/m^2 to be available at the inner wall? Will the battery crank out more power? No, of course not.
The max temperature of the inner wall is limited to ~94K. It is max, it is fixed. Nothing else you put down the line will cause it to rise.
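A quick check of that ceiling (a sketch added for illustration, assuming emissivity 1 and 1 m^2 of emitting area):

```python
# Minimal sketch: invert the Stefan-Boltzmann law for the shorted 9V battery.
SIGMA = 5.670374419e-8   # W/m^2/K^4
flux = 9.0 * 0.5         # W/m^2: P = I*V spread over 1 m^2
print(round((flux / SIGMA) ** 0.25, 1))   # ~94.4 K, the "~94K" ceiling quoted above
```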
While the energy density of the fuel is fixed, the rate of reaction, and therefore the rate of energy released, and therefore the equilibrium temperature, are definitely not fixed, and depend on factors such as density of the fuel, neutron flux, and even the speed of the neutrons. If the rate of reaction were fixed, then we wouldn’t be able to control the output of reactors, or make nuclear weapons, nor would we have to worry about meltdowns.
At equilibrium, the total rate of energy emitted by the outer shell must equal the total rate of energy being absorbed by the inner shell, and this rate of energy must match the power output of the reactor, while the rate of energy transport across a medium is proportional to the difference in temperature on either side and inversely proportional to the thickness.
Putting it all together, the power output is given by the reactor. The temperature of the outside part of the shell is then determined by the power output and the radius using the Stefan-Boltzmann Law. And the temperature of the inside part of the shell is given by the power output, the outer shell temperature, and Fourier’s Law.
If the temperature of the reactor were fixed, then you have a violation of either the first or second law of thermodynamics, since either you have energy disappearing or the reactor heating the inner part of the shell higher than its own temperature, depending on whether you set the inner part no higher than the reactor, or have the same total emission from the outer shell as the power output of the reactor.
I gave a link. The link shows diagrams with a maintained room temperature. The temperature is not adjusted to that of what’s outside. The fuel rate is adjusted to maintain the enclosed (inside building) temperature.
So whether it’s hot or cold outside, the reactor’s surface is set to room temperature. I just use that same assumption. What’s inside the steel housing is the room, and its temperature is maintained.
What would happen if the flux, rather than temperature, was maintained constant and the sink (people’s usage of generated electricity) was abruptly halted? Oopsie
Which means that the power output of the reactor is adjusted to keep the temperature the same, and so doesn’t address the point that with a given power output, T1 will be different with the shell versus without.
A reactor is set to keep T1 stable. It is DANGEROUS to keep the flux stable.
I think you misunderstand stability. The logic circuit is not set to make stable whatever new temperature may arise by keeping the flux the same. It is set to a specific temperature. If the room temperature in a real reactor goes to 40C for some unknown reason, that is also dangerous, and the reactor will reduce fuel further so that the wall sensor is back at 20C, or whatever is the standard.
I don’t know why that’s hard to understand. My home central heating/cooling system is set to a fixed temperature: 71F. When it gets too hot, the AC goes on. Too cold, the heat goes on. My system is not set to produce a fixed flux, and neither is a normal nuclear reactor.
It’s not that it’s hard to understand, it’s that you’re not comparing apples to apples anymore when you vary the flux.
The entire point of the thought experiment is that with a given flux, T1 will change. If the flux is no longer given, then it’s a completely different thought experiment.
No, the entire point of the thought experiment is that T2 will change. T1 doesn’t exist in Willis’ set up.
No existing nuclear reactor ever built creates a constant flux.
Willis sets the flux at a constant amount, in his case 235 W/m^2, and lets the temperature change as determined by the Stefan-Boltzmann law, Fourier’s law, and the conservation of energy.
You vary the flux to force the temperature the same, so it’s no longer addressing the same points.
Right. His nuclear reactor is a unicorn. Mine is a horse with an attached horn.
The central claim of the greenhouse gas theory is that outgoing flux is reduced. Is it not? Reducing emissivity should lead to a warming. But Willis keeps the emission the same. So who is misleading?
Close; the core principle of the greenhouse effect is that the ratio of surface flux to escaping flux is reduced via absorption and emission of IR by greenhouse gasses, e.g. only a certain percentage actually passes through. It’s the Beer-Lambert law applied to the atmosphere.
At equilibrium, the first law of thermodynamics dictates that the escaping flux is equal to the incoming flux. So the outgoing flux is determined entirely by the incoming flux.
If you compare two systems with the same flux, one without the reduction and one with the reduction (either via Fourier’s law or the greenhouse effect), the latter will have a higher surface temperature than the former, just from those two above principles.
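A minimal numerical illustration of that last sentence (not from the original comment; lambda_ here stands for the hypothetical ratio of surface flux to escaping flux, and the 235 W/m^2 figure is reused from the post):

```python
# Minimal sketch: same absorbed flux, with and without a reduction in the escaping
# fraction. lambda_ is the (hypothetical) ratio of surface flux to escaping flux.
SIGMA = 5.670374419e-8   # W/m^2/K^4
ABSORBED = 235.0         # W/m^2

def surface_temperature(lambda_):
    # at equilibrium the escaping flux equals the absorbed flux,
    # and the surface flux is lambda_ times the escaping flux
    return (lambda_ * ABSORBED / SIGMA) ** 0.25

print(round(surface_temperature(1.0), 1))   # ~253.7 K, nothing intercepted
print(round(surface_temperature(2.0), 1))   # ~301.7 K, half the surface flux escapes
```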
lol. but you kept the albedo fixed!
You have 235 initially (current GHG levels) because albedo is set to ~0.3 from the ~340 we get from the sun.
Let’s say we introduce so much GHGs that Earth now emits 117.5.
Since the albedo is determined to be just (shortwave in – longwave out) / shortwave in
All that will happen is that albedo will increase to ~0.655.
The solar received at the surface will be ~117.5.
There’s only a warmup because you adjust some parameters and not others.
I account correctly that new fluxes will be 117.5 and 117.5, but you want to keep 235 and 235 as if it’s some sacred fixed number.
Albedo is the ratio of reflected flux / incoming flux of visual (shortwave) radiation. Emissivity is the ratio of emitted thermal radiation to that of a perfect blackbody. What you described is the imbalance ratio, which is zero for equilibrium.
Thought you’d go there.
“Albedo is the ratio of reflected flux / incoming flux of visual (shortwave) radiation”
Yeah, and in order to satisfy the equilibrium at TOA, reflected shortwave = incoming shortwave minus outgoing longwave.
So if you reduce outgoing LW, you increase reflected shortwave … albedo goes up.
The amount of shortwave reaching the surface is reduced.
That’s not true at all. Reflected shortwave = incoming – absorbed shortwave flux. Absorbed shortwave flux is only equal to outgoing longwave flux at equilibrium. If outgoing longwave flux does not match absorbed shortwave flux, then it’s no longer in equilibrium and the temperature changes until it’s equal due to the first law of thermodynamics.
Again, you keep albedo fixed as if it’s a conserved quantity. It’s not.
Now you’re suggesting that GHGs do not in fact reduce outgoing radiation. And yet the textbook math requires it.
Look at my response to Jarle.
“is only equal to outgoing longwave flux at equilibrium.”
Yeah, and a new equilibrium will be established, and there is no fixation to the old.
We’re not even talking about the real albedo – what determines surface insolation. Only 48% of insolation reaches the surface, not 70%.
On Venus, albedo is 80%! and only 1-2% of TOA insolation reaches the surface.
Zoe wrote “So if you reduce outgoing LW, you increase reflected shortwave … albedo goes up.”
There is no physical mechanism for that to happen.
Really? Filling the atmosphere with more molecules that can absorb in shortwave such as H2O and CO2 will not affect albedo? No more clouds will be created if we add water vapor galore?
Funny how Venus has 80% albedo, and only 1-2% insolation reaches the surface …
CO2 has an inconsequential absorption of shortwave. Clouds are liquid water, not water vapor. Water vapor increases as a feedback to increased temperature. Increased water vapor’s major and immediate effect is absorption of longwave. Increased water vapor does allow for increased clouds, but different kinds of clouds have differing balances of effects on longwave retention and shortwave reflection. Any effects on shortwave albedo are distinctly secondary in time, causality, and influence.
And yet … while only 48% of the sun reaches Earth’s surface, only 1-2% reaches Venus’ surface.
CO2’s absorption bands and transmissivity are also set by temperature and pressure. Venus has an incredible new set of absorption bands that are not seen on Earth, including in the shortwave portion.
It is completely possible to reduce the longwave emission without touching albedo, e.g. with CO2 alone. Water vapor concentration is entirely dependent on the Clausius–Clapeyron relation.
Right now at TOA:
340 SW IN = 105 SW OUT + 235 LW OUT
Now we reduce LW out to 230
340 > 105 + 230
Imbalance. Now what happens?
Don’t contradict yourself a 2nd time.
I already answered this a few comments ago:
“If outgoing longwave flux does not match absorbed shortwave flux, then it’s no longer in equilibrium and the temperature changes until it’s equal due to the first law of thermodynamics.”
Be more clear. We’re not talking about temperature changes at the bottom. We are talking about TOA fluxes. Are you saying that 230 LW returns to 235?
Zoe asked “Now we reduce LW out to 230. 340 > 105 + 230. Imbalance. Now what happens?”
Energy accumulates in the atmosphere–lower in the atmosphere first, because there is more LW absorption happening by the air above there. LW heading toward space is intercepted by greenhouse gases between it and space, then transferred by collision to surrounding molecules of all types, and some of it reradiated in all directions. The now-warmer greenhouse gases increase their radiation because they have more energy.
That’s more outgoing radiation trying to squeeze through the obstacle course of greenhouse gases between those radiating molecules and space. Failing to squeeze through, that extra radiation is absorbed by those intermediate molecules, thereby increasing the energy in those intermediate molecules.
The proportion of LW radiated toward space, that actually makes it to space without being intercepted, switches location from being radiated by greenhouse gases lower in the atmosphere, toward those higher in the atmosphere. But those gases higher in the atmosphere radiate less than the gases lower in the atmosphere, because the former are colder. That lesser radiation makes those higher gases retain more absorbed energy, thereby warming.
Eventually the greenhouse gases in the lower layers radiate enough, so even with their difficulty of making it all the way to space, enough LW does make it to space for their outflow to balance the inflow from the Sun.
Skip the drama. Why can’t you just give a simple answer?
Let’s try again:
340 > 105 + 230
Imbalance. Now what happens?
Are you saying things at TOA will return to:
340 = 105 + 235
YES or NO
The first law of thermodynamics dictates that the temperature will increase, thereby increasing the emitted surface flux and so the TOA flux, until the TOA flux matches the incoming absorbed flux.
OMG. Does LW return back to 235 from 230? YES or NO?
Yes. The surface temperature increases as required by the first law of thermodynamics, du = dq – dw. Since the surface emitted flux is proportional to T^4, it will also increase, and since the TOA flux is proportional to the surface flux, it will also increase, until flux-in = flux-out.
OK, so 230 will return to 235, you claim
Physics works in real time. We don’t need to wait for a change in 5 W/m^2. Same principle should work for a change in 0.0000000000000000001 W/m^2.
So, how can you say it’s possible to lower OLR when it will instantaneously return to its previous state?
Nobody said anything about instantaneous. That first law of thermodynamics is a differential equation, du = dq – dw where du is the differential of internal energy, dq the differential of heat, and dw the differential of work.
Since no work is being done on the atmosphere, dw = 0, so du = dq.
For most materials, like the air, u = CT, so du = CdT, where C is the specific heat capacity. And since dq is the total heat flow, dq = (P_absorbed – (1/λ) εσT^4) dt, where εσT^4 is the Stefan-Boltzmann law for a greybody, and λ is the ratio of surface emission to TOA flux (i.e. the strength of the greenhouse effect).
This means that the rate of change of the temperature, dT/dt = (P_absorbed – (1/λ) εσT^4) / C.
Equilibrium occurs when dT = 0, which happens when P_absorbed = (1/λ) εσT^4, rearranging gives T = (λ*P_absorbed/εσ)^(1/4) as the equilibrium temperature.
With a gradual change in λ, there is a gradual change in T.
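A minimal sketch of that differential equation (added for illustration; the heat capacity and the step in lambda_ are placeholder values, not atmospheric ones):

```python
# Minimal sketch: integrate dT/dt = (P_absorbed - (1/lambda_)*eps*sigma*T^4) / C
# after a small step change in lambda_. C and the step size are placeholders.
SIGMA = 5.670374419e-8   # W/m^2/K^4
P_ABSORBED = 235.0       # W/m^2
EPS = 1.0                # emissivity
C = 1.0e7                # J/m^2/K, placeholder heat capacity
DT = 3600.0              # s, time step

T = (P_ABSORBED / (EPS * SIGMA)) ** 0.25    # old equilibrium, lambda_ = 1
lambda_ = 1.02                              # small increase in the greenhouse ratio

for _ in range(200_000):
    T += (P_ABSORBED - (1.0 / lambda_) * EPS * SIGMA * T**4) / C * DT

print(round(T, 2))                                                # relaxed temperature
print(round((lambda_ * P_ABSORBED / (EPS * SIGMA)) ** 0.25, 2))   # analytic equilibrium, same value
```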
Why do you have to be so obtuse? Can you stick to just one thing? We can discuss temperature later.
You said that it’s possible to reduce OLR, and at the same time claim it will be restored. So any minute change is quickly offset.
Bringing in heat capacity doesn’t change this either because it’s active in both OLR reduction and subsequent return. So you can just drop it from your argument.
I’m giving you the physics and explaining the mechanism behind the outgoing flux eventually matching the incoming flux, and why it’s not instantaneous.
It takes time for the flux to match, as described in the math above, so a faster change in λ leads to a greater imbalance between incoming and outgoing flux. If you change λ fast enough, you increase the imbalance faster than it can equalize.
“If you change λ fast enough, you increase the imbalance faster than it can equalize.”
Very much false. It takes same time to unbalance as rebalance.
Why? You explained yourself: The IR-active gases are surrounded by non-IR active gases. While IR active gases can drop temperature (and emission) faster, they are still surrounded by gases that can’t. Those non-IR gases will cool slower. They can only effectively cool by bumping into IR gases. And guess what? They will transfer their energy to the IR gas to emit, thereby slowing down the IR gas cooling.
It’s quite amusing that you only consider heat capacity one way but not the other way.
Heat capacity works both ways exactly the same. Which is why I urged you to drop this.
That doesn’t follow, since λ is determined entirely by Schwarzschild’s equation.
Uhm, what does Schwarzschild have to do with this?
Emission equations can only deal with IR active substances.
You want to warm nitrogen and oxygen by conduction on the rebalancing part, but can’t comprehend nitrogen and oxygen warming H2O/CO2 by conduction when they cool too fast on the imbalancing side.
Schwarzschild’s equation gives you the transmittance of a reactive medium. dI(v) = nσ(v) (B(T,v) – I(v)) ds, where I(v) is the intensity of incident radiation at frequency v, n is the density of the reactive medium, σ(v) is the cross-section at frequency v, B(T,v) is the Planck emission at temperature T and frequency v, and s is the path.
We are using bulk parameters. Why did you change topics to transmissions at wavelengths?
Schwarzschild’s equation is given in terms of spectral radiance. You can simply integrate over the relevant range if need be to get the transmittance in a band, including the entire spectrum.
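For what it's worth, a minimal single-frequency sketch of that equation (added for illustration; the gas density, cross-section, temperatures and path length are hypothetical placeholders, not values from any database):

```python
# Minimal sketch: integrate dI = n * sigma_v * (B(T, v) - I) * ds along a homogeneous
# path at one frequency. All numbers are hypothetical placeholders.
import math

def planck(T, v):
    """Planck spectral radiance B(T, v) in W / (m^2 sr Hz)."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return (2.0 * h * v**3 / c**2) / (math.exp(h * v / (k * T)) - 1.0)

def schwarzschild(I0, T, v, n, sigma_v, path_length, steps=10_000):
    ds = path_length / steps
    I = I0
    for _ in range(steps):
        I += n * sigma_v * (planck(T, v) - I) * ds
    return I

v = 2.0e13                 # Hz, roughly the 15-micron region
I_in = planck(288.0, v)    # radiance entering from a 288 K surface
I_out = schwarzschild(I_in, T=250.0, v=v, n=1e25, sigma_v=1e-26, path_length=1e3)
print(I_in, I_out)         # the beam relaxes from the 288 K value toward B(250 K, v)
```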
Are you under 23 years old?
It seems you are making an energy balance argument, but instead of calculating the difference between energy in and out, you are just stating that T1 can’t change because of energy balance.
But let’s do the math. In your first calculation, you correctly set that the energy produced by the nuclear reactor must equal the energy leaving from r1 by conduction: q = 4 pi k r1 r2 (T1-T2)/(r2-r1) = 235 W.
In the second calculation, you no longer set this condition. Instead, you arrive at T2=253.884 K, and you set T1=254.041 K . Now if we calculate conduction, we get q=4 pi k r1 r2 (T1-T2)/(r2-r1) = 117.5 W. This is exactly half as much as before. But the nuclear reactor is still producing 235 W.
So the reactor is still producing 235 W, but only 117.5 W is escaping from radius r1 by conduction. Energy balance is not satisfied.
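A minimal sketch (added for illustration) of the commenter's arithmetic: with T1 held at the quoted 254.041 K, the geometric factor in Fourier's law cancels from the ratio, so the conducted power scales with the temperature difference alone:

```python
# Minimal sketch of the imbalance described above. The factor 4*pi*k*r1*r2/(r2-r1)
# cancels out of the ratio, so only the temperature differences matter.
T1 = 254.041            # K, reactor wall (held fixed in the post)
T2_no_shell = 253.726   # K, outer surface without the shell
T2_shell = 253.884      # K, outer surface with the shell

ratio = (T1 - T2_shell) / (T1 - T2_no_shell)
print(round(ratio, 3))           # ~0.498
print(round(235.0 * ratio, 1))   # ~117 W conducted away, while 235 W is still produced
```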
“1.811 x 10^10 kJ/mol of U-235”
See that? The fuel is supplied at a constant rate, and you get fixed joules per second. Fixed watts over fixed area = Fixed W/m^2.
But you want to violate physics so as to get more W/m^2 from the nuclear reactor. It just doesn’t work that way.
Zoe, just to make sure we’re on the same page: which of these points do you agree with?
1) the reactor produces a constant output power, regardless of the environment, as you have just said. This is 235 W
2) in your second calculation, only 117.5 W leave the reactor through conduction
3) the reactor is producing more energy than it is getting rid of, so it must be getting hotter
As I said, the temperature goes up by 0.158K.
The reactor is not producing more energy than it is getting rid of; it is producing the same energy and getting rid of slightly less of it.
You have a different solution? What is it?
Oh I see what you’re saying. It’s simple: conduction is a loss. Your nuclear reactor is split between powering the local matter and emitting to space.
You didn’t treat conduction as a loss in the first calculation (no shell), so why do so in the second?
In my solution to the second calculation, I have almost the same equations as you, but I do exactly what you did in the first calculation and set the energy produced by the reactor equal to the conduction away. Then your first line looks like
235 W=4 pi k r1 r2 (T1-T2)/(r2-r1) = 4 pi r2^2 sigma (T2^4-T3^4) = 4 pi r2^2 sigma T3^4
This leads to (using your numbers so 4 pi r2^2=1 m^2)
T3 = (235 W/m^2/sigma)^.25
T2 = 2^.25 T3
T1 = T2 + (235 W) (r2-r1) / (4 pi k r1 r2)
From here just plug numbers into the first equation, to get T3 = 253.73 K
Plug this into the second equation to get T2 = 301.73 K
Plug this into the first equation to get T1 = 302.05 K
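A minimal sketch (added for illustration) of the first two steps above; the T1 line is omitted because it needs k, r1 and r2, which are not restated here:

```python
# Minimal sketch of the first two lines of the solution, with 4*pi*r2^2 taken as 1 m^2.
SIGMA = 5.670374419e-8         # W/m^2/K^4
T3 = (235.0 / SIGMA) ** 0.25   # shell temperature
T2 = 2 ** 0.25 * T3            # surface temperature
print(round(T3, 2), round(T2, 2))   # ~253.73 K and ~301.73 K
```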
If you’re that clever why did you fall for my trick?
I was describing a real world nuclear reactor, but you believed my false description of a fake reactor.
“Almost all of the current reactors built to date use thermal neutrons to sustain the chain reaction. These reactors contain neutron moderator that slows neutrons from fission until their kinetic energy is more or less in thermal equilibrium with the atoms (E < 1 eV) in the system."
In real world reactors, the flow is actually cut off to obtain the equilibrium temperature of the environment.
The first equations tell you what the operators would set that equilibrium temperature to.
The other type of reactor is a fast reactor. In this scenario the flow increases.
So you see, the trick is that Willis used a power source that doesn't actually produce a constant flux 🙂
Then I distracted you with the battery example. But that would actually work as you said. But you didn't catch on.
I like that you discovered that I'm a tricky gal. There are a few other little physics/description tricks on this site.
I never however manipulate data. That is 100% as is. If I made a calculation mistake, then it's a mistake. I provide the code to catch it. I don't play around in this area.
Nice trick. My calculation used a constant power source, because that was the original problem statement from Willis, and more importantly, that is a more analogous situation to the earth. The amount of sunlight incident on the earth does not change based on the earth’s temperature.
But if you are assuming the power source adjusts its power to maintain a constant temperature, regardless of the environment, of course the result is that you will get a constant temperature. In that case your calculation may be more appropriate.
Yes, correct. Willis stated a constant power source, but the power source he used is not physically constant.
T1 is constant, but T2 changes, with a normal nuclear reactor.
I’m not discarding the environment. The environment IS the inner wall of the steel housing. The designers made the device work as the initial equations show.
I’m glad you took the bait. I like discussions. Thank you very much!
Always good to have a thoughtful discussion, thanks.
I think the next natural question is: tricks aside, let’s say we actually have a constant power source, for example a heating resistor with controlled P=I*V. Would you then agree with my solution?
Yes, it’s correct. Although I don’t fully understand the electrical ramifications of having two metals separated by an infinitely tiny gap. Probably a complication there. But otherwise, yes.
I might be confused, but isn’t Q measured in Watts, and the flux of 235 is W/m^2? So for calculating T1, shouldn’t you be setting the left hand side of the equation equal to 235 x the surface area of the outer sphere? And would this not significantly increase T1?
The surface area is 1.
Actually, if you look closely at this supposedly scientifically accurate animation, the temperature at the point of emission stays the same, which implies unchanging emissivity??
It doesn’t make sense to me. I’d have to see the math.
The basic one layer model warm-up formula is F_surface = S(1 - a) / (1 - o/2),
where S is ~340, a = 0.3, and o is opacity (1 minus emissivity) = 0.78.
So the sun’s 340 W/m^2 becomes ~390 at the surface.
If opacity is increased from 0.78 to 0.81 (emissivity decreased by 0.03)
The result becomes 400 W/m^2 at the surface.
Well, that’s the basic textbook theory anyway.
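A minimal sketch of that arithmetic (added for illustration), assuming the formula above:

```python
# Minimal sketch of the one-layer numbers quoted above, assuming
# F_surface = S*(1 - a) / (1 - o/2).
S, a = 340.0, 0.3

def surface_flux(o):
    return S * (1.0 - a) / (1.0 - o / 2.0)

print(round(surface_flux(0.78)))   # ~390 W/m^2
print(round(surface_flux(0.81)))   # ~400 W/m^2
```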
All you’ve done is succeeded in showing that you did not understand the problem set up by Willis. The internal source is intended to supply a constant energy flux. Willis’ solution to the problem is correct.
But his power source is a unicorn – doesn’t exist. I use a real nuclear reactor.
That’s not *his* power source. That’s YOUR straw man. Did you even read what he wrote?
“For our thought experiment, imagine a planet the size of the Earth, a perfect blackbody, heated from the interior at 235 watts per square metre of surface area. How is it heated from the interior? Doesn’t matter, we’ll say “radioactive elements”, that sounds scientific.”
He tells you to heat the interior in any manner that you like so long as it supplies a constant 235 watts per square metre of surface area.
You didn’t do that.
Exactly. His power supply is a unicorn. I use a real power supply.
He needs the output of a brown dwarf. But he doesn’t use a brown dwarf:
“For our thought experiment, imagine a planet the size of the Earth”
He needs something at least 10x the size of Jupiter.
It’s a thought experiment. You did not satisfy the constraints of the thought experiment which was to use a power supply that provides a constant 235 W/m^2. You did not solve the stated problem correctly. Now you are stomping your feet like a child because you got called on it.
Got called on what? I specifically said that the temperature is fixed. I put the world “real” in the title. If you don’t like what I did, that’s your problem, not mine.
I don’t believe in unicorns or brown dwarfs the size of our planet.
The stated problem is unreal and physically impossible.
“I specifically said that the temperature is fixed.” That is NOT the problem that Willis described. Why is that so difficult for you to understand?
“I put the world “real” in the title.” So you think that there is a “real” nuclear reactor that can supply 235 W/m^2 to an Earth-sized sphere?
“I don’t believe in unicorns or brown dwarfs the size of our planet.” Sure you do. You believe that there is a nuclear reactor that can supply 235 W/m^2 to an Earth-sized sphere.
“The stated problem is unreal and physically impossible.” Why are you having so much trouble understanding what a thought experiment is?
Face it, your post is full of ignorance and nonsense.
Oh and by the way, conduction is not a “loss”. At steady state the net flux of energy out through any spherical surface containing the source must be 235 W/m^2.
You can admit that you don’t really understand physics.
If I don’t understand physics, how did I get the right answer for a real world nuclear reactor?
“You believe that there is a nuclear reactor that can supply 235 W/m^2 to an Earth-sized sphere.”
Don’t slander me. The sphere I use has a surface area of 1 square meter.
“Why are you having so much trouble understanding what a thought experiment is?”
I don’t. My post is a thought experiment. I didn’t actually build this device.
Willis’ setup is science fiction, mine is REAL.
Did you know that real satellites don’t use “insights” from Einstein’s thought experiments? Why? Because they don’t correspond with reality. They end up fixing his insights with fudge factors, or just ignoring it all together and using parameters from actual observations.
“conduction is not a “loss”. At steady state”
I don’t use steady flux. Conduction is a loss.
Zoe, a well known example of a thought experiment is Albert Einstein imagining himself chasing a beam of light. It is intentionally unrealistic, for the very purpose of being valid and useful. Here is an explanation: https://en.wikipedia.org/wiki/Thought_experiment
But Einstein’s insights were not useful. Fudge factors need to be added to line up with observations.
“But Einstein’s insights were not useful,” Zoe Phin claimed. Here endeth my attempt to find any value in Zoe Phin’s opinions.
Because he’s a sacred cow to you? Well, Robert J Oppenheimer thought he was a useless imbecile, and he built a nuclear bomb. What did Einstein build? Thought experiments? Tell me who uses them in REAL things. I don’t mean those that are now hunting for dark matter and the like using his presumptions.
“Einstein was a physicist, a natural philosopher, the greatest of our time.”
– Robert Oppenheimer, “On Albert Einstein”, December 13, 1965
Oppenheimer only started to like and be friends with Einstein in the last decade of his life, mainly because they were both outspoken about the proliferation of nuclear weapons. That’s a good thing.
But by praising Einstein he gets to partially blame him too.
“Yeah uhum sure … if it wasn’t for Einstein … there would be no bomb. It was his special insights that made it possible”.
That’s a paraphrasing of what he said.
Even though Einstein didn’t think nuclear weapons were possible, and he was of ZERO use to the Manhattan project at the time.
Learn to read between the lines!
Although, Einstein wasn’t alone in doubting nuclear energy. Tesla also failed to understand it.
For Oppenheimer’s praise to be worth something, he would have to establish what great technology or LASTING insight came from Einstein that couldn’t be achieved otherwise. Just buttering up his legacy with Hallmark card rhetoric won’t do.
We know there’s very little insight that will remain standing, and a complete collapse of his paradigms is imminent. Though the mythology may still persist for now.
But to his credit, he did incite a lot of good debates, and being wrong is still scientifically useful.
“His part was that of creating an intellectual revolution, and discovering more than any scientist of our time how profound were the errors made by men before then. He did write a letter to Roosevelt about atomic energy. I think this was in part his agony at the evil of the Nazis, in part not wanting to harm any one in any way; but I ought to report that that letter had very little effect, and that Einstein himself is really not answerable for all that came later. I believe he so understood it himself.”
More along the same theme. You see how his rhetoric is shaped by his friendship and alliance to the same cause?
This was of course great, but nothing to do with Einstein’s mostly plagiarized hypotheses that made him famous and gave him a Nobel Prize.
Oppenheimer explicitly said in that speech that he didn’t think Einstein deserved any of the blame, nor did he deserve any credit for it. I was directly addressing your claims that Oppenheimer blamed Einstein, even partially.
Also, Einstein didn’t get the Nobel for either special or general relativity, and it’s entirely beside the point that a thought experiment need not be physically realistic to give insight.
OK, in THAT speech, sure.
The Nobel citation reads that Einstein is honoured for “services to theoretical physics, and especially for his discovery of the law of the photoelectric effect”
But he didn’t discover the photoelectric effect (Hertz). Nor was he first to suggest light is discrete packets of energy (Planck). Nor did he coin the term “photon” (Compton).
Einstein never did any physical experiments to discover anything.
Many people feel this reference to theoretical physics is relativity by the backdoor.
The photoelectric effect was seen as a paradox until Einstein was able to explain it using Planck’s theory of quantization (which, incidentally, Planck never performed any experiments either, neither did Maxwell, Dirac, Schrödinger, Boltzmann, etc.; that’s what theoretical physics is.). Nobody else thought to do so.
This explanation resolved more than just the paradox, and was able to make testable predictions on the nature of the photoelectric effect which were later confirmed. Not only that, but his explanation was one of the core principles to the formation of quantum mechanics.
Whatever. Einstein is nothing without Planck and Millikan.
I use Planck’s formula to convert a spectrum into a power flux. Einstein didn’t need to exist for me to do that.
It’s a shame they gave a prize to Einstein before Millikan.
The debate in 1921 was whether to have ZERO prize or give it to Einstein. It’s obvious it was more of a political move, as usually there are several contenders every year.
There was no debate about which candidate to pick, and in haste they gave it to Einstein.
Even so, he didn’t get the Nobel prize for either theory of relativity.
And Planck is nothing without Stefan or Boltzmann. And both of those would be nothing without Maxwell, Clausius, And they would be nothing without Ampere, Faraday, Gauss, etc. And so on and so on. Even Newton and Huygens would be nothing without Copernicus, Brahe, Kepler, Galileo, …
Physics, and indeed science in general, is built on the improvement and extension of the work of those that have come before.
But Planck offered something new. He proposed E=hf.
Once you offer something like that, it’s obvious. Quanta is already in the formula.
And Einstein proposed K = hf – W for the photoelectric effect, which was later verified. Like I said, he was the one that resolved a major paradox of the time.
He was also the first to propose that Planck’s E=hf meant that light always came in discrete packets. Before then, everyone thought it was just a mathematical trick to make the equation give the right values. Using that proposition, he explained why the photoelectric effect didn’t obey Maxwell’s equations. His explanation was rejected for years because it ran contrary to the prevailing idea of the theory of light radiation, until Millikan verified it experimentally, which he did only because Einstein proposed it.
He also proposed Ruv – 1/2 R guv = Tuv, which is probably one of the most revolutionary equations in physics.
There are wave-only theorists that have no problem explaining light with just waves.
Isn’t K = hf – W just the first law of thermodynamics with different letters and Q=hf ?
The wave-only attempt at explaining the photoelectric effect is exactly what lead to the paradox that Einstein resolved.
K isn’t the internal energy of a system, it’s the kinetic energy of a photoelectron ejected by a quantum of light out of a potential of W; though it is inspired by the first law of thermo. That’s why W is called the work function, even though no work is actually being done.
There are wave-only theorists today! They don’t see any paradox. The “paradox” is a created narrative.
1st LoT doesn’t have “internal energy of a system”. It has a CHANGE of internal energy of a system. An ejected photon would cause a change.
It really is just a restatement of 1st LoT. Not impressed at all.
Can you cite an example of one of them explaining the photoelectric effect with wave-only physics?
No, because I don’t have a link to everything I read. But I can tell you that if you assume it exists and are open minded … you will easily find it.
That’s not how science works, and leaves one open to believing in things that don’t actually exist except in the minds of those that want it to.
Einstein deserves credit. However, the glorification of him is probably due to a very one-sided promotion of him. It is always negative when such a “climate” arises. It is just as forbidden to criticize Einstein as it is to criticize negative consequences of immigration or negative effects of transitioning to renewables. Do it, and you are instantly turned into a fool that nobody wants to have a discussion with anymore.
I found this book a delight. A balancing weight to the mad glorification of Einstein.
Of course he deserves credit. Being wrong about a lot of things tells others what not to do.
I’m not right about a lot of things either, but I don’t publish papers and insist that they’re right – causing others to chase phantoms. Nor do I hire/persuade people to slander the experiments of others.
At least he confessed in private that he worries he may have achieved nothing that will last.
“Do it, and you are instantly turned into a fool that nobody wants to have a discussion with anymore.”
Ha! I get that all the time in my professional work discussing things with non-experts and experts that will be shown to be wrong.
There’s a parameter in HITRAN’s database named after Einstein. So maybe he did something right. Or he plagiarized that too?
He was certainly great at self-promotion and calling his critics “racist” when he couldn’t address their reasonable arguments.
“If I don’t understand physics, how did I get the right answer for a real world nuclear reactor?”
You didn’t. A “real-world” nuclear reactor cannot maintain a perfectly constant temperature.
“The sphere I use has a surface area of 1 square meter.” – And of course that is not Willis’s problem statement. Nor is your straw man that you can use a real-world nuclear reactor to satisfy his constant energy source parameter.
“My post is a thought experiment.”
Your post claims that Willis’s solution to his problem is wrong. It is not, hence your post is wrong. Face it, you either did not understand the parameters set up in the problem, or you purposefully lied about them.
“Willis’ setup is science fiction, mine is REAL.”
Are you sure? What “real-world” nuclear reactor will fit inside a sphere with 1m^2 of surface area? What object has an emissivity of 1? Keep digging that hole.
“Conduction is a loss.” Nope. Conduction does not annihilate energy. It is not a loss.
“I don’t use steady flux.” Are you sure? If you are not calculating steady state, then where is your time derivative term?
Face it. You don’t understand basic physics.
‘A “real-world” nuclear reactor cannot maintain a perfectly constant temperature.’
Sure, it can’t keep it exactly perfect, but that’s the aim!
‘What “real-world” nuclear reactor will fit inside a sphere with 1m^2 of surface area.’
The kind where all energy is made to generate heat, not electricity.
Ever shorted a battery? Gets hot, doesn’t it? Extrapolate to nuclear …
‘Nope. Conduction does not annihilate energy. It is not a loss.’
That’s not what a loss means, bozo. It means the external flux is reduced. That’s why we insulate things. The energy is not annihilated, it goes into the insulating material.
‘If you are not calculating steady state, then where is your time derivative term?’
Steady state and steady flux are not the same thing. Steady state is when there are no further temperature changes. Steady flux is when the flux doesn’t change. The two can overlap depending on your problem, but they are not the same.
“What object has an emissivity of 1?”
That is just such a petty argument. Don’t give yourself away so easily.
“The kind where all energy is made to generate heat, not electricity.
Ever shorted a battery? Gets hot, doesn’t it? Extrapolate to nuclear ”
Are you not aware that in order to maintain a constant temperature, a control system is needed to control the reaction and thus the heat output?
What “real world” control system are you using that fits inside a sphere with 1 m^2 of surface area?
“That’s not what a loss means, bozo. It means the external flux is reduced.”
If the external flux is reduced, then where has the energy gone?
“The energy is not annihilated, it goes into the insulating material.”
And at steady state, the same amount of energy that goes into the material, goes out. You don’t understand what insulation does. It changes the temperature gradient required to pass the same amount of heat.
“Steady state is when there is no further temperature changes.”
And that is what you calculated. You are so clueless that you don’t even understand the calculation that you performed. You have no time-dependence in your calculations and thus you are attempting to calculate steady state conditions.
“That is just such a petty argument.”
The pettiness has been set up by you from the beginning. You aren’t calculating anything “real-world”. You didn’t understand the specifications of Willis’s problem and now you are squirming to cover your ass.
You don’t understand basic physics.
Do you even read the material given to you?
It’s not just a steady temperature that you need a control mechanism for. You need to cut the fuel to prevent a critical state, and so the story is the same for a constant flux – which is never done.
You do realize that if citizens do not use the electricity made by a nuclear plant … it doesn’t continue to “pump” the same energy. You understand what would happen if a nuclear plant produced a constant flux while demand dropped?
Do you even have a brain?
As for the control mechanism, it’s not a problem. There’s a company that designed and is bringing online reactors that produce 60 MW and take up [much] less than 1000 cubic meters for the entire operating chain. Can you math? Well, maybe it doesn’t scale perfectly, but heyo, we couldn’t build tiny drones decades ago. So there is little doubt about the feasibility.
“You don’t understand what insulation does. ”
You don’t know what words mean, since you interpret them to your own liking. If you don’t like the word “loss”, how about “sink”? Have you heard of a heat sink?
Do you put a heat sink on your computer processor to warm or cool it? Think hard about this. Time’s up … you definitely want a good heat sink to cool your processor.
The term “sink” was precisely created as an analogy to what happens in a sink: water falls down the drain, out of sight, and reduces the water content in your sink. The water is “lost” from your POV.
‘You are so clueless that you don’t even understand the calculation that you performed.
You have no time-dependence in your calculations and thus you are attempting to calculate steady state conditions.’
You are so incredibly obtuse. Yes! I’m doing steady state without a constant flux. You are more than welcome to run this setup through time, and you will get the same result.
The temperature in a nuclear reactor is CONTROLLED. Fuel flow is reduced.
As soon as you raise dT/dt, your dq/dt is reduced, you uneducable parrot.
“You didn’t understand the specifications of Willis’s problem.”
I very much understood the specifications of Willis’s problem. That’s why you accused me first of a strawman, remember? Be consistent in your accusations, jerk face. I already explained to someone else that I can be tricky, and modeled a REAL WORLD scenario.
Call it a bait and switch if you will, but it got good engagement. Much better than usual.
“You don’t understand basic physics”
I’m glad you understand the physics of a real world nuclear reactor. You should apply for head of a nuclear plant with your knowledge that flux must be kept constant without regard to plant temperature and customer demand.
Now take the time to figure out why nuclear techs would not want to build a “constant flux” reactor in space. Seriously, give it some good thought.
“It’s not just a steady temperature that you need a control mechanism for. You need to cut the fuel to prevent a critical state, and so the story is the same for a constant flux – which is never done.”
I’m not the one that has a problem with the need for a control mechanism. YOU do. There are NO real-world nuclear reactors that can fit inside a sphere with 1m^2 surface area. NONE. YOUR solution is make-believe. It’s not real-world in any way, shape, or form.
“You do realize that if citizens do not use the electricity made by nuclear plant … it doesn’t continue to “pump” the same energy. You understand what would happen if a nuclear plant produced a constant flux while demand dropped?”
Entirely irrelevant to Willis’s problem. The problem statement REQUIRED a constant energy source. YOU did not solve the stated problem. You made up some other make-believe problem.
“Do you even have a brain?” Clearly my brain works far better than yours. If you had a reasonable IQ you would have recognized how ludicrous your arguments are and have given up by now.
“There’s a company that designed and is bringing online reactors that produce 60 MW and take up [much] less than 1000 cubic meters for the entire operating chain.”
Name it and show that ALL dimensions will fit inside a 1m^2 surface area sphere, including ALL fuel rods, and control system. Then you can go ahead and explain how this REAL-WORLD system can operate while entirely enclosed in a perfect blackbody spherical shell.
“Have you heard of a heat sink?” Indeed I have. I also know that it has nothing to do with your solution. Your solution has no time dependence and thus it is a solution for steady state. This should be obvious to anyone that understands basic heat transfer.
“Yes! I’m doing steady state without a constant flux.” WRONG. The flux is IDENTICAL through any surface that encloses the heat source. You REDUCED the heat output of the source for the problem with the surrounding shell, thus VIOLATING the conditions put forth in Willis’s problem.
“The temperature in a nuclear reactor is CONTROLLED. Fuel flow is reduced.” In YOUR make-believe problem that may be the case, but in the problem that Willis proposed the HEAT OUTPUT of the source is controlled, thus fuel flow is NOT reduced. Are you so obtuse that you don’t think a control system can be fabricated to control heat output? If you were as smart as you think you are then you would have come up with a real-world system that DOES satisfy Willis’s problem statement instead of one that DOES NOT.
“As soon as you raise dT/dt, your dq/dt is reduced,” Not if you wanted to solve the ACTUAL problem that Willis posed. Instead you made up some different make-believe problem. Why is that?
“I very much understood the specifications of Willis’s problem, you uneducable parrot.” Then why didn’t you set up a real-world system that satisfied his problem statement, you incompetent ignoramus?
“That’s why you accused me first of a strawman, remember?” I’m STILL accusing you of a straw man. That’s all you have because you weren’t smart enough to come up with a real-world system that is consistent with Willis’s problem statement.
“I already explained to someone else that I can be tricky, and modeled a REAL WORLD scenario.” Tricky? You aren’t even intelligent enough to understand that you haven’t solved a real-world problem, and you aren’t smart enough to figure out a real-world system that is consistent with the actual problem statement.
“Call it a bait and switch if you will” Actually, I call it sheer stupidity and now you are trying to cover your ass.
“I’m glad you understand the physics of a real world nuclear reactor.” You keep using that phrase “real-world” but you don’t even have a clue about how real-world reactors work.
“Now take the time to figure out why nuclear techs would not want to build a “constant flux” reactor in space.”
It makes no difference if they want to build it or not. Willis’s problem requires a constant power source. He never said it had to be a nuclear reactor. You really are logically challenged. Keep digging that hole though. It’s getting deeper.
The parrot passed the age where it can learn new phrases.
Good one Zoe. I also like to eat feces.
“There’s a company that designed and is bringing online reactors that produce 60 MW and take up [much] less than 1000 cubic meters for the entire operating chain. Can you math?”
I can’t believe I let that pass. I assumed that you were intelligent enough to make at least one valid point. So you’ve now admitted that there is no real-world nuclear reactor that will fit inside a sphere with surface area of 1m^2.
So, you can dispense with your “real-world” nonsense now.
In my experiment:
r1 = 0.282 m × 0.75 = 0.2115 m (0.282 m being the radius of a sphere with a surface area of 1 m²)
V = (4/3) × π × r1³ = 0.0396 m³
60 MW / 1000 m³ = 60 kW/m³
60 kW/m³ × 0.0396 m³ = 2376 W
2376 W > 235 W QED.
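For what it’s worth, the arithmetic above is easy to reproduce; this little sketch only rechecks the numbers as stated (the 1 m² sphere radius, the 0.75 scaling, and the assumed 60 MW per 1000 m³ power density) and says nothing about whether such a reactor could actually be built.

import math

r_unit_sphere = math.sqrt(1.0 / (4.0 * math.pi))  # radius of a sphere with 1 m^2 area, ~0.282 m
r1 = 0.75 * r_unit_sphere                          # the 0.75 scaling used above, ~0.2115 m

volume = (4.0 / 3.0) * math.pi * r1 ** 3           # ~0.0396 m^3

power_density = 60e6 / 1000.0                      # assumed 60 MW per 1000 m^3, i.e. 60 kW/m^3
power = power_density * volume                     # ~2.4 kW

print(f"r1 = {r1:.4f} m, V = {volume:.4f} m^3, P = {power:.0f} W (compare 235 W)")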
You can’t even do basic math.
You’re right, Zoe. Now I see that I’m a moron.
We’ve been able to build toy helicopters and toy race cars, and basic logic should extend to us being able to build nuclear reactors inside smaller objects.
You scaled linearly and still had a 10x buffer left over. That is impressive.
Aww, poor Zoe. I’ve been more abusive than anyone in the history of your blog. I’m sorry, but I can’t control myself. You may as well go ahead and have your first ban.
I see that you couldn’t bear being shown to be wrong Zoe. That’s OK. It’s a common trait of dishonest deniers like you that suffer from the Dunning-Kruger effect. Enjoy your blissful ignorance.
Regardless of things being explained to you repeatedly, you ignore it, and continue to be an obnoxious abusive ass.
“Har har har … nuclear doesn’t scale linearly … I don’t actually know how it scales, and I’ll also ignore the 10x buffer you left har har har you ignorant Zoe.”
“You should’ve done exactly what Willis did … imagine a planet the size of Earth that can emit like a brown dwarf >10x the size of Jupiter har har har because that is more realistic than imagining innovation in nuclear physics”
“I hate you for having different criteria than Willis. You were supposed to just rewrite his post in your own words, because that is interesting.”
“Regardless of things being explained to you repeatedly, you ignore it”
You’re describing yourself, Zoe. We’ve now established that there is no “real-world” nuclear reactor that can fit inside a sphere of the size you analyzed.
I don’t hate you Zoe. I pity you. You think that you have uncovered something profound when all you have done is to show that you don’t understand basic science.
“We’ve now established”
Have we? The question is if it could be done, not whether it has.
YOU JUST KEEP REPEATING that it’s not possible. Evidence? Nothing but your say so.
“you don’t understand basic science.”
Classic loser bully tactic.
You still didn’t get a different answer than me under my conditions.
You really still believe someone would be crazy enough to make a fixed flux nuclear reactor.
I have to admit this was entertaining: I clearly came in late to check Zoe’s math. But why do scientists (amateur or not) insist on thought experiments? Isn’t physics based on real experiments? I came across this recent article by Hermann Harde (https://scc.klimarealistene.com/produkt/verification-of-the-greenhouse-effect-in-the-laboratory/) who seems to have done a proper experiment showing that the greenhouse effect “works as advertised” (although he admits there’s nothing to be worried about…). I am quite puzzled, because in my small way I did some tests (at constant flux! … so respecting Willis’ requirement) that showed “no back-radiation heating” (this was discussed a bit in the comments at https://phzoe.com/2020/03/04/dumbest-math-theory-ever/). I was happy with my result… but now it seems there are new data of much better quality than mine that tell the opposite…
Now that I had some time to look through it, I noticed this is not the full paper. I considered purchasing it, but I noticed something strange that stopped me: Do they have TWO power inputs?
This setup is too complicated to make sense of. I think I could, but the whole thing smells too funny to even bother. Sorry.
Dumb question: why isn’t the solution of the nuclear core @ r2 + the shell @ r3 as follows: T2 = 253.73 K (dictated by the imposed flux of 235 W/m²), T3 = 213.36 K (= T2/2^0.25)? Total emitted power to space = 235 W (core) – 117.5 W (shell towards the inside) + 117.5 W (shell towards the outside) = 235 W, as imposed.
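For reference, the temperatures quoted in the comment above follow directly from the Stefan-Boltzmann law with an assumed emissivity of 1; a quick check, assuming nothing beyond that law:

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

flux = 235.0                   # imposed flux, W/m^2
T2 = (flux / SIGMA) ** 0.25    # core surface temperature for emissivity 1
T3 = T2 / 2 ** 0.25            # shell temperature as computed in the comment above

print(f"T2 = {T2:.2f} K, T3 = {T3:.2f} K")   # ~253.73 K and ~213.36 K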
T1 is held constant like a real nuclear reactor.
OK, then let’s say that I’m just about right (0.13°C difference for T3). For us engineers such a difference is invisible 🙂 |
A mirror is an optical device which can reflect light. Usually, however, only those devices are meant where the angle of reflection equals the angle of incidence (see Figure 1). This means that diffraction gratings, for example, are not considered as mirrors, although they can also reflect light.
Mirror surfaces do not need to be flat; there are mirrors with a curved (convex or concave) reflecting surface (see below).
Properties of a Mirror
Various basic properties characterize a mirror:
- The reflectivity (or reflectance) is the percentage of the optical power which is reflected. Generally, it depends on the wavelength and the angle of incidence, for non-normal incidence often also on the polarization direction (see the sketch after this list).
- The reflection phase is the phase shift of reflected light, i.e., the optical phase change obtained when comparing light directly before and directly after the reflection. The phase shift can depend on the wavelength and the polarization direction. If the phase change is different between s and p polarization (for non-normal incidence), the polarization state of incident light will in general be modified, except if it is purely s or p polarization.
- Mirrors work only in a limited wavelength range, i.e., they exhibit the wanted reflectivity only within that range. The width of that range is called the reflection bandwidth. Of course, its exact value generally depends on the angle of incidence, the polarization and on the tolerance for the reflectivity.
- Similarly, there can be a limited range of angles of incidence, particularly for dielectric mirrors.
- The surface shape (e.g. spherically convex curved) is also relevant, see below.
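As a minimal illustration of how reflectivity depends on the angle of incidence and the polarization, here is a sketch of the Fresnel equations for a single uncoated interface; the air-to-glass index of 1.5 is an assumed example value, and a real mirror coating is of course more complex than a single interface.

import math

def fresnel_reflectivities(n1, n2, theta_i_deg):
    # Power reflectivities R_s and R_p for a single interface between
    # two lossless media, with light going from index n1 into index n2.
    theta_i = math.radians(theta_i_deg)
    sin_t = n1 * math.sin(theta_i) / n2          # Snell's law
    if abs(sin_t) > 1.0:
        return 1.0, 1.0                          # total internal reflection
    theta_t = math.asin(sin_t)
    r_s = (n1 * math.cos(theta_i) - n2 * math.cos(theta_t)) / \
          (n1 * math.cos(theta_i) + n2 * math.cos(theta_t))
    r_p = (n2 * math.cos(theta_i) - n1 * math.cos(theta_t)) / \
          (n2 * math.cos(theta_i) + n1 * math.cos(theta_t))
    return r_s ** 2, r_p ** 2

# Assumed example: air-to-glass interface (n = 1.5) at a few angles of incidence
for angle in (0, 45, 56.3, 80):
    Rs, Rp = fresnel_reflectivities(1.0, 1.5, angle)
    print(f"{angle:5.1f} deg:  R_s = {Rs:.3f}, R_p = {Rp:.3f}")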
Additional properties can be relevant in various applications:
- A high surface quality is often important in laser technology. The surface flatness of laser mirrors and others is often specified in wavelengths, e.g. λ / 10. As surface defects are largely a random phenomenon, only worst-case or statistical specifications can be given. For small localized defects, it is common to give "scratch & dig" specifications according to the US standard MIL-PRF-13830B: there are two numbers, quantifying the severity of scratches (shallow markings or tearings) and digs (pit-like holes) basically by a comparison of their visual appearance with those of defects in certain standard parts. A quality figure for simple parts could be 80-50, a commercial quality is 60-40, laser mirrors should normally have 20-10 or better, and high precision parts can have 10-5. There is also the standard ISO 10110-7, which contains a more rigorous definition based on the size of defects rather than only their visual appearance.
- For use with high-power lasers, the optical damage threshold may be of interest – particularly in conjunction with pulsed lasers, as these tend to have high peak powers. It is often specified for nanosecond pulses.
Types of Mirrors
Metal-coated Mirrors – Back Side and First Surface Mirrors
Ordinary mirrors as used in households are often silver mirrors on glass. These basically consist of a glass plate with a silver coating on the back side. The coating is thick enough to suppress any significant transmission from any side. Nevertheless, the reflectivity is substantially below 100%, since there are absorption losses of a few percent (for visible light) in the silver layer.
Household mirrors typically have the coating on the back side, so that one has a robust glass surface outside, which can be cleaned easily, and the coating on the back side (with an additional layer) is well protected. For other applications, one uses first surface mirrors, where the light is incident directly on the coating and does not reach the mirror substrate.
For use in laser technology and general optics, more advanced types of metal-coated mirrors have been developed. These often have additional dielectric layers on top of the metallic coating in order to improve the reflectivity and/or to protect the metallic coating against oxidation (enhanced and protected mirrors). Different metals can be used, e.g. gold, silver, aluminum, copper, beryllium and nickel/chrome alloys. Silver and aluminum mirrors are particularly popular. Others are mostly used as infrared mirrors.
The article on metal-coated mirrors gives more details.
Dielectric Mirrors
The most important type of mirror in laser technology and general optics is the dielectric mirror. This kind of mirror contains multiple thin dielectric layers. One exploits the combined effect of reflections at the interfaces between the different layers. A frequently used design is that of a Bragg mirror (quarter-wave mirror), which is the simplest design and leads to the highest reflectivity at a particular wavelength (the Bragg wavelength).
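The reflectivity of such a quarter-wave stack can be estimated with the standard characteristic-matrix (transfer-matrix) method. The sketch below is only illustrative; the layer indices, pair count and design wavelength are assumed values rather than data from this article.

import cmath
import math

def quarter_wave_stack_reflectivity(n_H, n_L, pairs, n_sub, n0, lam, lam0):
    # Normal-incidence reflectivity of an air | (HL)^pairs | substrate
    # quarter-wave stack, via the standard characteristic-matrix method.
    layers = []
    for _ in range(pairs):
        layers.append((n_H, lam0 / (4 * n_H)))   # quarter-wave high-index layer
        layers.append((n_L, lam0 / (4 * n_L)))   # quarter-wave low-index layer

    # Accumulate the 2x2 characteristic matrix of the whole stack
    m11, m12, m21, m22 = 1.0 + 0j, 0j, 0j, 1.0 + 0j
    for n, d in layers:
        delta = 2 * math.pi * n * d / lam        # phase thickness of this layer
        c, s = cmath.cos(delta), cmath.sin(delta)
        a11, a12, a21, a22 = c, 1j * s / n, 1j * n * s, c
        m11, m12, m21, m22 = (m11 * a11 + m12 * a21, m11 * a12 + m12 * a22,
                              m21 * a11 + m22 * a21, m21 * a12 + m22 * a22)

    B = m11 + m12 * n_sub                        # apply the substrate admittance
    C = m21 + m22 * n_sub
    r = (n0 * B - C) / (n0 * B + C)              # amplitude reflection coefficient
    return abs(r) ** 2

# Assumed example: 15 pairs of TiO2/SiO2-like layers on glass, designed for 1064 nm
R = quarter_wave_stack_reflectivity(n_H=2.35, n_L=1.46, pairs=15,
                                    n_sub=1.52, n0=1.0, lam=1064e-9, lam0=1064e-9)
print(f"Reflectivity at the Bragg wavelength: {R:.6f}")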
In contrast to some metal-coated mirrors, dielectric mirrors are usually made as first surface mirrors, which means that the reflecting surface is at the front surface, so that the light does not propagate through some transparent substrate before being reflected. That way, not only are possible propagation losses in the transparent medium avoided, but most importantly also additional reflections at the front surface, which could be particularly relevant for non-normal incidence.
Generally, dielectric mirrors have a limited reflection bandwidth. However, there are specially optimized broadband dielectric mirrors, where the reflection bandwidth can be hundreds of nanometers. Some of those are used in ultrafast laser and amplifier systems; they are sometimes called ultrafast mirrors, and they also need to be optimized in terms of chromatic dispersion.
Laser mirrors as used to form laser resonators, for example, are also usually dielectric mirrors, having a particularly high optical quality and often a high optical damage threshold. Some of them are used as laser line optics, i.e., only with certain laser lines. Also, there are supermirrors with a reflectivity extremely close to 100%, and dispersive mirrors with a systematically varied thin-film thickness. They can be used for high-Q optical resonators, for example.
See the article on dielectric mirrors for more details.
Dichroic Mirrors
Dichroic mirrors are mirrors which have substantially different reflection properties for two different wavelengths. They are usually dielectric mirrors with a suitable thin-film design. For example, they can be used as harmonic separators in setups for nonlinear frequency conversion.
Curved Mirrors
While many mirrors have a plane (flat) reflecting surface, many others are available with a curved (convex or concave) surface, for example for focusing laser beams or for imaging applications.
Most curved mirrors have a spherical surface, characterized by some radius of curvature R. A mirror with a concave (inwards-curved) surface acts as a focusing mirror, while a convex surface leads to defocusing behavior. Apart from the change of beam direction, such a mirror acts like a lens. For normal incidence, the focal length (disregarding its sign) is simply R / 2, i.e., half the curvature radius. For non-normal incidence with an angle θ against the normal direction, the focal length is (R / 2) · cos θ in the tangential plane and (R / 2) / cos θ in the sagittal plane.
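Those relations are simple enough to capture in a few lines; the 200 mm radius of curvature and the 30° angle in this sketch are just assumed example values.

import math

def spherical_mirror_focal_lengths(R, theta_deg=0.0):
    # Focal lengths of a spherical mirror with radius of curvature R,
    # for an angle of incidence theta against the surface normal.
    theta = math.radians(theta_deg)
    f_tangential = (R / 2) * math.cos(theta)   # in the plane of incidence
    f_sagittal = (R / 2) / math.cos(theta)     # perpendicular to it
    return f_tangential, f_sagittal

# Assumed example: a concave mirror with R = 200 mm, at normal and 30 deg incidence
for angle in (0, 30):
    ft, fs = spherical_mirror_focal_lengths(0.200, angle)
    print(f"theta = {angle:2d} deg: f_tan = {ft*1e3:.1f} mm, f_sag = {fs*1e3:.1f} mm")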
There are also parabolic mirrors, having a surface with a parabolic shape. For tight focusing, one often uses off-axis parabolic mirrors, which allow one to have the focus well outside the incoming beam.
There are deformable mirrors, where the surface shape can be controlled, often with many degrees of freedom (possibly several thousands). Such mirrors are mostly used in adaptive optics for correcting wavefront distortions.
Mirror substrates in optics and laser technology often have a cylindrical form, for example with a diameter of 1 inch and a thickness of a couple of millimeters. However, there are also substrates with a rectangular, elliptical or D-shaped front surface, for example. Besides, there are prism mirrors, where a reflecting coating is placed on a prism, and retroreflectors.
See also: mirror substrates, reflectivity, reflectance, metal-coated mirrors, dielectric mirrors, Bragg mirrors, quarter-wave mirrors, crystalline mirrors, deformable mirrors, laser mirrors, output couplers, supermirrors, dichroic mirrors, parabolic mirrors, cold mirrors, hot mirrors
and other articles in the category general optics |
History of Kentucky
The prehistory and history of Kentucky span thousands of years, and have been influenced by the state's diverse geography and central location. Archaeological evidence of human occupation in Kentucky begins approximately 9,500 BCE. A gradual transition began from a hunter-gatherer economy to agriculture c. 1800 BCE. Around 900 CE, the Mississippian culture took root in western and central Kentucky; the Fort Ancient culture appeared in eastern Kentucky. Although they had many similarities, the Fort Ancient culture lacked the Mississippian's distinctive, ceremonial earthen mounds.
The first Europeans to visit Kentucky arrived in the late 17th century via the Ohio River from the northeast, and later, in the late 18th century, from the southeast through natural passes in the Appalachian Mountains. Early settlers, arriving pursuant to the Treaty of Fort Stanwix (1768), came into conflict with the Shawnee, Cherokee and other tribes in their hunting grounds south of the Ohio. This launched Lord Dunmore's War in 1774, and during the Revolution the conflict became the Cherokee-American wars, which lasted until after statehood. A series of county divisions of Virginia Colony west of the Appalachians resulted in Kentucky County in 1777, then the District of Kentucky, and finally its admission into the Union as the 15th state on June 1, 1792.
- The economy rested on family farms, with a rapid growth in tobacco for the national market.
- Slavery: Until 1865 slavery played a significant role in Kentucky's economy and politics. The state remained a border state during the Civil War, with both Union and Confederate sympathizers, and the issue of slavery ultimately split the state's loyalties between North and South. Union forces controlled the state during the war. Slavery was finally abolished by the 13th Amendment in late 1865.
- Reconstruction: Following the Civil War, Kentucky underwent a period of Reconstruction, during which the state's political and social structures were reshaped to reflect the post-war era. Blacks gained the right to vote in Kentucky and never lost the right.
- Feuds: Kentucky has a long and complicated history of feuds, especially in the mountains. They were rooted in political, economic, and social tensions. Violence climaxed with the assassination of Governor William Goebel in 1900.
- Industrialization: The late 19th and early 20th centuries saw the rise of industrialization in Kentucky, with the coal mining and manufacturing industries playing a significant role in the state's economy.
- Prohibition: In 1919, the 18th Amendment to the U.S. Constitution was ratified, prohibiting the manufacture, sale, and transportation of alcohol (it took effect in 1920). Kentucky, a major producer of bourbon and other distilled spirits, saw significant social and economic changes as a result, with moonshining in the mountains providing liquor for the cities to the north.
- Women's Suffrage: In 1920, the 19th Amendment was ratified, granting women the right to vote.
- Civil Rights: The mid-20th century saw significant civil rights struggles in Kentucky and across the United States, with activists fighting for equal rights for African Americans and other marginalized groups.
- Environmentalism: Throughout the latter half of the 20th century and into the 21st, environmental issues have become increasingly important in Kentucky. Especially important are concerns over the coal mining industry's impact on the environment and public health leading to political and social changes.
- Globalization: The late 20th and early 21st centuries have seen significant economic changes as globalization has become a major force in the American and global economies.
- Immigration: In recent decades, Kentucky has seen a significant increase in immigration, leading to demographic changes and debates over immigration policy.
Etymology and nickname
The etymology of "Kentucky" or "Kentucke" is uncertain. One suggestion is that it is derived from an Iroquois name meaning "land of tomorrow". According to Native America: A State-by-State Historical Encyclopedia, "Various authors have offered a number of opinions concerning the word's meaning: the Iroquois word kentake meaning 'meadow land', the Wyandotte (or perhaps Cherokee or Iroquois) word ken-tah-the meaning 'land of tomorrow', the Algonquian term kin-athiki referring to a river bottom, a Shawnee word meaning 'at the head of a river', or an Indian word meaning land of 'cane and turkeys'". Kentucky's nickname, the "Bluegrass State", derives from the imported grass grown in the central part of the state; "The nickname also recognizes the role that the Bluegrass region has played in Kentucky's economy and history."
Pre-European habitation and culture
Paleo-Indian era (9500 – 7500 BCE)
Based on evidence in other regions, humans were probably living in Kentucky before 10,000 BCE; however, archaeological evidence of their occupation has not yet been documented. Stone tools, particularly projectile points (arrowheads) and scrapers, are primary evidence of the earliest human activity in the Americas. Paleo-Indian bands probably moved their camps several times per year. Their camps were typically small, consisting of 20 to 50 people. Band organization was egalitarian, with no formal leaders and no social ranking or classes. Linguistic, blood-type and molecular evidence, such as DNA, indicate that indigenous Americans are descendants of east Siberian peoples.
At the end of the last ice age, between 8000 and 7000 BCE, Kentucky's climate stabilized; this led to population growth, and technological advances resulted in a more sedentary lifestyle. The warming trend killed Pleistocene megafauna such as the mammoth, mastodon, giant beavers, tapirs, short-faced bear, giant ground sloths, saber-toothed tiger, horse, bison, muskox, stag-moose, and peccary. All were native to Kentucky during the ice age, and became extinct or moved north as the ice sheet retreated.
No skeletal remains of Paleo-Indians have been found in Kentucky. Although many Paleo-Indian Clovis points have been discovered, there is little evidence at Big Bone Lick State Park that they hunted mastodons.
The radiocarbon evidence indicates that mastodons and Clovis people overlapped in time; however, other than one fossil with a possible cut mark and Clovis artifacts that are physically associated with but dispersed within the bone-bearing deposits, there is no incontrovertible evidence that humans hunted Mammut americanum at the site.
Archaic period (7500 – 1000 BCE)
The extinction of large game animals at the end of the ice age changed the area's culture by 7500 BCE. By 4000 BCE, the people of Kentucky exploited native wetlands. Large shell middens (trash piles, ancient landfills) are evidence of clam and mussel consumption. Although middens have been found along rivers, there is limited evidence of riverbank Archaic occupation before 3000 BCE. Archaic Kentucky natives' social groups were small: a few cooperating families. Large shell middens, artifact caches, human and dog burials, and burnt-clay flooring indicate permanent Archaic settlements. White-tailed deer, mussels, fish, oysters, turtles, and elk were significant food sources.
Natives developed the atlatl, which launched spears faster. Other Archaic tools were grooved axes, conical and cylindrical pestles, bone awls, cannel coal beads, hammerstones, and bannerstones. "Hominy holes" – depressions in sandstone made by the grinding of hickory nuts or seed to make them easier to use for food – were also used.
People buried their dogs in shell (mussel) mound sites along the Green and Cumberland Rivers. At the Indian Knoll site, 67,000 artifacts have been uncovered; they include 4,000 projectile points and twenty-three dog burials, seventeen of which are well-preserved. Some dogs were buried alone; others were buried with their masters, with adults (male and female), or with children. Archaic dogs were medium-sized, about 14–18 inches (36–46 cm) tall at the shoulder, and were probably related to the wolf. Dogs had a special place in the lives of Archaic and historic indigenous peoples. The Cherokee believed that dogs were spiritual, moral and sacred, and the Yuchi (a tribe who lived near the Green River) may have shared that belief.
The Indian Knoll site, along the Green River in Ohio County, is over 5,000 years old. Although evidence of earlier settlement exists, the area was most densely settled from c. 3000 to 2000 BCE (when the climate and vegetation neared modern conditions). The Green River floodplain provided a stable environment, which supported agricultural development; nearby mussel beds facilitated permanent settlement. At the end of the Archaic period, natives had cultivated a form of squash for its edible seeds and use as a container.
Woodland period (1000 BCE – 900 CE)
Native Americans began to cultivate several species of wild plants c. 1800 BCE, transitioning from a hunter-gatherer society to one based on agriculture. In Kentucky, the Woodland period followed the Archaic period and preceded the agricultural Mississippian culture. It was characterized by the development of shelter construction, stone and bone tools, textile manufacturing, leather crafting, and cultivation. Archaeologists have identified a distinct Middle Woodland culture, the Crab Orchard culture, in the western part of the state. The remains of two groups, the Adena (early Woodland) and the Hopewell (middle Woodland), have been found in present-day Louisville, in the central bluegrass region and northeastern Kentucky.
The introduction of pottery, its widespread use, and the increased sophistication of its forms and decoration (first believed to have occurred around 1000 BCE) are major demarcations of the Woodland period. Archaic pots were thick, heavy, and fragile; Woodland pottery was more intricately designed and had more uses, such as cooking and storing surplus food. Woodland peoples also used baskets and gourds as containers. Around 200 BCE, maize cultivation migrated to the eastern United States from Mexico. The introduction of corn changed Kentucky agriculture from growing indigenous plants to a maize-based economy. In addition to cultivating corn, the Woodland people also cultivated giant ragweeds, amaranth (pigweed), and maygrass. The initial four plants known to have been domesticated were goosefoot (Chenopodium berlandieri), sunflower (Helianthus annuus var. macroscarpus), marsh elder (Iva annua var. macrocarpa), and squash (Cucurbita pepo ssp. ovifera). Woodland people grew tobacco, which they smoked ritually; they still used stone tools, especially for grinding nuts and seeds. They mined Mammoth and Salts Caves for gypsum and mirabilite, a salty seasoning. Shellfish was still an important part of their diet, and the most common prey were white-tailed deer. They continued to make and use spears; late in the Woodland period, however, the straight bow became the weapon of choice in the eastern United States (evidenced by smaller arrowheads during this period). In addition to bows and arrows, some southeastern Woodland peoples also used blowguns.
Between 450 and 100 BCE, Native Americans began to build earthen burial mounds. The Woodland Indians buried their dead in conical (later flat or oval) burial mounds, which were often 10 to 20 feet (3.0–6.1 m) high. The Woodland people were called "Mound Builders" by 19th-century observers.
The Eastern Agricultural Complex enabled Kentucky natives to change from a nomadic culture to living in permanent villages. They lived in bigger houses and larger communities, although intensive agriculture only began with the Mississippian culture.
Mississippian culture (900 – 1600 CE)
Maize became highly productive c. 900 CE, and the Eastern Agricultural Complex was replaced by the Mississippian culture's maize-based agriculture. Native village life revolved around planting, growing and harvesting maize and beans, which made up 60 percent of their diet. Stone and bone hoes were used by women for most cultivation. They grew the "Three Sisters" (maize, beans, and squash), which were planted together to complement each plant's characteristics. Beans climbed the cornstalks, and large squash leaves retained soil moisture and reduced weeds. White-tailed deer were the dominant game animals. Mississippian culture pottery was more varied and elaborate than that of the Woodland period (including painting and decoration), and included bottles, plates, pans, jars, pipes, funnels, bowls, and colanders. Potters added handles to jars, attaching human and animal effigies to some bowls and bottles. Elite Mississippians lived in substantial, rectangular houses atop large platform mounds. Excavations of their houses revealed burned clay wall fragments, indicating that they decorated their walls with murals. They lived year-round in large communities, some of which had defensive palisades, which had been established for centuries. An average Fort Ancient or Mississippian town had about 2,000 inhabitants. Some people lived on smaller farms and in hamlets. Larger towns, centered on mounds and plazas, were ceremonial and administrative centers; they were located near the Mississippi and Ohio River valleys and their tributaries: rivers with large floodplains.
A Mississippian culture developed in western Kentucky and the surrounding area, while a Fort Ancient culture dominated in the eastern portion of what became Kentucky. While the two cultures are similar in numerous ways, the Fort Ancient culture lacked the temple mounds and chiefs' houses of the Mississippian culture.
Mississippian sites in western Kentucky are at Adams, Backusburg, Canton, Chambers, Jonathan Creek, McLeod's Bluff, Rowlandtown, Sassafras Ridge, Turk, Twin Mounds and Wickliffe. The Wickliffe Mounds, in western Kentucky, were inhabited from 1000 to 1350 CE; two large platform mounds and eight smaller mounds were distributed around a central plaza. Its inhabitants traded with those of North Carolina, Wisconsin, and the Gulf of Mexico. The Wickliffe community had a social hierarchy ruled by a hereditary chief. The Rowlandton Mound Site was inhabited from 1100 to 1350 CE. The 2.4-acre (0.97 ha) Rowlandton Mound Site has a large platform mound and an associated village area, similar to the Wickliffe Mounds; these settlements were probably established by Late Woodland peoples. The Tolu Site was inhabited by Kentucky natives from 1200 to 1450 CE. It originally had three mounds: a burial mound, a substructure platform mound, and another mound of unknown function. The site has a central plaza and a large, 6.6-foot-deep (2.0 m) midden. A rare Cahokia-made Missouri flint clay 7-inch (180 mm) human-effigy pipe was found at this site. The Marshall Site was inhabited from 900 to 1300 CE, and the Turk and Adams sites from 1100 to 1500. The Slack Farm, inhabited from 1400 to 1650, had a mound and a large village. One thousand or more people could have been buried at the site's seven cemeteries, and some were buried in stone box graves. Native Americans abandoned a large, late-Mississippian village at Petersburg which had at least two periods of habitation: 1150 and 1400 CE.
The Mississippian period ends at about the time of the earliest French, Spanish, and English explorers. Seventeenth-century French explorers documented a number of tribes living in Kentucky until the Beaver Wars in the 1670s, including the Cherokee (in southeastern Kentucky caves and along the Cumberland River); the Chickasaw, in the western Jackson Purchase area (especially along the Tennessee River); the Delaware (Lenape) and Mosopelea (at the mouth of the Cumberland River); the Shawnee (throughout the state); and the Wyandot and Yuchi (on the Green River). Hunting bands of Iroquois, Illinois, Lenape and Miami also visited Kentucky.
Beaver Wars and Iroquois dominance
The Eskippakithiki settlement (18th century)
The archaeological evidence (or lack thereof) indicates that for 50 years following the Beaver Wars, there were no Native American settlements in Kentucky, until the appearance of Eskippakithiki. Historians do not think that singular settlement is part of a continuous Kentuckian Native American culture, but rather that it was transplanted from elsewhere, possibly a separatist band from one of the Shawnee towns along the Scioto River in Ohio, or a late Shawnee migration from eastern North Carolina.
Eskippakithiki (also known as Indian Old Fields) was Kentucky's last Native American (Shawnee) village, in the eastern portion of present-day Clark County, in the north central portion of the state. The name translates as "place of blue licks", in reference to the salt licks nearby. It existed from 1718 to 1754. A 1736 French census reported Eskippakithiki's population as two hundred families.
Eskippakithiki had a population of eight hundred to one thousand. The town was protected by a stout stockade some two hundred yards in diameter, and it was surrounded by three thousand five hundred acres (1,400 ha) of land that had been cleared for crops.
John Finley, a compatriot of Daniel Boone, lived and traded in Eskippakithiki in 1752. Finley said that he was attacked on January 28, 1753, on the Warrior's Path 25 miles (40 km) south of Eskippakithiki, near the head of Station Camp Creek in Estill County, by a party of 50 Christian Conewago and Ottawa Indians, a white French Canadian, and the renegade Dutchman Philip Philips (all from the St. Lawrence River), who were on a scalp-hunting expedition against southern tribes. Major William Trent wrote a letter about the attack on Finley, which first mentions "Kentucky":
I have received a letter just now from Mr. Croghan, wherein he acquaints me that fifty-odd Ottawas, Conewagos, one Dutchman, and one of the Six Nations, that was their captain, met with some of our people at a place called Kentucky on this side Allegheny river, about one hundred and fifty miles (240 km) from the Lower Shawnee Town. They took eight prisoners, five belonging to Mr. Croghan and me, the others to Lowry; they took three or four hundred pounds worth of goods from us; one of them made his escape after he had been a prisoner three days. Three of John Findley's men are killed by the Little Pict Town, and no account of himself ... There was one Frenchman in the Company.— Lucien Beckner
Seven Pennsylvanian traders were in Finley's crew, along with a Cherokee slave. The traders shot at the natives, who took them prisoner and brought them to Canada; some were then shipped to France as prisoners of war. Finley fled, and the next European who went to Eskippakithiki found the town burned to the ground.
French colonial period to 1763
Prior to 1763, all of trans-Appalachia, including what was later to be known as Kentucke country (and much else besides), was part of Louisiana, an administrative district of New France. It was the first European claim on North American lands west of the Appalachians and south of the Great Lakes. Two early pass-bys are recorded: Robert de la Salle at the Falls of the Ohio in 1669 (speculatively), and Marquette and Jolliet at the mouth of the Ohio on the Mississippi in 1673.
On September 1, 1671, Thomas Batts (Thomas Batte), Thomas Wood, and Robert Fallam (Robert Hallom) set out on horseback from Appomattox Town, acting under a commission granted to Colonel Abraham Wood to explore the trans-Appalachian waterways. There is much ambiguity about the extent of their travels westward, but they are credited with discovering Wood's River (today the New River), a tributary of the Kanawha River. Some historians believe that their journey reached the basin of the Guyandotte River, or even that of the Tug Fork tributary of the Big Sandy River in extreme eastern Kentucky. On account of Indian impatience, they returned to Fort Henry by October 1. Later, the Kanawha River and, by extension, the New River and their environs would be considered part of the south of Ohio lands known by the Native American name Kentucke.
English colonists Gabriel Arthur and James Needham were sent out by Abraham Wood from Fort Henry (present-day Petersburg, Virginia) on May 17, 1673, with four horses and Cherokee and other Native American slaves, to contact the Tomahittan (possibly the Yuchi). They were traveling to the Cherokee capital of Chota (in present-day Tennessee), on the Hiwassee River, to learn their language. The English hoped to develop strong business ties for the beaver-fur trade and bypass the Occaneechi traders who were middlemen on the Cherokee Trading Path.
Needham got into an argument on the return trip with "Indian John", his Occaneechi guide, which became an armed confrontation resulting in his death. Afterward, Indian John tried to have the Tomahittan kill Arthur, but the chief adopted the Englishman.
For about a year, Arthur (dressed as a Tomahittan in Chota) traveled with the chief and his war parties on revenge raids of Spanish settlements in Florida after ten men were killed and ten captured during a peaceful trading mission several years earlier. When the Tomahittan attacked the Shawnee in the Ohio River valley, Arthur was wounded by an arrow and captured. He was saved from a ritual burning at the stake by a Shawnee who was sympathetic to him; when he learned that Arthur had married a Tomahittan woman ("Hannah Rebecca" Nikitie), the Shawnee cured his wound, gave him his gun and rokahamoney (hominy) to eat, and put him on a trail leading back to his family in Chota. Most historians agree that this road was the Warriors' Path which crossed the Ohio at the mouth of the Scioto River, went south across the Red River branch of the Kentucky River, then up Station Camp Creek and through Ouasiota Pass into the Ouasiota Mountains. In June 1674 (or 1678), the Tomahittan chief escorted Arthur back to his English settlement in Virginia. Arthur's accounts of the land and its tribes provided the first detailed information about Kentucky. He was among the first Englishmen (preceded by Batts and Fallam) to visit present-day West Virginia and cross the Cumberland Gap.
After Arthur and Needham, 65 years elapsed before the next recorded white man set foot in Kentucke. In 1739, Frenchman Charles III Le Moyne, Baron de Longueuil, on a military expedition, discovered Big Bone Lick a few miles east of the Ohio River in extreme northern Kentucke. A few years later, in 1744, Robert Smith, an English fur trader on the Great Miami River, confirmed Le Moyne's find with additional discoveries at the Lick.
In 1750 and 1751, the first surveys of eastern and northern Kentucky were made by English Virginians Dr. Thomas Walker and Christopher Gist. Walker is sometimes credited with being the first white man to pass through the Cumberland Gap. An Ohio Company expedition in October 1750, led by Christopher Gist, traveled from what is today West Virginia through the Pound Gap, north of Walker's route. However, Walker writes in his journal that in 1748 he met Samuel Stalnaker, a Virginia frontiersman living on the Holston River, who traded with the Cherokee in Kentucky through the Cumberland Gap, and from whom he obtained his own knowledge about the gap.
Other known explorers prior to Boone's legendary expeditions in the 1760s were John Findley, trader 1752; James McBride 1754 (historian John Filson's nominee for "Discoverer of Kentucke"); and Elisha Wallen, one of the long hunters in 1762.
Early European settlement
Settlement and resistance in Kentucke country
From the time of the establishment of New France, there were complex and overlapping claims to land south of the Ohio, including that which would become the future state of Kentucky. Among the claimants were France, the British Crown via the Royal charter of Virginia Colony, the Ohio Country Shawnee and allied Algonquin tribes, the northern Iroquois Confederacy, and the Cherokee, Muscogee and allied southern tribes. French claims to Kentucky were lost after its defeat in the French and Indian War and the signing of the Treaty of Paris (1763). The Shawnee, Iroquois and Ohio Country tribes had gained dominion over their Ohio valley hunting grounds by the Treaty of Easton (1758), which also forbade colonial settlement west of the Alleghenies. Kentucky became part of the Indian Reserve of all trans-Appalachian lands acquired by Britain in the Treaty, established by the Royal Proclamation of 1763. The Iroquois claim to much of the state was purchased by the British in the Treaty of Fort Stanwix (1768). The Treaty of Lochaber (1770) and a subsequent erroneous survey establishing Donelson's Indian Line, which ceded Cherokee claims to a large part of northeastern Kentucky, demarcated the boundary between Cherokee lands and lands open to settlement. Virginia's trans-Appalachian lands, already known as Kentucke country, were organized as Botetourt County in 1770 and Fincastle County in 1772. Their administrative reach effectively extended only to Fort Pitt and the Allegheny River basin in southwestern Pennsylvania. Numerous incidents of conflict between settlers and Native Americans in the south of Ohio lands, an expansive area including Kentucke and the Allegheny River basin upstream to southwestern Pennsylvania, eventually resulted in war.
Early Boone expeditions 1767, 1769
The legendary Daniel Boone is the most familiar frontiersman with respect to his traversal of the Cumberland Gap, and his exploration and settlement of Kentucky. Boone obtained his knowledge of the Gap from Walker and Gist in the 1750s. Later, he met up with trader John Finley, who also had direct knowledge of the Gap. Boone's first expedition in 1767 was actually through the more northerly Pound Gap from Virginia, though he failed to reach the rich heartland of Kentucky north and west of the mountains. In 1769, starting from the Powell River valley in North Carolina, and escorted by Finley, he crossed what was then known as Cave Gap in late May and early June. In a few days they reached the area where Finley had traded at Eskippakithiki.
- Boone's fall 1773 expedition
- Clark's spring 1774 expedition
- Harrod's spring 1774 encampment
In spring 1774, James Harrod, with a royal charter from Lord Dunmore, led an expedition to survey land in Kentucke country promised by the British crown to soldiers who served in the French and Indian War. From Fort Redstone, Harrod and 37 men traveled down the Monongahela and Ohio Rivers to the mouth of the Kentucky River, crossing the Salt River into present-day Mercer County. On June 16, 1774, they founded Harrod's Town. The men divided the land; Harrod chose an area about six miles (9.7 km) from the encampment, which he named Boiling Springs. Shawnee attacked a small party of Harrod's men in the Fontainbleau area on July 8, killing two men. The others escaped to the camp, about 3 miles (4.8 km) away. As Harrod's men finished the settlement's first structures, Lord Dunmore summoned them to enlist for Dunmore's war. The camp and Harrod's Town settlement were abandoned.
Dunmore War Saga
John Murray, 4th Earl of Dunmore, and Royal governor of Virginia Colony, received news in May of 1774 that fighting between settlers and Ohio country Shawnee Indians had broken out in the upper Allegheny valley and elsewhere.
The Battle of Point Pleasant (the war's only major battle) was fought on October 10, resulting in a victory for the Virginia militia. The Treaty of Camp Charlotte, signed by the Shawnee chief Cornstalk to end the war, ceded Shawnee claims to land south of the Ohio River (present-day Kentucky and West Virginia) to Virginia; the Shawnee were also required to return all European captives and stop attacking barges traveling on the Ohio River.
Starting in 1775, Kentucky grew rapidly as the first settlements west of the Appalachian Mountains were founded. Settlers migrated primarily from Virginia, North Carolina and Pennsylvania, entering the region via the Cumberland Gap and the Ohio River. During this period, settlers introduced commodity agriculture to the region. Tobacco, corn, and hemp were major cash crops, and hunting became less important. Due to ongoing Native American resistance to white settlement, however, by 1776 there were fewer than 200 settlers in Kentucky.
On March 8, 1775, Harrod led a group of settlers back to Harrod's Town. In that same year, the settlements of Boone's Station, Logan's Fort, Lexington and Kenton's Station were established by Daniel Boone, Benjamin Logan, William McConnell and Simon Kenton respectively. Lexington, Kentucky's second-largest city and former capital, is named for Lexington, Massachusetts (the site of one of the first Revolutionary battles).
Boone, Wilderness Road and Transylvania Colony
In 1774, Boone's knowledge and the stories from his expeditions had earned him a reputation that attracted the attention of Judge Richard Henderson of the Louisa Company. The Shawnee defeat in Lord Dunmore's War emboldened land speculators in North Carolina, who believed that much of present-day Kentucky and Tennessee would soon be under British control. Richard Henderson learned from his friend Daniel Boone that the Cherokee were interested in selling a large part of their land on the trans-Appalachian frontier. In 1775, Henderson and Boone, along with the investors of the Louisa Company, reformed to become the Transylvania Company under North Carolina Patriot Law. Boone was made a "colonel" (land speculator) in the chartered company.
Henderson began negotiations with Cherokee leaders. Between March 14 and 17, 1775, Henderson, Boone, and several associates met at Sycamore Shoals with the Cherokee leaders Attakullakulla, Oconastota, Willanawaw, Doublehead, and Dragging Canoe. The Treaty of Sycamore Shoals authorizing the Transylvania Purchase was not recognized by Dragging Canoe, who tried unsuccessfully to reject Henderson's purchase of tribal lands outside the Donelson line, stating "it is bloody land, and will be dark and difficult to settle", then left the conference. His words turned out to be prophetic: Kentucky is still referred to by the sardonic phrase "dark and bloody land". The rest of the negotiations went fairly smoothly, and the treaty was signed on March 17, 1775. At the conference, the Watauga and Nolichucky settlers negotiated similar purchases of their lands.
Beginning in March of that year, Boone and 35 axmen built the Wilderness Road, a direct overland route that facilitated migration to Kentucky. The 200-mile road the trailblazer built ended at Fort Boonesborough (or Boone's Station) on the Kentucky River, where Colonel Daniel Boone, Colonel Richard Henderson, Captain James Harrod and ten other colonial colonels founded the Colony of Transylvania when they met for the Transylvania Convention on May 23, 1775, to write the "Kentucke Magna Charta".
In 1778, the Treaty of Sycamore Shoals and the Transylvania land purchase were invalidated by the Virginia General Assembly; a portion of the Colony in Tennessee was invalidated by North Carolina in 1783.
During the American Revolutionary War, settlers began pouring into the region. Dragging Canoe responded by leading his warriors into the Cherokee–American wars (1776–1794), especially along the Holston River in present-day Tennessee. The Shawnee north of the Ohio River were also unhappy about the American settlement of Kentucky. Although some bands tried to be neutral, historian Colin G. Calloway notes that most Shawnees fought with the British against the Americans.
Kentucky was part of the western theater of the American Revolutionary War, and several sieges and engagements were fought there. Bryan's Station fort in the settlement of Lexington was built during the first year of the war for defense against the British and their Native American allies. The Battle of Blue Licks, one of the Revolution's last major battles, was an American defeat. Following the 1783 Treaty of Paris ending the Revolutionary War, there were no other major actions by the Cherokee and allied tribes in Kentucky through the end of the Cherokee-American wars. Kentucky's only fort, Fort Nelson, was abandoned in 1784 pursuant to the signing of the Treaty of Paris (1783), which ended the threat of foreign invasion.
By the Treaty of Holston (1791), the Cherokee Nation came under the suzerainty of the United States. The Cherokee–American wars were finally ended by the follow-on Treaty of Tellico Blockhouse in November 1794.
Kentucky County and District of Kentucky
By act of the Virginia Assembly on December 31, 1776, effective 1777, Fincastle County was abolished, and the largest piece, lying west of the Big Sandy River and Tug Fork, became Kentucky County, with its seat at Harrod's Town. There was no mention in the act of the Transylvania claim. Colonel John Bowman had been appointed by Virginia governor Patrick Henry as Kentucky County's military governor. He arrived in the spring of 1775 with two companies of militia totaling 100 men and a charter to establish a civilian government.
Colonel John Bowman was the first commissioned Kentucky Colonel, in 1775. Daniel Boone, Richard Henderson, and others, particularly land speculators, founders of distilleries, lawyers, and other prominent businessmen on the frontier, had previously come to be called colonels. The distinction was often unclear in the early years, because some, like Boone, held the military rank of colonel at some time. In some cases, historians have designated commissioned colonels as patriot colonels to distinguish military officers from land speculators. In Kentucky, military governors of counties held the rank of colonel, a practice later copied by other states that contributed to the iconic phrase "Kentucky Colonel". In 1895, Governor William O'Connell Bradley commissioned the first honorary Kentucky Colonels, though the civil honorific title had long been used since frontier times. It is the highest civic honor bestowed by the State of Kentucky. The most recognizable Kentucky Colonel is arguably Harland Sanders, founder of Kentucky Fried Chicken, who was commissioned an honorary Colonel in 1935 by Governor Ruby Laffoon. Sanders received a second commission in 1949. As of 2020, there have been approximately 350,000 commissioned honorary Kentucky Colonels.
Over the next 15 years, Kentucky County was subdivided into nine counties but continued to be administered as the District of Kentucky until its admission to the Union as the state of Kentucky.
Louisville and the forts of the Ohio
Louisville was founded during the latter stages of the American Revolutionary War by Virginian soldiers under George Rogers Clark, first at Corn Island in 1778, then Fort-on-Shore and Fort Nelson on the mainland. The town was chartered in 1780 and named Louisville in honor of King Louis XVI of France.
Several factors contributed to the desire of Kentuckians to separate from Virginia. Traveling to the Virginia state capital from Kentucky was long and dangerous. The use of local militias against Indian raids required authorization from the governor of Virginia, and Virginia refused to recognize the importance of Mississippi River trade to Kentucky's economy; it forbade trade with the Spanish colony at New Orleans (which controlled the mouth of the Mississippi), trade that was important to Kentucky communities.
Problems increased with rapid population growth in Kentucky, leading Colonel Benjamin Logan to call a constitutional convention in Danville in 1784. Over the next several years, nine more conventions were held. During one, General James Wilkinson unsuccessfully proposed secession from Virginia and the United States to become a Spanish possession.
In 1788, Virginia consented to Kentucky statehood with two enabling acts, the second of which required the Confederation Congress to admit Kentucky into the United States by July 4, 1788. A committee of the whole recommended that Kentucky be admitted, and the United States Congress took up the question of Kentucky statehood on July 3. One day earlier, however, Congress had learned about New Hampshire's ratification of the proposed Constitution (establishing it as the new framework of governance for the United States). Congress considered it "unadvisable" to admit Kentucky "under the Articles of Confederation" but not "under the Constitution", and resolved:
That the said Legislature and the inhabitants of the district aforesaid [Kentucky] be informed, that as the constitution of the United States is now ratified, Congress think it unadviseable [sic] to adopt any further measures for admitting the district of Kentucky into the federal Union as an independent member thereof under the Articles of Confederation and perpetual Union; but that Congress thinking it expedient that the said district be made a separate State and member of the Union as soon after proceedings shall commence under the said constitution as circumstances shall permit, recommend it to the said legislature and to the inhabitants of the said district so to alter their acts and resolutions relative to the premisses [sic] as to render them conformable to the provisions made in the said constitution to the End that no impediment may be in the way of the speedy accomplishment of this important business.
Post-Revolutionary War patriot colonels who had been given land bounties by Virginia, together with chartered-company colonels (land speculators), came together in 1791 to select their fellow colonel, Isaac Shelby, as governor of the prospective state; Shelby held land claims in the Kentucky District dating back to 1775, when he worked as a surveyor for the Transylvania Company. Kentucky's final push for statehood (now under the US Constitution) began with an April 1792 convention, again in Danville. Delegates drafted the first Kentucky Constitution and submitted it to Congress. On June 1, 1792, Kentucky was admitted to the US as its fifteenth state.
Antebellum period (1792–1861)
General Scott and the Kentucky militia
The 1799 constitution
The lands south of the Ohio River and west of the Tennessee River had not been included in the cession of Iroquois lands in the Treaty of Fort Stanwix, 1768, because the Iroquois did not claim that area. Kentucky and Tennessee west of the Tennessee River were recognized by the United States as Chickasaw hunting grounds by the 1786 Treaty of Hopewell. The Chickasaw sold the land to the U.S. in 1818 via the Treaty of Tuscaloosa, signed under questionable circumstances due to bribes paid to the Chickasaw signatories. The Kentucky part of the region is still sometimes known as the Jackson Purchase, for then-General Andrew Jackson, one of the signers of the treaty. The Tennessee portion is now West Tennessee.
The Walker Line, surveyed by Dr. Thomas Walker and his party in 1779, forms the southern boundary of Kentucky with Tennessee, except for the portion bounding the subsequent Jackson Purchase. It was an extension of the original boundary line between the colonies of Virginia and North Carolina westward to the Tennessee River, which was then the western boundary of Kentucky. It was supposed to follow the parallel of latitude 36 degrees and 30 minutes north, but the surveyors erred by not accounting for deflection of the needle (magnetic north is not geographic north), so the terminus on the Tennessee River was 17 miles north of the true parallel.
Kentucky discovered the error in 1803 and attempted to reclaim the sliver of land that included the settlement of Clarksville, then in Tennessee. The states disputed the boundary for many years, until in 1819 Kentucky appointed commissioners to survey and mark the true boundary along the parallel. Tennessee refused to allow settlement north of Kentucky's line until the matter was settled. In 1818, Kentucky had dispatched two surveyors, Robert Alexander and Luke Munsell, to survey the parallel west of the Tennessee River. In 1820, the states appointed a joint commission of the ablest lawyers and judges in each state to settle the dispute. They arrived at a compromise: the Alexander–Munsell survey line, which appeared on early maps as the Munsell Line, would be the boundary west of the Tennessee River to the Mississippi River (i.e., it partitioned the 1818 Jackson Purchase; the Tennessee portion became West Tennessee), and the Walker Line as originally surveyed would be the boundary east of the Tennessee River. In between, the boundary followed the Tennessee River. So today there is a noticeable zigzag in the western portion of the boundary on Kentucky and Tennessee maps.
Land speculation was an important source of income, as the first settlers sold their claims to newcomers for cash and moved further west. Most Kentuckians were farmers who grew most of their own food, using corn to feed hogs and to distill into whiskey. They obtained cash from selling burley tobacco, hemp, horses, and mules; the hemp was spun and woven into bagging and rope for cotton bales. Tobacco was labor-intensive to cultivate. Planters were attracted to Kentucky from Maryland and Virginia, where their land was exhausted from tobacco cultivation. Plantations in the Bluegrass region used slave labor on a smaller scale than the cotton plantations of the Deep South.
Adequate transportation routes were crucial to Kentucky's economic success in the early antebellum period. The rapid growth of stagecoach roads, canals and railroads during the early 19th century drew many Easterners to the state; towns along the Maysville Road from Washington to Lexington grew rapidly to accommodate demand. Surveyors and cartographers such as David H. Burr (1803–1875), geographer for the U.S. House of Representatives during the 1830s and 1840s, prospered in antebellum Kentucky.
Kentuckians used horses for transportation, labor, breeding, and racing. Taxpayers owned 90,000 horses in 1800; eighty-seven percent of all householders owned at least one horse, and two-thirds owned two or more. Thoroughbreds were bred for racing in the Bluegrass region, and Louisville began hosting the Kentucky Derby at Churchill Downs in 1875.
Mules were more economical to keep than horses, and were well-adapted to small farms. Mule-breeding became a Kentucky specialty, with many breeders expanding their operations in Missouri after 1865.
Lexington and the Bluegrass region
Kentucky was mostly rural, but two important cities emerged before the American Civil War: Lexington (the first city settled) and Louisville, which became the largest. Lexington was the center of the Bluegrass region, an agricultural area producing tobacco and hemp. It was also known for the breeding and training of high-quality livestock, including horses. Lexington was the base of many prominent planters, most notably Henry Clay (who led the Whig Party and brokered compromises over slavery). Before the American West was considered to begin west of the Mississippi River, it began at the Appalachian Mountains; with its new Transylvania University, Lexington was the region's cultural center, calling itself the "Athens of the West".
This central part of the state had the highest concentration of enslaved African Americans, whose labor supported the tobacco-plantation economy. Many families migrated to Missouri during the early nineteenth century, bringing their culture, slaves, and crops and establishing an area known as "Little Dixie" on the Missouri River.
Louisville, at the falls of the Ohio River, became Kentucky's largest city. The growth of commerce was facilitated by steamboats on the river, and the city had strong trade ties extending down the Mississippi to New Orleans. It developed a large slave market, from which thousands of slaves from the Upper South were sold "downriver" and transported to the Deep South in the domestic slave trade. In addition to river access, railroads helped solidify Louisville's place as Kentucky's commercial center and strengthened east and west trade ties (including the Great Lakes region).
In 1848, Louisville began to attract Irish and German Catholic immigrants. The Irish were fleeing the Great Famine, and German immigrants arrived after the German revolutions of 1848–1849. The Germans created a beer industry in the city, and both communities helped to increase industrialization. Both cities became Democratic strongholds after the Whig Party dissolved.
1855 Louisville riots
Nativists made the Irish and Germans unwelcome. On August 6, 1855, Protestant activists organized into the Know Nothing movement attacked German and Irish Catholic neighborhoods, assaulting individuals, burning, and looting. The riots sprang from the bitter rivalry between the Democrats and the nativist Know Nothing party. Multiple street fights raged, leaving 22 to over 100 people dead, scores injured, and much property destroyed by fire. Five people were later indicted; none were convicted, however, and victims were never compensated.
Religion and the Great Awakening
The Second Great Awakening, based in part on the Kentucky frontier, rapidly increased the number of church members. Revivals and missionaries converted many people to the Baptist, Methodist, Presbyterian and Christian churches.
As part of what is now known as the "Western Revival", thousands of people led by Presbyterian preacher Barton W. Stone came to the Cane Ridge Meeting House in Bourbon County in August 1801. Preaching, singing and conversion went on for a week, until humans and horses ran out of food.
The Baptists flourished in Kentucky, and many had migrated as a body from Virginia. The Upper Spottsylvania Baptist congregation left Virginia and reached central Kentucky in September 1781 as a group of 500 to 600 people known as "The Travelling Church". Some were slaveholders; among the slaves was Peter Durrett, who helped William Ellis guide the party. Owned by Joseph Craig, Durrett was a Baptist preacher and part of Craig's congregation in 1784.
He founded the First African Baptist Church in Lexington c. 1790: the oldest Black Baptist congregation in Kentucky and the third-oldest in the United States. His successor, London Ferrill, led the church for decades and was so popular in Lexington that his funeral was said to be second in size only to that of Henry Clay. By 1850, the First African Baptist Church was the largest church in Kentucky.
Many abolitionist Virginians moved to Kentucky, making the new state a battleground over slavery. Churches and friends divided over the morality of the issue; in Kentucky, abolitionism was marginalized politically and geographically. Abolitionist Baptists established their own churches in Kentucky around antislavery principles. They saw their cause as allied with republican ideals of virtue, but pro-slavery Baptists used the boundary between church and state to categorize slavery as a civil matter; acceptance of slavery became Kentucky's dominant Baptist belief. Abolitionist leadership declined through death and emigration, and Baptists in the Upper South solidified their position.
Christian Church (Disciples of Christ)
During the 1830s, Barton W. Stone (1772–1844) founded the Christian Church (Disciples of Christ) when his followers joined those of Alexander Campbell. Stone broke with his Presbyterian background to form the new sect, which rejected Calvinism, required weekly communion and adult baptism, accepted the Bible as the source of truth, and sought to restore the values of primitive Christianity.
New Madrid earthquakes (1811–1812)
In late 1811 and early 1812, western Kentucky was heavily damaged by what became known as the New Madrid earthquakes, among the largest earthquakes ever recorded in the contiguous United States. The earthquakes caused the Mississippi River to change course.
War of 1812
Mexican–American War
Kentucky's response to the Mexican–American War was mixed. Some citizens enthusiastically supported the war, at least in part because they believed that victory would bring new land for the expansion of slavery. Others, particularly Whig supporters of Henry Clay, opposed the war and refused to participate. Young people sought self-identity and a link with heroic ancestors, however, and the state easily met its quota of 2,500 volunteers in 1846 and 1847. Although the war's popularity declined with time, a majority supported it throughout.
Kentucky units won praise at the Battles of Monterrey and Buena Vista. Although many soldiers became ill, few died; Kentucky units returned home in triumph. The war weakened the Whig Party, and the Democratic Party became dominant in the state during this period. The party was particularly powerful in the Bluegrass region and other areas with plantations and horse-breeding farms, where planters held the state's greatest number of slaves.
1848 mass slave escape
Edward James "Patrick" Doyle was an Irishman who sought to profit from slavery in Kentucky. Before 1848, Doyle had been arrested in Louisville and charged with attempting to sell free blacks into slavery. Failing in this effort, Doyle tried to make money by offering his services to runaway slaves; requiring payment from each slave, he agreed to guide runaways to freedom. In 1848, he attempted to lead a group of 75 African-American runaway slaves to Ohio. Although the incident has been categorized by some as "the largest single slave uprising in Kentucky history", it was actually an attempted mass escape. The armed runaway slaves went from Fayette County to Bracken County before being confronted by General Lucius Desha of Harrison County and his 100 white male followers. After an exchange of gunfire, 40 slaves ran into the woods and were never caught. The others, including Doyle, were captured and jailed. Doyle was sentenced to twenty years of hard labor in the state penitentiary by the Fayette Circuit Court, and the captured slaves were returned to their owners.
The 1850 constitution
Civil War (1861–1865)
By 1860, Kentucky's population had reached 1,115,684; twenty-five percent were slaves, concentrated in the Bluegrass region, Louisville and Lexington. Louisville, which had been a major slave market, shipped many slaves downriver to the Deep South and New Orleans for sale or delivery. Kentucky traded with the eastern and western US as trade routes shifted from the rivers to the railroads and the Great Lakes. Many Kentucky residents had migrated south to Tennessee and west to Missouri, creating family ties with both states. The state voted against secession and remained loyal to the Union, although individual opinions were divided.
Kentucky was a border state during the American Civil War, and the state was neutral until a legislature with strong Union sympathies took office on August 5, 1861; most residents also favored the Union. On September 4, 1861, Confederate General Leonidas Polk violated Kentucky neutrality by invading Columbus. As a result of the Confederate invasion, Union general Ulysses S. Grant entered Paducah. The Kentucky state legislature, angered by the Confederate invasion, ordered the Union flag raised over the state capitol in Frankfort on September 7. In November 1861, Southern sympathizers unsuccessfully tried to establish an alternative state government with the goal of secession.
On August 13, 1862, Confederate general Edmund Kirby Smith's Army of Tennessee invaded Kentucky; Confederate general Braxton Bragg's Army of Mississippi entered the state on August 28. This began the Kentucky Campaign, also known as the Confederate Heartland Offensive. Although the Confederates won the bloody Battle of Perryville, Bragg retreated because he was in an exposed position; Kentucky remained in Union hands for the remainder of the war.
Reconstruction to World War I (1865–1914)
Although Kentucky was a slave state, it had not seceded and was not subject to military occupation during the Reconstruction era. It was subject to Freedmen's Bureau oversight of new labor contracts and aid to former slaves and their families. A congressional investigation was begun because of issues raised about the propriety of elected officials. During the election of 1865, ratification of the Thirteenth Amendment was a major issue. Although Kentucky opposed the Thirteenth, Fourteenth, and Fifteenth Amendments, the state was obligated to implement them when they were ratified. The Democrats prevailed in the elections.
After the war, violence continued in the state. A number of chapters of the Ku Klux Klan formed as insurgent veterans sought to establish white supremacy by intimidation and violence against freedmen and free Blacks. Although the Klan was suppressed by the federal government during the early 1870s, the Frankfort Weekly Commonwealth reported 115 incidents of shooting, lynching, and whipping of blacks by whites between 1867 and 1871. Historian George C. Wright documented at least 93 lynching deaths of blacks by whites in Kentucky during this period, and thought it more likely that at least 117 had taken place (one-third of the state's total number of lynchings).
Northeastern Kentucky had relatively few African Americans, but its whites attempted to drive them out. In 1866, whites in the Gallatin County seat of Warsaw incited a race riot. Over a period of more than ten days in August, a band of more than 500 whites attacked an estimated 200 Blacks and drove them across the Ohio River. In August 1867, whites attacked and drove off Blacks in Kenton, Boone, and Grant Counties. Some fled to Covington, seeking shelter at the city's Freedmen's Bureau offices. During the early 1870s, US Marshal Willis Russell of Owen County fought a KKK band which was terrorizing Black people and their white allies in Franklin, Henry, and Owen Counties until he was assassinated in 1875. Similar attacks were made on African Americans in western Kentucky, particularly Logan County and its county seat, Russellville. Whites were especially hostile to Black Civil War veterans.
Racial violence increased after the Reconstruction period, peaking in the 1890s and extending into the early 20th century. Two-thirds of the state's lynchings of blacks occurred at this time, marked by the mass hanging of four black men in Russellville in 1908 and a white mob's lynching of all seven members of the David Walker family near Hickman (in Fulton County) in October of that year. Violence near Reelfoot Lake and the Black Patch Tobacco Wars also received national newspaper coverage.
Hatfield-McCoy and other feuds
Kentucky became internationally known in the late 19th century for its violent feuds, especially in the eastern Appalachian mountain communities. Men in extended clans were pitted against each other for decades, using assassination and arson as weapons, along with ambushes, gunfights, and prearranged shootouts. Some of the feuds were continuations of violent local Civil War episodes. Journalists often wrote about the violence in stereotypical Appalachian terms, interpreting the feuds as the inevitable product of ignorance, poverty, isolation and (perhaps) inbreeding. The leading participants were typically well-to-do local elites with networks of clients who fought at the local level for political power.
The Hatfield–McCoy feud involved two rural American families of the West Virginia–Kentucky border area along the Tug Fork of the Big Sandy River in the years 1878–1890. Some consider the 1865 shooting of Asa McCoy as a "traitor" for serving with the Union to have been a precursor event. After a lapse of 13 years, the feud flared over the disputed ownership of a pig that swam across the Tug Fork in 1878, and it escalated to shootouts, assassinations, massacres, and a hanging. Approximately 60 Hatfield and McCoy family members, associates, neighbors, law enforcement officers, and others were killed or injured. Eight Hatfields went to prison for murder and other crimes. The feud ended with the hanging of Ellison Mounts, a Hatfield, in February 1890 after he was sentenced to death.
Gilded Age (1870s to 1900)
During the Gilded Age, the women's suffrage movement took hold in Kentucky. Laura Clay, daughter of noted abolitionist Cassius Clay, was the most prominent leader. A prohibition movement also began, which was challenged by distillers (based in the Bluegrass) and saloon-keepers (based in the cities).
Kentucky's hemp industry declined as manila became the world's primary source of rope fiber. This led to an increase in tobacco production, already the state's largest cash crop.
Louisville was the first US city to use a secret ballot. The ballot law, introduced by A. M. Wallace of Louisville, was enacted on February 24, 1888. The act applied only to the city, because the state constitution required voice voting in state elections. The mayor printed the ballots, and candidates had to be nominated by 50 or more voters to have their name placed on the ballot. A blanket ballot was used, with candidates listed alphabetically by surname without political-party designations.
Assassination of Governor Goebel
From 1860 to 1900, German immigrants settled in northern Kentucky cities (particularly Louisville). The best-known late-19th-century ethnic-German leader was William Goebel (1856–1900). From his base in Covington, Goebel became a state senator in 1887, fought the railroads, and took control of the state Democratic Party in the mid-1890s. His 1895 election law removed vote-counting from local officials, giving it to state officials controlled by the (Democratic) Kentucky General Assembly.
Goebel was the Democratic candidate for governor in 1899; the narrow election of Republican William S. Taylor was unexpected, and Goebel contested the result. The Kentucky Senate formed a committee of inquiry which was packed with Democratic members. As it became apparent to Taylor's supporters that the committee would decide in favor of Goebel, they raised an armed force. On January 19, 1900, more than 1,500 armed civilians took possession of the Capitol. For over two weeks, Kentucky slid towards civil war; the sitting governor, Taylor, declared martial law and activated the Kentucky militia. On January 30, 1900, Goebel was shot by a sniper as he approached the Capitol. Mortally wounded, Goebel was sworn in as governor the next day and died three days later.
For nearly four months after Goebel's death, Kentucky had two chief executives: Taylor (who insisted that he was the governor) and Democrat J. C. W. Beckham, Goebel's lieutenant governor, who requested federal aid to determine Kentucky's governor. On May 26, 1900, the Supreme Court of the United States upheld the committee's ruling that Goebel was Kentucky's governor and Beckham his successor. After the court's decision, Taylor fled to Indiana. He was indicted as a conspirator in Goebel's assassination; attempts to extradite him failed, and he remained in Indiana until his death.
World wars and interwar period (1914–1945)
Although violence against blacks declined in the early 20th century, it continued, particularly in rural areas, which also experienced other social disruption. African Americans remained second-class citizens in the state, and many left for better-paying jobs and education in Midwestern manufacturing and industrial cities as part of the Great Migration. Rural whites also moved to industrial cities such as Pittsburgh, Chicago and Detroit.
World War I and the 1920s
Like the rest of the country, Kentucky experienced high inflation during the war years. Infrastructure was created, and the state built many roads to accommodate the increasing popularity of the automobile. The war also led to the clear-cutting of thousands of acres of Kentucky timber. The tobacco and whiskey industries had boom years during the 1910s, but Prohibition, which began in 1920 under the Eighteenth Amendment, seriously harmed the state's economy. German citizens had established Kentucky's beer industry; a bourbon-based liquor industry already existed, and vineyards had been established during the 18th century in Middle Tennessee. Prohibition resulted in resistance and widespread bootlegging, which continued into mid-century. Eastern Kentucky rural and mountain residents made their own liquor in moonshine stills, selling some across the state.
During the 1920s, progressives attacked gambling. The anti-gambling crusade sprang from religious opposition to machine politics led by Helm Bruce and the Louisville Churchmen's Federation. The reformers had their greatest support in rural Kentucky from chapters of the revived Ku Klux Klan and fundamentalist Protestant clergymen. In its revival after 1915, the KKK supported general social issues (such as gambling prohibition) as it promoted itself as a fraternal organization concerned with public welfare.
Congressman Alben W. Barkley became the spokesman of the anti-gambling group (and nearly secured the 1923 Democratic gubernatorial nomination), and crusaded against powerful eastern Kentucky mining interests. In 1926, Barkley was elected to the United States Senate. He became the Senate Democratic leader in 1937, and was elected Vice President on the 1948 ticket with incumbent president Harry S. Truman.
In 1927, former governor J. C. W. Beckham won the Democratic Party's gubernatorial nomination as the anti-gambling candidate. Urban Democrats deserted Beckham, however, and Republican Flem Sampson was elected. Beckham's defeat ended Kentucky's progressive movement.
The Great Depression
Like the rest of the country and much of the world, Kentucky experienced widespread unemployment and little economic growth during the Great Depression. Workers in Harlan County fought coal-mine owners to organize unions in the Harlan County War; unions were eventually established, and working conditions improved.
President Franklin D. Roosevelt's New Deal programs resulted in the construction and improvement of the state's infrastructure: rural roads, telephone lines, and rural electrification with the Kentucky Dam and its hydroelectric power plant in western Kentucky. Flood-control projects were built on the Cumberland and Mississippi Rivers, improving the navigability of both.
The 1938 Democratic Senate primary was a showdown between Barkley (liberal spokesman for the New Deal) and conservative governor Happy Chandler. Although Chandler was a gifted orator, Franklin D. Roosevelt's endorsement, following federal investment in the state, helped Barkley win the primary with 56 percent of the vote. Farmers, labor unions, and cities contributed to Barkley's victory, affirming the New Deal's popularity in Kentucky. In 1939, Chandler was appointed to the state's other Senate seat after the death of Senator M. M. Logan.
In January 1937, the Ohio River rose to flood stage for three months. The flood led to river fires when oil tanks in Cincinnati were destroyed. One-third of Kenton and Campbell Counties in Kentucky was submerged, and 70 percent of Louisville was underwater for over a week. Paducah, Owensboro, and other Ohio River cities were devastated. Nationwide damage from the flood totaled $20 million in 1937 dollars. The federal and state governments made extensive flood-prevention efforts in the Purchase, including a flood wall in Paducah.
World War II
World War II stimulated Kentucky industry, and agriculture declined in relative importance. Fort Knox was expanded with the arrival of thousands of new recruits; an ordnance plant was built in Louisville, and the city became the world's largest producer of artificial rubber. Shipyards in Jeffersonville and elsewhere attracted industrial workers to skilled jobs. Louisville's Ford plant produced almost 100,000 Jeeps during the war. The war led to a greater demand for higher education, since technical skills were in demand. Rose Will Monroe, one of the models for Rosie the Riveter, was a native of Pulaski County.
Kentuckians in the war
Husband Kimmel of Henderson County commanded the Pacific Fleet. Sixty-six men from Harrodsburg were prisoners on the Bataan Death March. Edgar Erskine Hume of Frankfort was the military governor of Rome after its capture by the Allies. Kentucky native Franklin Sousley was one of the men in the photograph of the raising of the flag on Iwo Jima. As a prisoner of war, Harrodsburg resident John Sadler witnessed the atomic bombing of Nagasaki. Seven Kentuckians received the Medal of Honor; 7,917 Kentuckians died during the war, and 306,364 served.
Federal construction of the Interstate Highway System helped connect remote areas of Kentucky. Democrat Lawrence W. Wetherby was governor from 1950 to 1955; he was considered progressive, solid, and unspectacular. As lieutenant governor under Earle Clements, Wetherby succeeded Clements when Clements was elected U.S. Senator in 1950, and he won a full term as governor in 1951. Wetherby emphasized road improvements, increasing tourism and other economic development. He was one of the few Southern governors to implement desegregation in public schools after the Supreme Court's decision in Brown v. Board of Education (1954), which ruled that segregated schools were unconstitutional. Bert T. Combs, the administration-backed candidate in the 1955 Democratic gubernatorial primary, was defeated by Happy Chandler.
Agriculture was replaced in many areas by industry, which stimulated urbanization. By 1970, Kentucky had more urban than rural residents. Tobacco production remained an important part of the state's economy, bolstered by a New Deal legacy which gave a financial advantage to holders of tobacco allotments.
Thirteen percent of Kentucky's population moved out of state during the 1950s, largely for economic reasons. Dwight Yoakam's song, "Readin', Rightin', Route 23", cites local wisdom about avoiding work in the coal mines; U.S. Route 23 runs north through Columbus and Toledo, Ohio, to Michigan's automotive centers.
African Americans in Kentucky pressed for the civil rights guaranteed by the US Constitution, rights they felt they had earned with their service during World War II. During the 1960s, as a result of successful local sit-ins during the civil rights movement, the Woolworth store in Lexington ended racial segregation at its lunch counter and in its restrooms.
Democratic Governor Ned Breathitt took pride in his civil-rights leadership after being elected governor in 1963. In his gubernatorial campaign against Republican Louie Broady Nunn, civil rights and racial desegregation were major campaign issues; Nunn attacked the Fair Services Executive Order, signed by Bertram Thomas Combs and three other governors after conferring with President John F. Kennedy. The executive order desegregated public accommodations in Kentucky and required state contracts to be free of discrimination. On television, Nunn promised Kentuckians that his "first act [would] be to abolish" the order; The New Republic reported that he ran "the first outright segregationist campaign in Kentucky." Breathitt, who said that he would support a bill to eliminate legal discrimination, won the election by 13,000 votes.
After Breathitt was elected governor, the state civil-rights bill was introduced to the General Assembly in 1964. Buried in committee, it was not voted on. "There was a great deal of racial prejudice existing at that time," said Julian Carroll. A rally in support of the bill attracted 10,000 Kentuckians and leaders and allies such as Martin Luther King Jr., Ralph Abernathy, Jackie Robinson, and Peter, Paul and Mary. At the urging of President Lyndon B. Johnson, Breathitt led the National Governors Association in supporting the Civil Rights Act of 1964. Johnson later appointed him to the "To Secure These Rights" commission, charged with implementing the act.
In January 1966, Breathitt signed "the most comprehensive civil rights act ever passed by any state south of the Ohio River in the history of this nation." Martin Luther King Jr. concurred with Breathitt's assessment of Kentucky's sweeping legislation, calling it "the strongest and most important comprehensive civil-rights bill passed by a Southern state." Kentucky's 1966 Civil Rights Act ended racial discrimination in bathrooms, restaurants, swimming pools, and other public places throughout the state. Racial discrimination was prohibited in employment, and Kentucky cities were empowered to enact local laws against housing discrimination. The legislature repealed all "dead-letter" segregation laws (such as the 62-year-old Day Law) on the recommendation of Rep. Jesse Warders, a Louisville Republican and the only Black member of the General Assembly. The act gave the Kentucky Commission on Human Rights enforcement power to resolve discrimination complaints. Breathitt has said that the civil-rights legislation would have passed without him, and thought his opposition to strip mining had more to do with the decline of his political career than his support for civil rights.
1968 Louisville riots
Two months after Martin Luther King Jr. was assassinated, riots occurred in Louisville's West End. On May 27, a protest against police brutality at 28th and Greenwood Streets turned violent after city police arrived with guns drawn and protesters reacted. Governor Louie B. Nunn called out the National Guard to suppress the violence. Four hundred seventy-two people were arrested, damage totaled $200,000, and African Americans James Groves Jr. (age 14) and Washington Browder (age 19) were killed. Browder was shot dead by a business owner; Groves was shot in the back after allegedly participating in looting.
Late 20th century to present
Martha Layne Collins was Kentucky's first woman governor from 1983 to 1987, and co-chaired the 1984 Democratic National Convention. A former schoolteacher, Collins had risen up the state's Democratic ranks and was elected lieutenant governor in 1979; in 1983, she defeated Jim Bunning for the governorship. Throughout her public life, Collins emphasized education and economic development; a feminist, she viewed all issues as "women's issues." Collins was proud of acquiring a Toyota plant for Georgetown, which brought a substantial number of jobs to the state.
In June 1989, federal prosecutors announced that 70 men, most from Marion County and some from adjacent Nelson and Washington Counties, had been arrested for organizing a marijuana-trafficking ring which stretched across the Midwest. The conspirators called themselves the "Cornbread Mafia".
Wallace G. Wilkinson signed the Kentucky Education Reform Act (KERA) in 1990, overhauling Kentucky's public-education system. The Kentucky legislature passed, and voters ratified, a constitutional amendment allowing the state's governor to serve two consecutive terms. Paul E. Patton, a Democrat, was the first governor eligible to succeed himself; winning a close race in 1995, Patton benefited from economic prosperity, and most of his initiatives and priorities were successful. After winning reelection by a large margin in 1999, however, Patton suffered from the state's economic problems and lost popularity from the exposure of an extramarital affair. Near the end of his second term, Patton was accused of abusing patronage and criticized for pardoning four former supporters who had been convicted of violating the state's campaign-finance laws. Patton's successor, Republican Ernie Fletcher, was governor from 2003 to 2007.
In 2000, Kentucky ranked 49th of the 50 U.S. states in the percentage of women in state or national political office. The state's politics have favored "old boys", with entrenched political elites, incumbency, and long-established political networks.
Democrat Steve Beshear was elected governor in 2007 and reelected in 2011. In 2015, Beshear was succeeded by Republican Matt Bevin. Bevin lost in 2019 to his predecessor's son and former state attorney general, Andy Beshear.
Kentucky was the first state in the U.S. to adopt Common Core, after the General Assembly passed legislation in April 2009 under Governor Steve Beshear which laid the foundation for the new national standards. In fall 2010, Kentucky's board of education voted to adopt the Common Core verbatim. As the first state to implement Common Core, Kentucky received $17.5 million from the Gates Foundation.
Affordable Care Act
Kentucky implemented Obamacare, expanding Medicaid and launching Kynect.com, in late 2013. "Kentucky is the only Southern state both expanding Medicaid and operating a state-based exchange," Governor Steve Beshear wrote in a New York Times op-ed outlining his case for the implementation of Obamacare in Kentucky. "It's probably the most important decision I will get to make as governor because of the long-term impact it will have," said Beshear.
On April 19, 2013, Kentucky legalized hemp when Governor Steve Beshear declined to veto Senate Bill 50; Beshear had been one of the last obstacles blocking SB50 from becoming law. Under federal law, hemp had been a Schedule I controlled substance like heroin (although hemp typically has 0.3 percent THC, compared to the 3 to 22 percent usually found in marijuana). Kentucky's pilot hemp-research projects were exempted from the Schedule I designation when the Agricultural Act of 2014 was passed. The state believes that the production of industrial hemp can benefit its economy.
- Timeline of Kentucky history
- Timeline of Lexington, Kentucky
- Timeline of Louisville, Kentucky
- Outline of Kentucky
- History of Louisville, Kentucky
- History of the Southern United States
- List of Kentucky women in the civil rights era
- History of African Americans in Kentucky
- History of the French in Louisville
- Ohio River#History
- Kentucke's Frontiers by Craig Thompson Friend
- History of education in Kentucky
- Dumenil, Lynn, ed. (2012). "Cumberland Gap". The Oxford Encyclopedia of American Social History. Oxford University Press. p. 241. ISBN 978-0-1997-4336-0. Retrieved October 15, 2014.
- Murphree, Daniel S., ed. (2012). "Kentucky". Native America: A State-by-State Historical Encyclopedia. Vol. I: Alabama – Louisiana. Greenwood. p. 436. ISBN 978-0-313-38126-3. Retrieved October 15, 2014.
- "State of Kentucky Genealogy". Genealogy.com. Retrieved June 1, 2011.
- Pollack, David; Stottman, M. Jay (August 2005). Archaeological Investigation of the State Monument, Frankfort, Kentucky (PDF) (Report). Vol. KAS Report No. 104. Kentucky Archaeological Survey. Archived from the original (PDF) on April 13, 2015.
- Lewis, R. Barry, ed. (1996). Kentucky Archaeology. Lexington: University Press of Kentucky. p. 21. ISBN 978-0813119076.
- Tankersley, Kenneth B.; Waters, Michael R.; Stafford Jr., Thomas W. (July 2009). "Clovis and the American Mastodon at Big Bone Lick, Kentucky" (PDF). American Antiquity. 74 (3): 558–567. doi:10.1017/S0002731600048757. JSTOR 20622443. S2CID 160407384. Retrieved May 29, 2015.
- Webb, W. S.; Funkhouser, W. D. (October–December 1929). "The so-Called "Hominy-Holes" of Kentucky". American Anthropologist. New Series. 31 (4): 701–709. doi:10.1525/aa.1929.31.4.02a00090. JSTOR 661179.
- Webb, William S. (February 17, 2013). "Indian Knoll". Kentucky Archaeological Survey. Kentucky Heritage Council. Archived from the original on June 2, 2015.
- Harrison, Lowell H.; Klotter, James C. (1997). A New History of Kentucky. Lexington: University Press of Kentucky. pp. 7–8. Retrieved May 30, 2015.
- Lammlein, Dorothy; Overstreet, Joseph S.; Dott, Linda; et al., eds. (1996). History & Families Oldham County, Kentucky: The First Century, 1824–1924. La Grange, Kentucky: Oldham County Historical Society. p. 8.
- Harrison & Klotter 1997, p. 8.
- "Boone County: A Historic Overview". Boone County Kentucky. Retrieved October 27, 2015.
- "The Yuchi Indians". Carolina – The Native Americans. J.D. Lewis. Retrieved October 27, 2015.
- Swanton, John R. (2007). The Indian Tribes of North America. Bulletin #145 (Genealogical Publishing reprint ed.). Smithsonian Institution, Bureau of American Ethnology. p. 117. ISBN 978-0-8063-1730-4.
- Hodge, Frederick Webb, ed. (1907). Handbook of American Indians North of Mexico. Bulletin #30. Washington D.C.: Smithsonian Institution, Bureau of American Ethnology.
- Beckner, Lucien (October 1932). "Eskippakithiki, The Last Indian Town in Kentucky". Filson Club History Quarterly. 6 (4).
- Belue, Ted Franklin (March 1, 2003). Hunters of Kentucky: A Narrative History of America's First Far West, 1750–1792. Mechanicsville, Pennsylvania: Stackpole Books. p. 259. ISBN 978-0-8117-4534-5.
- Harrison & Klotter 1997, p. 9.
- "e-WV | Batts and Fallam Expedition". www.wvencyclopedia.org.
- "The Journeys of James Needham and Gabriel Arthur". Cherokee Heritage Documentation Center. Retrieved October 27, 2015.
- Rice, Otis K.; Brown, Stephen W. (1993). West Virginia: A History (Second ed.). Lexington: University Press of Kentucky. p. 13. ISBN 978-0-8131-3766-7.
- Drake, Richard B. (2003). A History of Appalachia (Paperback ed.). Lexington, Kentucky: University Press of Kentucky. ISBN 9780813137933.
- "James Needham and Gabriel Arthur". Carolana.com. J.D. Lewis. Retrieved October 27, 2015.
- Ethridge, Robbie Franklyn (2010). From Chicaza to Chickasaw: The European Invasion. University of North Carolina Press. ISBN 978-0-8078-3435-0.
- Arthur, T.S.; Carpenter, W.H. (1869). The History of Kentucky From Its Earliest Settlement. Philadelphia, Pennsylvania: Claxton, Remsen & Haffelfinger. p. 21.
- The second charter in 1609 granted "lande, throughoute, from sea to sea, west and northwest;...". France ceded claims west of the Mississippi to Spain in the secret Treaty of Fontainebleau in 1762, and Spanish control of the Mississippi preempted the Virginia Colony's nominal claims west of it.
- Lowell H. Harrison and James C. Klotter, A New History of Kentucky (1997) pp 19-20
- At that time, the western extent of Pennsylvania was not settled; the western boundary wouldn't be determined until 1784 as part of states' trans-Appalachian land cessions to the United States. The area was disputed by Virginia and Pennsylvania.
- Skinner, Constance Lindsey (1919). Pioneers of the Old Southwest: a Chronicle of the Dark and Bloody Ground. New Haven, Connecticut: Yale University Press.
- Kleber, John E., ed. (1992). "Harrod, James". The Kentucky Encyclopedia. Lexington: University Press of Kentucky. pp. 413–414. ISBN 978-0-8131-1772-0.
- "Old Fort Harrod State Park". Kentucky Department of Parks. Archived from the original on August 28, 2007. Retrieved July 19, 2007.
- Harrison, Douglas C. (2011). The Clarks of Kentucky. iUniverse. p. 4. ISBN 978-1-4620-5859-4. Retrieved June 21, 2015.
- Hurt, R.D. (2002). The Indian Frontier, 1763–1846. Albuquerque: University of New Mexico Press. p. 15. ISBN 978-0-8263-1966-1.
- Williams, Samuel Cole (1919). "Henderson and Company's Purchase Within the Limits of Tennessee". Tennessee Historical Magazine. Nashville, Tennessee: Tennessee Historical Society. 5 (1): 5–23.
- Calloway, Colin G. (Winter 1992). ""We Have Always Been the Frontier": The American Revolution in Shawnee Country". American Indian Quarterly. 16 (1): 39–52. doi:10.2307/1185604. JSTOR 1185604.
- Aubrey, E. Lynn; Burton, Sheila Mason; Crofts, Joyce Neel; et al. (February 2003). "Constitutional Background" (PDF). Kentucky Government: Informational Bulletin No. 137. Frankfort, Kentucky: Kentucky Legislative Research Commission. pp. 11–12.
- Kesavan, Vasan (December 1, 2002). "When Did the Articles of Confederation Cease to Be Law". Notre Dame Law Review. 78 (1): 70–71. Retrieved October 31, 2015.
- Fred S. Rolater. "Treaties". Tennessee Encyclopedia. Retrieved September 5, 2021.
- "Surveyors Error In Drawing 'Walker Line' Kept Tennessee, Kentucky At Odds For Many Years". www.tngenweb.org.
- "Walker'". sites.rootsweb.com.
- Eslinger, Ellen (Winter 2009). "Farming on the Kentucky Frontier". Register of the Kentucky Historical Society. 107 (1): 3–32. JSTOR 23387135.
- Ottesen, Ann I. (1985). "A Reconstruction of the Activities and Outbuildings at Farmington, an Early Nineteenth-Century Hemp Farm". Filson Club History Quarterly. 59 (4): 395–425.
- Axton, W. F. (2009). Tobacco and Kentucky. Lexington: University Press of Kentucky. p. 32. ISBN 978-0-8131-9340-3.
- Barnett, Todd H. (1999). "Virginians Moving West: The Early Evolution of Slavery in the Bluegrass". Filson Club History Quarterly. 73 (3): 221–248.
- Raitz, Karl; O'Malley, Nancy (2012). Kentucky's Frontier Highway: Historical Landscapes along the Maysville Road. Lexington: University Press of Kentucky. ISBN 978-0-8131-3664-6.
- Burr, David H. (1839). "Map of Kentucky and Tennessee". World Digital Library. London: John Arrowsmith. Retrieved July 1, 2013.
- Soltow, Lee (July 1981). "Horse Owners in Kentucky in 1800". Register of the Kentucky Historical Society. 79 (3): 203–210. JSTOR 23379469.
- Hollingsworth, Kent (2009). The Kentucky Thoroughbred. Lexington: University Press of Kentucky. p. 12. ISBN 978-0-8131-9189-8.
- Hillenbrand, Laura (May–June 1999). "The Derby". American Heritage. 50 (3): 98–107.
- Sawers, Larry (Winter 2004). "The Mule, the South, and Economic Progress". Social Science History. 28 (4): 667–690. doi:10.1215/01455532-28-4-667. JSTOR 40267861.
- "Lexington, Kentucky: The Athens of the West". National Park Service. Archived from the original on May 15, 2021. Retrieved September 5, 2021.
- Niels H. Sonne, Liberal Kentucky, 1780-1828 (1939) online.
- James C. Klotter, and Daniel Bruce Rowland, eds Bluegrass Renaissance: The History and Culture of Central Kentucky, 1792-1852 (University Press of Kentucky, 2012).
- Aron, Stephen (2012). "Putting Kentucky in its Place". In Klotter, James C.; Rowland, Daniel (eds.). Bluegrass Renaissance: The History and Culture of Central Kentucky, 1792–1852. Lexington: University Press of Kentucky. p. 46. ISBN 978-0-8131-3607-3.
- Bates, Alan L. (2001). "Steamboats". In Kleber, John E. (ed.). The Encyclopedia of Louisville. Lexington: University Press of Kentucky. pp. 849–851.
- O'Brien, Mary Laurence Bickett (2001). "Slavery in Louisville, 1820–1860". In Kleber, John E. (ed.). The Encyclopedia of Louisville. Lexington: University Press of Kentucky. pp. 825–826.
- Castner, Charles B. (2001). "Railroads". In Kleber, John E. (ed.). The Encyclopedia of Louisville. Lexington: University Press of Kentucky. pp. 744–746.
- Yater, George H. (2001). "Bloody Monday". In Kleber, John E. (ed.). The Encyclopedia of Louisville. Lexington: University Press of Kentucky. p. 97. ISBN 978-0-8131-4974-5.
- Conkin, Paul K. (1990). Cane Ridge: America's Pentecost. Madison: University of Wisconsin Press. ISBN 978-0-299-12720-6.
- Ranck, George W. (1910). "The Travelling Church": An Account of the Baptist Exodus from Virginia to Kentucky in 1781 under the Leadership of Rev. Lewis Craig and Capt. William Ellis. Louisville, Kentucky: Mrs. George W. Ranck (self-published). p. 22 (and footnote). Retrieved January 16, 2017.
- Nutter, H. E. (1940). A Brief History of the First Baptist Church (Black) Lexington, Kentucky. pp. 9–15. Retrieved August 22, 2010.
- "First African Baptist Church". Lexington, Kentucky: The Athens of the West. National Park Service. Retrieved August 21, 2010.
- Najar, Monica (Summer 2005). ""Meddling with Emancipation": Baptists, Authority, and the Rift over Slavery in the Upper South". Journal of the Early Republic. 25 (2): 157–186. doi:10.1353/jer.2005.0041. JSTOR 30043307. S2CID 201792209.
- Ardery, Philip (October 1987). "Barton Stone and the Drama of Cane Ridge". Register of the Kentucky Historical Society. 85 (4): 308–321. JSTOR 23380884.
- Johnston, A. C.; Schweig, E. S. (1996). "The Enigma of the New Madrid Earthquakes of 1811–1812". Annual Review of Earth and Planetary Sciences. 24: 339–384. Available on the SAO/NASA Astrophysics Data System (ADS).
- James Russell Harris, "Kentuckians in the War of 1812: A Note on Numbers, Losses, and Sources." Register of the Kentucky Historical Society 82.3 (1984): 277-286.
- James W. Hammack Jr, Kentucky and the Second American Revolution: The War of 1812 (University Press of Kentucky, 2015).
- Eubank, Damon (1998). "A Time for Heroes, a Time for Honor: Kentucky Soldiers in the Mexican War". Filson Club History Quarterly. 72 (2): 174–192.
- Leming, John E. Jr. (June 2000). "The Great Slave Escape of 1848 Ended in Bracken County". The Kentucky Explorer: 25–29.
- Aptheker, Herbert (1983). American Negro Slave Revolts. International Publishers. p. 338. ISBN 978-0-7178-0605-8.
- James M. Prichard, "This Priceless Jewell – Liberty: The Doyle Conspiracy of 1848." Paper Delivered at the 14th Annual Ohio Valley History Conference, October 23, 1998.
- Harrison, Lowell H. (2009). The Civil War in Kentucky. Lexington: University Press of Kentucky. ISBN 978-0-8131-9247-5.
- Harrison, Lowell H. (January 1978). "The Civil War in Kentucky: Some Persistent Questions". The Register of the Kentucky Historical Society. 76 (1): 1–21. JSTOR 23378644.
- Broadwater, Robert P. (2005). The Battle of Perryville, 1862: Culmination of the Failed Kentucky Campaign. McFarland & Company. ISBN 978-0-7864-6080-9.
- McDonough, James Lee (1994). War in Kentucky: From Shiloh to Perryville. Knoxville: University of Tennessee Press. ISBN 978-0-87049-847-3.
- Ross A. Webb, Kentucky in the Reconstruction Era (University Press of Kentucky, 2014).
- Aaron Astor, Rebels on the Border: Civil War, Emancipation, and the Reconstruction of Kentucky and Missouri (LSU Press, 2012).
- Wright, George C. (1996). Racial Violence In Kentucky: Lynchings, Mob Rule, and "Legal Lynchings". LSU Press. p. 42. ISBN 978-0-8071-2073-6.
- Wright (1996), pp. 39–42
- Otterbein, Keith F. (June 2000). "Five Feuds: An Analysis of Homicides in Eastern Kentucky in the Late Nineteenth Century". American Anthropologist. 102 (2): 231–43. doi:10.1525/aa.2000.102.2.231.
- Billings, Dwight B.; Blee, Kathleen M. (Summer 1996). ""Where the Sun Set Crimson and the Moon Rose Red": Writing Appalachia and the Kentucky Mountain Feuds". Southern Cultures. 2 (3/4): 329–352. doi:10.1353/scu.1996.0005. S2CID 145456941.
- Cline, Cecil L. (1998). The Clines and Allied Families of The Tug River Valley. Baltimore, Maryland: Gateway Press.
- "HATFIELD-M'COY FEUD HAS HAD 60 VICTIMS; It Started 48 Years Ago Over a Pig That Swam the Tug River. TOM HATFIELD DIED LATELY Found Tied to a Tree -- Governors of Kentucky and West Virginia Have Been Involved in Mountain War". The New York Times. February 24, 1908 – via NYTimes.com.
- Alther, Lisa. Blood Feud: The Hatfields And The Mccoys: The Epic Story Of Murder And Vengeance. Lyons Press; First Edition (May 22, 2012). ISBN 978-0762779185
- Ludington, Arthur Crosby (1911). "Kentucky". American Ballot Laws, 1888–1910. Albany: University of the State of New York. p. 28.
- Evans, Eldon Cobb (1917). A History of the Australian Ballot System in the United States (PhD thesis). University of Chicago Press. p. 19 – via Wikisource.
- Klotter, James C. (2009) . William Goebel: The Politics of Wrath (paperback ed.). Lexington: University Press of Kentucky. ISBN 978-0-8131-9343-4.
- Wright (1996), Racial Violence, pp. 99-100
- David J. Bettez, Kentucky and the Great War: World War I on the Home Front (2016) excerpt
- Robert Kirschenbaum, Klan and Commonwealth: The Ku Klux Klan and Politics in Kentucky, 1921–1928 (2005) online.
- James K. Libbey, Alben Barkley: A Life in Politics (2016) excerpt
- Sexton, Robert F. (1976). "The Crusade Against Pari-mutuel Gambling in Kentucky: a Study of Southern Progressivism in the 1920s". Filson Club History Quarterly. 50 (1): 47–57.
- Hixson, Walter L. (Summer 1982). "The 1938 Kentucky Senate Election: Alben W. Barkley, 'Happy' Chandler, and the New Deal". Register of the Kentucky Historical Society. 80 (3): 309–329. JSTOR 23379498.
- Richard Holl, Committed to Victory: The Kentucky Home Front during World War II (2022).
- Kleber, John E. (October 1986). "As Luck Would Have It: An Overview of Lawrence W. Wetherby as Governor, 1950–1955". Register of the Kentucky Historical Society. 84 (4): 397–421. JSTOR 23380946.
- Vance, J.D. (2016). Hillbilly Elegy. New York City: HarperCollins. p. 28. ISBN 978-0-06-230054-6.
- "Downtown Lexington's Next Loss: Woolworth's". Preservation Magazine. August 2004. Archived from the original on June 28, 2014. Retrieved March 7, 2009.
- Harrison & Klotter 1997, p. 390.
- "4 Governors Act". The Washington Afro American. July 2, 1963. p. 12.
- Wheatley, Kevin (March 5, 2014). "Legislators Recall Martin Luther King Jr. March". State Journal. Frankfort, Kentucky.
- Harrell, Kenneth E., ed. (1984). Derby Statement, Frankfort / May 4, 1967. p. 437. ISBN 978-0-8131-0603-8.
- Johnson, John; Mier, Maria (January 20, 2013). "Ky. voices: Kentucky led South in civil rights, what about now?". Lexington Herald-Leader.
- Williams, Horace Randall; Beard, Ben (2009). October 13, 1961 – Kentucky Civil Rights Commission Fights the Good Fight. p. 311. ISBN 978-1-58838-241-2.
- "Welcome! Kentucky Law Requires" (PDF). Kentucky Commission on Human Rights. Retrieved October 28, 2015.
- Brinson, Betsey; Williams, Kenneth H.; Breathitt, Ned (January 2001). "An Interview with Governor Ned Breathitt on Civil Rights: "The Most Significant Thing That I Have Ever Had a Part in."". Register of the Kentucky Historical Society. 99 (1): 5–51. JSTOR 23384876.
- Kleber, John, ed. (2001). "Civil Disturbances of 1968". The Encyclopedia of Louisville. Lexington: University Press of Kentucky. pp. 189–190. ISBN 978-0-8131-2100-0.
- Fraas, Elizabeth (July 2001). ""All Issues Are Women's Issues": An Interview With Governor Martha Layne Collins on Women in Politics". Register of the Kentucky Historical Society. 99 (3): 213–248. JSTOR 23384604.
- Blanchard, Paul (Winter 2004). "Governor Paul E. Patton". Register of the Kentucky Historical Society. 102 (1): 69–87. JSTOR 23386347.
- Miller, Penny M. (July 2001). "The Slow and Unsure Progress of Women in Kentucky Politics". Register of the Kentucky Historical Society. 99 (3, Special Issue on Kentucky Women in Government and Politics): 249–284. JSTOR 23384605.
- Butrymowicz, Sarah (October 15, 2013). "What Kentucky Can Teach The Rest of the US About Common Core". The Atlantic.
- Porter, Caroline (May 8, 2015). "In an Early Adopter, Common Core Faces Little Pushback". The Wall Street Journal.
- Lawrence, Jill (December 6, 2013). "How Steve Beshear Became Kentucky's Democrat Whisperer". The Daily Beast.
- Wing, Nick (April 19, 2013). "Kentucky Hemp Bill Becomes Law". Huffington Post.
- "Kentucky CBD: Back to the Future with Industrial Hemp". SFGate.com. San Francisco Chronicle. May 12, 2015.
Surveys and reference
- Abramson, Rudy; Haskell, Jean, eds. (2006). Encyclopedia of Appalachia. Nashville: University of Tennessee Press. ISBN 978-1-57233-456-4.
- Bodley, Temple and Samuel M. Wilson. History of Kentucky 4 vols. (1928)
- Channing, Steven. Kentucky: A Bicentennial History (1977); popular overview
- Clark, Thomas Dionysius. A History of Kentucky (many editions, 1937–1992); long the standard textbook
- Collins, Lewis. History of Kentucky (1880); old but highly detailed online edition
- Connelley, William Elsey, and Ellis Merton Coulter. History of Kentucky. Ed. Charles Kerr. (5 vol. 1922), vol 1 to 1814 online.
- Ford, Thomas R. ed. The Southern Appalachian Region: A Survey. (1967); includes highly detailed statistics
- Kleber, John E. Thomas D. Clark, Lowell H. Harrison and James C. Klotter, eds, The Kentucky Encyclopedia (1992) online
- Klotter, James C. Our Kentucky: A Study of the Bluegrass State (2000); high school text
- Klotter, James C. Kentucky: Portrait in Paradox, 1900–1950 (2006), a major scholarly survey online
- Klotter, James C. and Freda C. Klotter. A Concise History of Kentucky (2008)
- Klotter, James C. and Craig Thompson Friend. A New History of Kentucky (2nd ed. University Press of Kentucky, 2019) ISBN 0813176514, a standard scholarly history.
- Lucas, Marion Brunson and Wright, George C. A History of Blacks in Kentucky 2 vols. (1992)
- McVey, Frank L. The Gates Open Slowly: A History of Education in Kentucky (University Press of Kentucky 2014).
- Morse, Jedidiah (1797). "Kentucky". The American Gazetteer. Boston, Massachusetts: At the presses of S. Hall, and Thomas & Andrews. OL 23272543M.
- Ramage, James A., and Andrea S. Watkins. Kentucky Rising: Democracy, Slavery, and Culture from the Early Republic to the Civil War (2011), a standard scholarly history 1800 to 1865
- Share, Allen J. Cities in the Commonwealth: Two Centuries of Urban Life in Kentucky (1982)
- Smith, John David. "Whither Kentucky Civil War and Reconstruction Scholarship?." Register of the Kentucky Historical Society 112.2 (2014): 223–247. online
- Sonne, Niels H. Liberal Kentucky, 1780-1828 (1939) online, focus on Transylvania U.
- Tapp, Hambleton, and James C. Klotter. Kentucky: Decades of Discord, 1865–1900 (2008), a major scholarly survey
- Wallis, Frederick A. and Hambleton Tapp. A Sesqui-Centennial History of Kentucky 4 vols. (1945)
- Ward, William S., A Literary History of Kentucky (1988) (ISBN 0-87049-578-X)
- WPA, Kentucky: A Guide to the Bluegrass State (1939); classic guide from the Federal Writers Project; covers main themes and every town online
- Yater, George H. (1987). Two Hundred Years at the Fall of the Ohio: A History of Louisville and Jefferson County (2nd ed.). Filson Club, Incorporated. ISBN 978-0-9601072-3-0.
Specialized scholarly studies
- Aron, Stephen A. How the West Was Lost: The Transformation of Kentucky from Daniel Boone to Henry Clay (1996)
- Aron, Stephen A. "The Significance of the Kentucky Frontier," Register of the Kentucky Historical Society 91 (Summer 1993), 298–323.
- Bakeless, John. Daniel Boone, Master of the Wilderness (1989) online
- Blakey, George T. Hard Times and New Deal in Kentucky, 1929–1939 (1986)
- Clark, Thomas D. (January 1938). "Salt, A Factor in the Settlement of Kentucky". Filson Club History Quarterly. 12 (1). Archived from the original on May 2, 2012. Retrieved November 29, 2011.
- Coulter, E. Merton. The Civil War and Readjustment in Kentucky (1926)
- Davis, Alice. "Heroes: Kentucky's Artists from Statehood to the New Millennium" (2004)
- Eller, Ronald D. Miners, Millhands, and Mountaineers: Industrialization of the Appalachian South, 1880–1930 1982
- Ellis, William E. The Kentucky River (2000)
- Eslinger, Ellen. "Farming on the Kentucky Frontier," Register of the Kentucky Historical Society, 107 (Winter 2009), 3–32.
- Faragher, John Mack. Daniel Boone (1993)
- Fenton, John H. Politics in the Border States: A Study of the Patterns of Political Organization, and Political Change, Common to the Border States: Maryland, West Virginia, Kentucky, and Missouri (1957)
- Flannery, Michael A. "The significance of the frontier thesis in Kentucky culture: a study in historical practice and perception." Register of the Kentucky Historical Society 92.3 (1994): 239-266. online
- Heidler, David S., and Jeanne T. Heidler. Henry Clay: The Essential American (2010); scholarly biography
- Hoskins, Patricia. "'The Old First is With the South:' The Civil War, Reconstruction, and Memory in the Jackson Purchase Region of Kentucky." (Ph dissertation Auburn U. 2009). online
- Ireland, Robert M. The County in Kentucky History (1976)
- Kephart, Horace (1922). Our Southern Highlanders: A Narrative of Adventure in the Southern Appalachians and a Study of the Life Among the Mountaineers (New and revised ed.). Macmillan. ISBN 978-0-87049-203-7.
- Klotter, James C. and Daniel Rowland, eds. Bluegrass Renaissance: The History and Culture of Central Kentucky, 1792–1852 (Lexington: University Press of Kentucky, 2012),
- Klotter, James C.; Harrison, Lowell; Ramage, James; Roland, Charles; Taylor, Richard; Bush, Bryan S; Fugate, Tom; Hibbs, Dixie; Matthews, Lisa; Moody, Robert C.; Myers, Marshall; Sanders, Stuart; McBride, Stephen (2005). Rose, Jerlene (ed.). Kentucky's Civil War 1861–1865. Clay City, Kentucky: Back Home In Kentucky, Inc. ISBN 978-0-9769231-1-4.
- Klotter, James C., ed. The Athens of the West: Kentucky and American Culture, 1792–1852 (University Press of Kentucky, 2012)
- Klotter, James C. "Moving Kentucky History into the Twenty-first Century: Where Should We Go From Here?." Register of the Kentucky Historical Society 97.1 (1999): 83-112. online
- Marshall, Anne E. Creating a Confederate Kentucky: The Lost Cause and Civil War Memory in a Border State (University of North Carolina Press; 2010)
- Moore, Arthur K. The frontier mind: a cultural analysis of the Kentucky frontier man (1957), emphasizes anti-intellectualism. online
- Pearce, John Ed. Divide and Dissent: Kentucky Politics, 1930–1963 (1987)
- Pudup, Mary Beth, Dwight B. Billings, and Altina L. Waller, eds. Appalachia in the Making: The Mountain South in the Nineteenth Century. (1995)
- Ramage, James, and Andrea S. Watkins. Kentucky Rising: Democracy, Slavery, and Culture from the Early Republic to the Civil War (University Press of Kentucky, 2011).
- Reid, Darren R. (ed.) Daniel Boone and Others on the Kentucky Frontier: Autobiographies and Narratives, 1769–1795 (2009) ISBN 978-0-7864-4377-2
- Remini, Robert V. Henry Clay: Statesman for the Union (1991); scholarly biography
- Sonne, Niels Henry. Liberal Kentucky, 1780–1828 (1939)
- Townsend, William H. Lincoln and the Bluegrass: Slavery and Civil War in Kentucky (1955);
- Waldrep, Christopher Night Riders: Defending Community in the Black Patch, 1890–1915 (1993); tobacco wars
- Cantrell, Doug; Holl, Richard E.; Maltby, Lorie; et al. (2009). Kentucky Through The Centuries: A Collection Of Documents And Essays. Kendall Hunt Publishing Company. ISBN 978-0-7575-4387-6.
- Chandler, Albert B. (1989). Heroes, Plain Folks, and Skunks: The Life and Times of Happy Chandler. Bonus Books. |
A generator function is written in much the same way as a normal function, but it uses a yield statement instead of a return statement: if a function contains at least one yield (it may also contain other yield or return statements), it becomes a generator function. As the name suggests, a generator is a function that produces a series of values rather than a single result. Python's for statement operates on what are called iterators - objects that can be invoked over and over to produce a series of values - and generator functions are a simple way to create iterators. Building an iterator class by hand involves a fair amount of overhead; generators let programmers make an iterator in a fast, easy, and clean way, because all of that bookkeeping is handled automatically. Generator expressions play the same role for generators that lambda plays for functions: just as lambda creates an anonymous function, a generator expression creates an anonymous generator.

A normal function that returns a sequence builds the entire sequence in memory before returning the result, which is overkill when the number of items is very large. A generator instead produces its values one at a time, so only part of the data needs to be held at any given point; this makes generators a space-efficient way to process large files and an excellent way to represent an infinite stream of data. Each time a generator is resumed, all of its state, such as the values of local variables, is recovered, and the generator continues to execute until the next yield. The technique works not only with strings but with other iterables such as lists and tuples, and the same approach applies to many producer/consumer functions. Note that calling a generator function creates a new generator object each time; to step through it by hand, save the generator object in a variable and call next() on it. If the iterator is exhausted and the optional default argument of next() is omitted, a StopIteration exception is raised, although normally you should not need to check for the existence of a next value yourself, because generators can be used with for loops directly. Many standard library functions that returned lists in Python 2 were changed to return iterators in Python 3, because generators require fewer resources.
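The powers-of-2 example mentioned above is not reproduced in this text, so here is a minimal sketch of the comparison it describes; the names PowersOfTwo and powers_of_two are invented for illustration.

class PowersOfTwo:
    # A hand-written iterator class: the iterator protocol must be coded explicitly.
    def __init__(self, max_exponent):
        self.max_exponent = max_exponent
        self.exponent = 0

    def __iter__(self):
        return self

    def __next__(self):
        if self.exponent > self.max_exponent:
            raise StopIteration
        value = 2 ** self.exponent
        self.exponent += 1
        return value

def powers_of_two(max_exponent):
    # The equivalent generator: the same sequence with none of the bookkeeping.
    for exponent in range(max_exponent + 1):
        yield 2 ** exponent

print(list(PowersOfTwo(4)))    # [1, 2, 4, 8, 16]
print(list(powers_of_two(4)))  # [1, 2, 4, 8, 16]

Run these in the Python shell to see the output: both produce the same sequence, but the generator version leaves the state handling to Python.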
An iterator is an object that can be iterated upon, meaning that you can traverse through all of its values. Two things are involved when we discuss generators: the generator function and the generator object it returns. The generator function can generate as many values as it wants, possibly infinitely many, yielding each one in its turn. When called, it does not run its body; it returns a generator object, which is a kind of iterator and can be advanced with the built-in next() function. Advancing an exhausted generator raises StopIteration - for example, a generator expression over an empty list is exhausted immediately:

>>> gen = (i for i in [])
>>> next(gen)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration
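The string-reversal generator and the default argument of next() mentioned earlier can be illustrated with a short sketch; the name rev_str is invented for this example.

def rev_str(text):
    # Yield the characters of a string in reverse order, one at a time.
    for index in range(len(text) - 1, -1, -1):
        yield text[index]

gen = rev_str("hello")
print(next(gen))          # o
print(next(gen))          # l

# A for loop picks up where manual iteration left off and stops quietly
# when StopIteration is raised internally.
for character in gen:
    print(character)      # l, e, h

# Supplying a default to next() avoids the exception on an exhausted iterator.
print(next(gen, "done"))  # done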
Generators are simple functions which return an iterable set of items, one at a time, in a special way. They are created just like normal functions, using the def keyword; the essential difference is that the body uses the yield keyword rather than return, as in the Fibonacci generator sketched below.
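The Fibonacci function referred to above is not shown in this text; the following is a plausible sketch of it, and it also serves as the infinite stream mentioned earlier. The name fib is illustrative.

from itertools import islice

def fib():
    # An endless stream of Fibonacci numbers: the state lives in a and b
    # between yields, so no list is ever built up in memory.
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

print(list(islice(fib(), 10)))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]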
Using the Hubble Space Telescope and the Spitzer Space Telescope, a team of astronomers has detailed the atmospheres of ten Jupiter-sized exoplanets and discovered why some of these worlds seem to have less water than expected, a long-standing mystery. The results are published in "Nature".
To date, astronomers have discovered nearly 2000 planets orbiting other stars. Some of these planets are known as hot Jupiters — hot, gaseous planets with characteristics similar to those of Jupiter. They orbit very close to their stars, making their surface hot, and the planets tricky to study in detail without being overwhelmed by bright starlight.
Due to this difficulty, Hubble has only explored a handful of hot Jupiters in the past, across a limited wavelength range. These initial studies have found several planets to hold less water than expected.
This video shows an artist’s impression of the ten hot Jupiter exoplanets studied by David Sing and his colleagues. From top left to lower left these planets are WASP-12b, WASP-6b, WASP-31b, WASP-39b, HD 189733b, HAT-P-12b, WASP-17b, WASP-19b, HAT-P-1b and HD 209458b.
Now, an international team of astronomers has tackled the problem by making the largest ever study of hot Jupiters, exploring and comparing ten such planets in a bid to understand their atmospheres. Only three of these planetary atmospheres had previously been studied in detail; this new sample forms the largest ever spectroscopic catalog of exoplanet atmospheres.
The team used multiple observations from both the NASA/ESA Hubble Space Telescope and NASA’s Spitzer Space Telescope. Using the power of both telescopes allowed the team to study the planets, which are of various masses, sizes, and temperatures, across an unprecedented range of wavelengths.
“I’m really excited to finally ‘see’ this wide group of planets together, as this is the first time we’ve had sufficient wavelength coverage to be able to compare multiple features from one planet to another,” says David Sing of the University of Exeter, UK, lead author of the new paper. “We found the planetary atmospheres to be much more diverse than we expected.”
All of the planets have a favorable orbit that brings them between their parent star and Earth. As the exoplanet passes in front of its host star, as seen from Earth, some of this starlight travels through the planet’s outer atmosphere. “The atmosphere leaves its unique fingerprint on the starlight, which we can study when the light reaches us,” explains co-author Hannah Wakeford, now at NASA Goddard Space Flight Center, USA.
These fingerprints allowed the team to extract the signatures from various elements and molecules — including water — and to distinguish between cloudy and cloud-free exoplanets, a property that could explain the missing water mystery.
The team’s models revealed that, while apparently cloud-free exoplanets showed strong signs of water, the atmospheres of those hot Jupiters with faint water signals also contained clouds and haze — both of which are known to hide water from view. Mystery solved!
“The alternative to this is that planets form in an environment deprived of water — but this would require us to completely rethink our current theories of how planets are born,” explained co-author Jonathan Fortney of the University of California, Santa Cruz, USA. “Our results have ruled out the dry scenario, and strongly suggest that it’s simply clouds hiding the water from prying eyes.”
The study of exoplanetary atmospheres is currently in its infancy, with only a handful of observations taken so far. Hubble’s successor, the James Webb Space Telescope, will open a new infrared window on the study of exoplanets and their atmospheres.
Publication: David K. Sing, et al., "A continuum from clear to cloudy hot-Jupiter exoplanets without primordial water depletion," Nature, 2015; doi:10.1038/nature16068
In geology, a slab window is a gap that forms in a subducted oceanic plate when a mid-ocean ridge meets with a subduction zone and plate divergence at the ridge and convergence at the subduction zone continue, causing the ridge to be subducted. Formation of a slab window produces an area where the crust of the over-riding plate is lacking a rigid lithospheric mantle component and thus is exposed to hot asthenospheric mantle. This produces anomalous thermal, chemical and physical effects in the mantle that can dramatically change the over-riding plate by interrupting the established tectonic and magmatic regimes. In general, the data used to identify possible slab windows comes from seismic tomography and heat flow studies.
As a slab window develops, the mantle in that region becomes increasingly hot and dry. The decrease in hydration causes arc volcanism to diminish or stop entirely, as magma production in subduction zones generally results from hydration of the mantle wedge due to de-watering of the subducting slab. Slab-window magmatism may then replace this melting, and can be produced by multiple processes, including increased temperatures, mantle circulation producing interaction of supra- and sub-slab mantle, partial melting of subducted slab edges and extension in the upper plate. Mantle flowing upward through the slab window in order to compensate for the decreased lithospheric volume can also produce decompression melting. Slab window melts are distinguished from calc-alkaline subduction-related magmas by their different chemical compositions. The increase in temperature caused by the presence of a slab window can also produce anomalous high temperature metamorphism in the region between the trench and the volcanic arc.
The geometry of a slab window depends primarily on the angle the ridge intersects the subduction zone and the dip angle of the down-going plate. Other influential factors include the rates of divergence and subduction as well as heterogeneities found within specific systems.
There are two end-member scenarios in terms of the geometry of a slab window: the first is when the subducted ridge is perpendicular to the trench, producing a V-shaped window, and the second is when the ridge is parallel to the trench, causing a rectangular window to form.
The North American Cordillera is a well-studied plate margin that provides a good example of the effects a slab window can have on an over-riding continental plate. Beginning in the Cenozoic, the fragmentation of the Farallon Plate as it subducted caused slab windows to open that then generated anomalous features in the North American Plate. These effects include distinct fore-arc volcanism and extension in the plate which may be a contributing factor to the formation of the Basin and Range Province. The northward younging of Pemberton Belt volcanism in southwestern British Columbia, Canada was probably related to a northward moving slab window edge under North America 29 to 6.8 million years ago.
In addition to the fossil slab windows of the Cenozoic seen in North America, there are other regions along the Pacific Rim (e.g. in California, Mexico, Costa Rica, Patagonia and the Antarctic Peninsula) that exhibit active ridge subduction producing slab windows.
The Bahama Banks are the submerged carbonate platforms that make up much of the Bahama Archipelago. The term is usually applied in referring to either the Great Bahama Bank around Andros Island, or the Little Bahama Bank of Grand Bahama Island and Great Abaco, which are the largest of the platforms, and the Cay Sal Bank north of Cuba. The islands of these banks are politically part of the Bahamas. Other banks are the three banks of the Turks and Caicos Islands, namely the Caicos Bank of the Caicos Islands, the bank of the Turks Islands, and wholly submerged Mouchoir Bank. Further southeast are the equally wholly submerged Silver Bank and Navidad Bank north of the Dominican Republic.
Challis Arc
The Challis Arc was an Eocene volcanic field that stretched from southwestern British Columbia through Washington to Idaho, United States. The volcanic field extended between 42 and 49 degrees north latitude and was about 1500 kilometers in length. It exhibited volcanic activity for about 10 million years. Remnants of the Challis Arc are found as granitic plutons in the North Cascades, the Okanagan Highlands and in southcentral Idaho.
It was first theorized in 1979 that the volcanic field formed as a result of subduction of the eastern block of the Kula Plate between 57 and 37 million years ago. More recent publications argue that the Challis Arc was formed by more complex tectonic interactions. One proposed model theorizes that the Farallon plate underwent subduction and imbrication beneath the North American plate to form the Challis Arc. Another model suggests that intracontinental rifting and igneous activity between the Pacific and North American plates formed the Challis arc. By definition, a volcanic arc is formed via subduction, so the Challis Arc's naming as a volcanic arc is a matter of debate among geologists. The current limited availability of historical geochemical data prevents any of the proposed theories from being confirmed or falsified, so there is still no consensus on the Challis Arc's formation.
Coquihalla Mountain
Coquihalla Mountain is an extinct stratovolcano in Similkameen Country, southwestern British Columbia, Canada, located 10 km (6.2 mi) south of Falls Lake and 22 km (14 mi) west of Tulameen between the Coquihalla and Tulameen rivers. With a topographic prominence of 816 m (2,677 ft), it towers above adjacent mountain ridges. It is the highest mountain in the Bedded Range of the northern Canadian Cascades with an elevation of 2,157 m (7,077 ft) and lies near the physiographic boundaries with the Coast Mountains on the west and the Interior Plateau on the east.
Farallon Plate
The Farallon Plate was an ancient oceanic plate that began subducting under the west coast of the North American Plate—then located in modern Utah—as Pangaea broke apart during the Jurassic period. It is named for the Farallon Islands, which are located just west of San Francisco, California.
Over time, the central part of the Farallon Plate was completely subducted under the southwestern part of the North American Plate. The remains of the Farallon Plate are the Juan de Fuca, Explorer and Gorda Plates, subducting under the northern part of the North American Plate; the Cocos Plate subducting under Central America; and the Nazca Plate subducting under the South American Plate. The Farallon Plate is also responsible for transporting old island arcs and various fragments of continental crustal material rifted off from other distant plates and accreting them to the North American Plate.
These fragments from elsewhere are called terranes (sometimes, "exotic" terranes). Much of western North America is composed of these accreted terranes.
Franklin Glacier Complex
The Franklin Glacier Complex is a deeply eroded volcano in the Waddington Range of southwestern British Columbia, Canada. Located about 65 km (40 mi) northeast of Kingcome, this sketchily known complex resides at Franklin Glacier near Mount Waddington. It is over 2,000 m (6,600 ft) in elevation and because of its considerable overall altitude, a large proportion of the complex is covered by glacial ice.
Magmatic activity of the Franklin Glacier Complex spanned roughly four million years from the Late Miocene to the Early Pleistocene, with the most recently identified volcanic eruption having taken place around 2.2 million years ago. The existence of thermal springs near the complex implies that magmatic heat is still present. It has therefore been of interest to geothermal exploration.
Fueguino
Fueguino is a volcanic field in Chile. The southernmost volcano in the Andes, it lies on Tierra del Fuego's Cook Island and also extends over nearby Londonderry Island. The field is formed by lava domes, pyroclastic cones, and a crater lake.
Volcanic activity at Fueguino is part of the Austral Volcanic Zone, which is formed by the subduction of the Antarctic Plate beneath the South America Plate. The subducting plate has not reached a depth sufficient for proper volcanic arc volcanism, however.
The field bears no trace of glacial erosion on its volcanoes, and reports exist of volcanic activity in 1712, 1820 and 1926.
Mendocino Triple Junction
The Mendocino Triple Junction (MTJ) is the point where the Gorda plate, the North American plate, and the Pacific plate meet, in the Pacific Ocean near Cape Mendocino in northern California. This triple junction is the location of a change in the broad plate motions which dominate the west coast of North America, linking convergence of the northern Cascadia subduction zone and translation of the southern San Andreas Fault system. The Gorda plate is subducting, towards N50°E, under the North American plate at 2.5–3 cm/yr, and is simultaneously converging obliquely against the Pacific plate at a rate of 5 cm/yr in the direction N115°E. The accommodation of this plate configuration results in a transform boundary along the Mendocino Fracture Zone, and a divergent boundary at the Gorda Ridge. Due to the relative plate motions, the triple junction has been migrating northwards for the past 25–30 million years, and assuming rigid plates, the geometry requires that a void, called a slab window, develop southeast of the MTJ. At this point, removal of the subducting Gorda lithosphere from beneath North America causes asthenospheric upwelling. This instigates different tectonic processes, which include surficial uplift, crustal deformation, intense seismic activity, high heat flow, and even the extrusion of volcanic rocks. This activity is centred on the current triple junction position, but evidence for its migration is found in the geology all along the California coast, starting as far south as Los Angeles.
Mount Barr
Mount Barr is a mountain in the Skagit Range of the Cascade Mountains of southern British Columbia, Canada, located on the northeast side of Wahleach Lake and just southwest of Hope. It is a ridge highpoint with an elevation of 1,907 m (6,257 ft).
Mount Barr is one of several magmatic features just north of the Chilliwack batholith. It is part of a large circular igneous intrusion that was placed along the Fraser Fault 16 to 21 million years ago. The intrusion is part of the Pemberton Volcanic Belt, an eroded volcanic belt that formed as a result of subduction of the Farallon Plate starting 29 million years ago.
Northern Cordilleran Volcanic Province
The Northern Cordilleran Volcanic Province (NCVP), formerly known as the Stikine Volcanic Belt, is a geologic province defined by the occurrence of Miocene to Holocene volcanoes in the Pacific Northwest of North America. This belt of volcanoes extends roughly north-northwest from northwestern British Columbia and the Alaska Panhandle through Yukon to the Southeast Fairbanks Census Area of far eastern Alaska, in a corridor hundreds of kilometres wide. It is the most recently defined volcanic province in the Western Cordillera. It has formed due to extensional cracking of the North American continent—similar to other on-land extensional volcanic zones, including the Basin and Range Province and the East African Rift. Although taking its name from the Western Cordillera, this term is a geologic grouping rather than a geographic one. The southmost part of the NCVP has more, and larger, volcanoes than does the rest of the NCVP; further north it is less clearly delineated, describing a large arch that sways westward through central Yukon.
At least four large volcanoes are grouped with the Northern Cordilleran Volcanic Province, including Hoodoo Mountain in the Boundary Ranges, the Mount Edziza volcanic complex on the Tahltan Highland, and Level Mountain and Heart Peaks on the Nahlin Plateau. These four volcanoes have volumes of more than 15 km3 (3.6 cu mi), the largest and oldest of which is Level Mountain with an area of 1,800 km2 (690 sq mi) and a volume of more than 860 km3 (210 cu mi). Apart from the large volcanoes, several smaller volcanoes exist throughout the Northern Cordilleran Volcanic Province, including cinder cones which are widespread throughout the volcanic zone. Most of these small cones have been sites of only one volcanic eruption; this is in contrast to the larger volcanoes throughout the volcanic zone, which have had more than one volcanic eruption throughout their history.
The Northern Cordilleran Volcanic Province is part of an area of intensive earthquake and volcanic activity around the Pacific Ocean called the Pacific Ring of Fire. However, the Northern Cordilleran Volcanic Province is commonly interpreted to be part of a gap in the Pacific Ring of Fire between the Cascade Volcanic Arc further south and the Aleutian Arc further north. But the Northern Cordilleran Volcanic Province is recognized to include over 100 independent volcanoes that have been active in the past 1.8 million years. At least three of them have erupted in the past 360 years, making it the most active volcanic area in Canada. Nevertheless, the dispersed population within the volcanic zone has witnessed few eruptions due to remoteness and the infrequent volcanic activity.
Pali-Aike volcanic field
Pali-Aike volcanic field is a volcanic field in Argentina which straddles the border with Chile. It is part of a province of back-arc volcanoes in Patagonia, which formed from processes involving the collision of the Chile Rise with the Peru–Chile Trench. It lies farther east than the Austral Volcanic Zone, the volcanic arc which forms the Andean Volcanic Belt at this latitude.
Pali-Aike formed over a Jurassic basin starting from the late Miocene as a consequence of regional tectonic events and local extension. It consists of an older plateau basalt formation and younger volcanic centres in the form of pyroclastic cones, scoria cones, maars and associated lava flows. These vents often form local alignments along lineaments or faults. The volcanic field is noteworthy for the presence of large amounts of xenoliths in its rocks and because the maar Laguna Potrok Aike is located here. The field was active starting from 3.78 million years ago. The latest eruptions occurred during the Holocene, as indicated by the burial of archeological artifacts; Laguna Azul maar formed about 3,400 years before present.
Río Murta (volcano)
Río Murta is a volcano in Chile.
The volcano consists of a complex of lava flows along the valleys at the Río Murta. These flows display columnar joints, lava tubes and pillow lavas, and have volumes of less than 1 cubic kilometre (0.24 cu mi). These landforms along with the presence of palagonite indicate that the eruptions happened beneath glaciers. Volcanic activity in the region is in part influenced by the Chile Triple Junction, the point where the Chile Rise is subducted into the Peru-Chile Trench. This point forms a gap in the Andean Volcanic Belt, with Southern Volcanic Zone volcanism north of the gap generated by the fast subduction of the older and colder Nazca Plate beneath the South America Plate and Austral Volcanic Zone volcanism south of the gap formed by the slow subduction of the younger and warmer Antarctic Plate. In between these two subduction processes, a slab window opened up and allowed the rise of alkali basalt magmas. Río Murta rocks are basalts with a low content of potassium. They contain phenocrysts of clinopyroxene, olivine and plagioclase. The chemical composition is unlike that of other regional basaltic volcanoes, and reflects the influence of oceanic asthenosphere. The basement in the region is formed by various Paleozoic to Mesozoic sediments and volcanic rocks. The plutons of the Northern Patagonian Batholith were intruded into this basement and may have an origin in the subduction of the Nazca Plate-Farallon Plate. The age of these flows is controversial. Potassium-argon dating has yielded ages of 900,000–850,000 years before present, but some flows are too young to date and their relatively well preserved appearance suggests a Holocene age. 40 kilometres (25 mi) northwest of Río Murta lies Cerro Hudson, an active arc volcano.
Seal Nunataks
The Seal Nunataks are a group of 16 islands called nunataks emerging from the Larsen Ice Shelf east of Graham Land, Antarctic Peninsula. The Seal Nunataks have been described as separate volcanic vents of ages ranging from Miocene to Pleistocene. There are unconfirmed reports of Holocene volcanic activity.
Siletzia
Siletzia is the massive formation of early to middle Eocene epoch marine basalts and interbedded sediments in the forearc of the Cascadia subduction zone; this forms the basement rock under western Oregon and Washington and the southern tip of Vancouver Island. It is now fragmented into the Siletz and Crescent terranes. Siletzia corresponds geographically to the Coast Range Volcanic Province (or Coast Range basalts), but is distinguished from slightly younger basalts that erupted after Siletzia accreted to the continent and differ in chemical composition. The Siletzia basalts are tholeiitic, a characteristic of mantle-derived magma erupted from a spreading ridge between plates of oceanic crust. The younger basalts are alkalic or calc-alkaline, characteristic of magmas derived from a subduction zone. This change of composition reflects a change from marine to continental volcanism that becomes evident around 48 to 42 Ma (millions of years ago), and is attributed to the accretion of Siletzia against the North American continent. Various theories have been proposed to account for the volume and diversity of Siletzian magmatism, as well as the approximately 75° of rotation, but the evidence is insufficient to determine Siletzia's origin; the question remains open. The accretion of Siletzia against the North American continent approximately 50 million years ago (contemporaneous with the initiation of the bend in the Hawaiian-Emperor seamount chain) was a major tectonic event associated with a reorganization of the earth's tectonic plates. This is believed to have caused a shift in the subduction zone, termination of the Laramide orogeny that was uplifting the Rocky Mountains, and major changes in tectonic and volcanic activity across much of western North America.
Slab (geology)
In geology, a slab is the portion of a tectonic plate that is being subducted. Slabs constitute an important part of the global plate tectonic system. They drive plate tectonics – both by pulling along the lithosphere to which they are attached in a process known as slab pull and by inciting currents in the mantle (slab suction). They cause volcanism due to flux melting of the mantle wedge, and they affect the flow and thermal evolution of the Earth's mantle. Their motion can cause dynamic uplift and subsidence of the Earth's surface, forming shallow seaways and potentially rearranging drainage patterns. Geologists have imaged slabs down to the seismic discontinuities between the upper and lower mantle and to the core–mantle boundary. About 100 slabs have been described at depth, and where and when they subducted. Slab subduction is the mechanism by which lithospheric material is mixed back into the Earth's mantle.
Slesse Mountain
Slesse Mountain, usually referred to as Mount Slesse, is a mountain just north of the US-Canada border, in the Cascade Mountains of British Columbia, near the town of Chilliwack. It is notable for its large, steep local relief. For example, its west face drops over 1,950 m (6,398 ft) to Slesse Creek in less than 3 km (2 mi). It is also famous for its huge Northeast Buttress. The name means "fang" in the Halkomelem language. Notable nearby mountains include Mount Rexford and Canadian Border Peak in British Columbia, and American Border Peak, Mount Shuksan, and Mount Baker, all in the US state of Washington.
Undersea mountain range
Undersea mountain ranges are mountain ranges that are mostly or entirely underwater, and specifically under the surface of an ocean. If originated from current tectonic forces, they are often referred to as a mid-ocean ridge. In contrast, if formed by past above-water volcanism, they are known as a seamount chain. The largest and best known undersea mountain range is a mid-ocean ridge, the Mid-Atlantic Ridge. It has been observed that, "similar to those on land, the undersea mountain ranges are the loci of frequent volcanic and earthquake activity".
Wave base
The wave base, in physical oceanography, is the maximum depth at which a water wave's passage causes significant water motion. For water depths deeper than the wave base, bottom sediments and the seafloor are no longer stirred by the wave motion above.
Wells Gray-Clearwater volcanic field
The Wells Gray-Clearwater volcanic field, also called the Clearwater Cone Group, is a potentially active monogenetic volcanic field in east-central British Columbia, Canada, located approximately 130 km (81 mi) north of Kamloops. It is situated in the Cariboo Mountains of the Columbia Mountains and on the Quesnel and Shuswap Highlands. As a monogenetic volcanic field, it is a place with numerous small basaltic volcanoes and extensive lava flows. Most of the Wells Gray-Clearwater volcanic field is encompassed within a large wilderness park called Wells Gray Provincial Park. This 5,405 km2 (2,087 sq mi) park was established in 1939 to protect Helmcken Falls and the unique features of the Clearwater River drainage basin, including this volcanic field. Five roads enter the park and provide views of some of the field's volcanic features. Short hikes lead to several other volcanic features, but some areas are accessible only by aircraft.
2. Inside the ARM
In the previous chapter, we started by considering instructions executed by a mythical processor with mnemonics like ON and OFF. Then we went on to describe some of the features of an actual processor - the ARM. This chapter looks in much more detail at the ARM, including the programmer's model and its instruction types. We'll start by listing some important attributes of the CPU:
The ARM's word length is 4 bytes. That is, it's a 32-bit micro and is most at home when dealing with units of data of that length. However, the ability to process individual bytes efficiently is important - as character information is byte oriented - so the ARM has provision for dealing with these smaller units too.
When addressing memory, ARM uses a 26-bit address value. This allows for 2^26 or 64M bytes of memory to be accessed. Although individual bytes may be transferred between the processor and memory, ARM is really word-based. All word-sized transfers must have the operands in memory residing on word-boundaries. This means the instruction addresses have to be multiples of four.
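As a quick check of these figures, here is a small Python snippet (an illustration added here, not part of the original text) that works out the size of a 26-bit address space and applies the word-boundary rule.

ADDRESS_BITS = 26

# A 26-bit address bus can select 2^26 distinct byte addresses.
addresses = 2 ** ADDRESS_BITS
print(addresses)                   # 67108864 bytes
print(addresses // (1024 * 1024))  # 64 megabytes

# Word-aligned addresses are multiples of four, so their lowest two bits are zero.
for address in (0x0000, 0x0004, 0x0006):
    print(hex(address), "word-aligned" if address % 4 == 0 else "not word-aligned")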
Input and output devices are memory mapped. There is no concept of a separate I/O address space. Peripheral chips are read and written as if they were areas of memory. This means that in practical ARM systems, the memory map is divided into three areas: RAM, ROM, and input/output devices (probably in decreasing order of size).
The register set, or programmer's model, of the ARM could not really be any simpler. Many popular processors have a host of dedicated (or special-purpose) registers of varying sizes which may only be used with certain instructions or in particular circumstances. ARM has sixteen 32-bit registers which may be used without restriction in any instruction. There is very little dedication - only one of the registers being permanently tied up by the processor.
As the whole philosophy of the ARM is based on 'fast and simple', we would expect the instruction set to reflect this, and indeed it does. A small, easily remembered set of instruction types is available. This does not imply a lack of power, though. Firstly, instructions execute very quickly, and secondly, most have useful extras which add to their utility without detracting from the ease of use.
2.1 Memory and addressing
The lowest address that ARM can use is that obtained by placing 0s on all of the 26 address lines - address &0000000. The highest possible address is obtained by placing 1s on the 26 address signals, giving address &3FFFFFF. All possible combinations between these two extremes are available, allowing a total of 64M bytes to be addressed. Of course, it is very unlikely that this much memory will actually be fitted in current machines, even with the ever-increasing capacities of RAM and ROM chips. One or four megabytes of RAM is a reasonable amount to expect using today's technology.
Why allow such a large address range then? There are several good reasons. Firstly, throughout the history of computers, designers have under-estimated how much memory programmers (or rather their programs) can actually use. A good maxim is 'programs will always grow to fill the space available. And then some.' In the brief history of microprocessors, the addressing range of CPUs has grown from 256 single bytes to 4 billion bytes (i.e. 4,000,000,000 bytes) for some 32-bit micros. As the price of memory continues to fall, we can expect 16M and even 32M byte RAM capacities to become available fairly cheaply.
Another reason for providing a large address space is to allow the possibility of using virtual memory. Virtual memory is a technique whereby the fast but relatively expensive semiconductor RAM is supplemented by slower but larger capacity magnetic storage, e.g. a Winchester disc. For example, we might allocate 16M bytes of a Winchester disc to act as memory for the computer. The available RAM is used to 'buffer' as much of this as possible, say 512K bytes, making it rapidly accessible. When the need arises to access data which is not currently in RAM, we load it in from the Winchester.
Virtual memory is an important topic, but a detailed discussion of it is outside the scope of this book. We do mention some basic virtual memory techniques when talking about the memory controller chip in Chapter Seven.
The diagram below illustrates how the ARM addresses memory words and bytes.
The addresses shown down the left hand side are word addresses, and increase in steps of four. Word addresses always have their two least significant bits set to zero and the other 24 bits determine which word is required. Whenever the ARM fetches an instruction from memory, a word address is used. Additionally, when a word-size operand is transferred from the ARM to memory, or vice versa, a word address is used.
When byte-sized operands are accessed, all 26 address lines are used, the least significant two bits specifying which byte within the word is required. There is a signal from the ARM chip which indicates whether the current transfer is a word or byte-sized one. This signal is used by the memory system to enable the appropriate memory chips. We will have more to say about addressing in the section on data transfer instructions.
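A short Python sketch of the byte and word addressing just described (again an added illustration, not from the original): clearing the bottom two bits of a byte address gives the word address, and those two bits select the byte within the word.

def split_address(byte_address):
    # The low two bits select the byte within the word; the rest is the word address.
    word_address = byte_address & ~0x3
    byte_within_word = byte_address & 0x3
    return word_address, byte_within_word

for address in (0x8000, 0x8001, 0x8003, 0x8004):
    word, byte = split_address(address)
    print(hex(address), "-> word", hex(word), "byte", byte)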
The first few words of ARM memory have special significance. When certain events occur, e.g. the ARM is reset or an illegal instruction is encountered, the processor automatically jumps to one of these first few locations. The instructions there perform the necessary actions to deal with the event. Other than this, all ARM memory was created equal and its use is determined solely by the designer of the system.
For the rest of this section, we give brief details of the use of another chip in the ARM family called the MEMC. This information is not vital to most programmers, and may be skipped on the first reading.
A topic which is related to virtual memory mentioned above, and which unlike that, is within the scope of this book, is the relationship between 'physical' and 'logical' memory in ARM systems. Many ARM-based machines use a device called the Memory Controller - MEMC - which is part of the same family of devices as the ARM CPU. (Other members are the Video Controller and I/O Controller, called VIDC and IOC respectively.)
When an ARM-based system uses MEMC, its memory map is divided into three main areas. The bottom half - 32M bytes - is called logical RAM, and is the memory that most programs 'see' when they are executing. The next 16M bytes is allocated to physical RAM. This area is only visible to system programs which use the CPU in a special mode called supervisor mode. Finally, the top 16M bytes is occupied by ROM and I/O devices.
The logical and physical RAM is actually the same thing, and the data is stored in the same RAM chips. However, whereas physical RAM occupies a contiguous area from address 32M to 32M+(memory size)-1, logical RAM may be scattered anywhere in the bottom 32M bytes. The physical RAM is divided into 128 'pages'. The size of a page depends on how much RAM the machine has. For example, in a 1M byte machine, a page is 8K bytes; in a 4M byte machine (the maximum that the current MEMC chip can handle) it is 32K bytes.
A table in MEMC is programmed to control where each physical page appears in the logical memory map. For example, in a particular system it might be convenient to have the screen memory at the very top of the 32M byte logical memory area. Say the page size is 8K bytes and 32K is required for the screen. The MEMC would be programmed so that four pages of physical RAM appear at the top 32K bytes of the logical address space. These four pages would be accessible to supervisor mode programs at both this location and in the appropriate place in the physical memory map, and to non-supervisor programs at just the logical memory map position.
When a program accesses the logical memory, the MEMC looks up where corresponding physical RAM is and passes that address on to the RAM chips. You could imagine the address bus passing through the MEMC on its way to the memory, and being translated on the way. This translation is totally transparent to the programmer. If a program tries to access a logical memory address for which there is no corresponding physical RAM (remember only at most 4M bytes of the possible 32M can be occupied), a signal called 'data abort' is activated on the CPU. This enables attempts to access 'illegal' locations to be dealt with.
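To make the translation idea concrete, here is a toy Python model of the page mapping described above. The numbers follow the 1M byte example in the text; the table contents and the use of a Python exception to stand in for the data abort signal are invented for illustration and greatly simplify the real MEMC.

RAM_SIZE = 1024 * 1024               # a 1M byte machine
PAGE_COUNT = 128
PAGE_SIZE = RAM_SIZE // PAGE_COUNT   # 8K byte pages
PHYSICAL_BASE = 32 * 1024 * 1024     # physical RAM starts at the 32M byte mark
LOGICAL_TOP = 32 * 1024 * 1024       # logical RAM is the bottom 32M bytes

# Map the last four physical pages to the top 32K bytes of the logical space,
# as in the screen memory example. Keys are logical page numbers.
page_table = {
    (LOGICAL_TOP // PAGE_SIZE) - 4 + i: PAGE_COUNT - 4 + i
    for i in range(4)
}

def translate(logical_address):
    # Look up which physical page, if any, backs this logical page.
    logical_page, offset = divmod(logical_address, PAGE_SIZE)
    if logical_page not in page_table:
        raise MemoryError("data abort")  # no physical RAM mapped at this address
    physical_page = page_table[logical_page]
    return PHYSICAL_BASE + physical_page * PAGE_SIZE + offset

print(hex(translate(LOGICAL_TOP - 4)))  # last word of the screen area
# translate(0x8000) would raise the stand-in data abort, as nothing is mapped there.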
As the 4M byte limit only applies to the current MEMC chip, there is no reason why a later device shouldn't be able to access a much larger area of physical memory.
Because of the translation performed by MEMC, the logical addresses used to access RAM may be anywhere in the memory map. Looked at in another way, this means that a 1M byte machine will not necessarily appear to have all of this RAM at the bottom of the memory map; it might be scattered into different areas. For example, one 'chunk' of memory might be used for the screen and mapped onto a high address, whereas another region, used for application programs say, might start at a low address such as &8000.
Usually, the presence of MEMC in a system is of no consequence to a program, but it helps to explain why the memory map of an ARM-based computer appears as it does.
2.2 Programmer's model
This section describes the way in which the ARM presents itself to the programmer. The term 'model' is employed because although it describes what the programmer sees when programming the ARM, the internal representation may be very different. So long as programs behave as expected from the description given, these internal details are unimportant.
Occasionally however, a particular feature of the processor's operation may be better understood if you know what the ARM is getting up to internally. These situations are explained as they arise in the descriptions presented below.
As mentioned above, ARM has a particularly simple register organisation, which benefits both human programmers and compilers, which also need to generate ARM programs. Humans are well served because our feeble brains don't have to cope with such questions as 'can I use the X register as an operand with the ADD instruction?' These crop up quite frequently when programming in assembler on certain micros, making coding a tiresome task.
There are sixteen user registers. They are all 32 bits wide. Only two are dedicated; the others are general purpose and are used to store operands, results and pointers to memory. Of the two dedicated registers, only one is permanently used for a special purpose (it is the PC). Sixteen is quite a large number of registers to provide, some micros managing with only one general purpose register. These are called accumulator-based processors, and the 6502 is an example of such a chip.
All of the ARM's registers are general purpose. This means that wherever an instruction needs a register to be specified as an operand, any one of them may be used. This gives the programmer great freedom in deciding which registers to use for which purpose.
The motivation for providing a generous register set stems from the way in which the ARM performs most of its operations. All data manipulation instructions use registers. That is, if you want to add two 32-bit numbers, both of the numbers must be in registers, and the result is stored in a third register. It is not possible to add a number in memory to a register, or vice versa. In fact, the only time the ARM accesses memory is to fetch instructions and when executing one of the few register-to-memory transfer instructions.
So, given that most processing is restricted to using the fast internal registers, it is only fair that a reasonable number of them is provided. Studies by computer scientists have shown that eight general-purpose registers is sufficient for most types of program, so 16 should be plenty.
When designing the ARM, Acorn may well have been tempted to include even more registers, say 32, using the 'too much is never enough' maxim mentioned above. However, it is important to remember that if an instruction is to allow any register as an operand, the register number has to be encoded in the instruction. 16 registers need four bits of encoding; 32 registers would need five. Thus by increasing the number of registers, they would have decreased the number of bits available to encode other information in the instructions.
Such trade-offs are common in processor design, and the utility of the design depends on whether the decisions have been made wisely. On the whole, Acorn seems to have hit the right balance with the ARM.
There is an illustration of the programmer's model overleaf.
In the diagram, 'undedicated' means that the hardware imposes no particular use for the register. 'Dedicated' means that the ARM uses the register for a particular function - R15 is the PC. 'Semi-dedicated' implies that occasionally the hardware might use the register for some function (for storing addresses), but at other times it is undedicated. 'General purpose' indicates that if an instruction requires a register as an operand, any register may be specified.
As R0-R13 are undedicated, general purpose registers, nothing more needs to be said about them at this stage.
|R0||Undedicated, general purpose|
|R1||Undedicated, general purpose|
|R2||Undedicated, general purpose|
|R3||Undedicated, general purpose|
|R4||Undedicated, general purpose|
|R5||Undedicated, general purpose|
|R6||Undedicated, general purpose|
|R7||Undedicated, general purpose|
|R8||Undedicated, general purpose|
|R9||Undedicated, general purpose|
|R10||Undedicated, general purpose|
|R11||Undedicated, general purpose|
|R12||Undedicated, general purpose|
|R13||Undedicated, general purpose|
|R14||Semi-dedicated, general purpose (link)|
|R15||Dedicated, general purpose (PC)|
Being slightly different from the rest, R14 and R15 are more interesting, especially R15. This is the only register which you cannot use in the same way as the rest to hold operands and results. The reason is that the ARM uses it to store the program counter and status register. These two components of R15 are explained below.
Register 14 is usually free to hold any value the user wishes. However, one instruction, 'branch with link', uses R14 to keep a copy of the PC. The next chapter describes branch with link, along with the rest of the instruction set, and this use of R14 is explained in more detail there.
The program counter
R15 is split into two parts. This is illustrated below:
Bits 2 to 25 are the program counter (PC). That is, they hold the word address of the next instruction to be fetched. There are only 24 bits (as opposed to the full 26) because instructions are defined to reside on word boundaries. Thus the two lowest bits of an instruction's address are always zero, and there is no need to store them. When R15 is used to place the address of the next instruction on the address bus, bits 0 and 1 of the bus are automatically set to zero.
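The split can be mimicked in a couple of lines of Python (an added illustration, not from the original text; the helper name instruction_address is invented): masking off the status bits of a 32-bit R15 value leaves the 26-bit address that would be placed on the bus.

def instruction_address(r15):
    # Bits 2 to 25 hold the word address of the next instruction; bits 0 and 1
    # of the bus are forced to zero, so simply mask out bits 0, 1 and 26 to 31.
    return r15 & 0x03FFFFFC

r15 = 0xF0000008 | 0x3                # some status bits set, PC field pointing at word 2
print(hex(instruction_address(r15)))  # 0x8 - the instruction at byte address 8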
When the ARM is reset, the program counter is set to zero, and instructions are fetched starting from that location. Normally, the program counter is incremented after every instruction is fetched, so that a program is executed in sequence. However, some instructions alter the value of the PC, causing non-consecutive instructions to be fetched. This is how IF-THEN-ELSE and REPEAT-UNTIL type constructs are programmed in machine code.
Some signals connected to the ARM chip also affect the PC when they are activated. Reset is one such signal, and as mentioned above it causes the PC to jump to location zero. Others are IRQ and FIQ, which are mentioned below, and memory abort.
The remaining bits of R15, bits 0, 1 and 26-31, form an eight-bit status register. This contains information about the state of the processor. There are two types of status information: result status and system status. The former refers to the outcome of previous operations, for example, whether a carry was generated by an addition operation. The latter refers to the four operating modes in which the ARM may be set, and whether certain events are allowed to interrupt its processing.
Here is the layout of the status register portion of R15:
|Result||31||N||Negative result flag|
||30||Z||Zero result flag|
||29||C||Carry result flag|
||28||V||Overflowed result flag|
|System||27||IRQ||Interrupt disable flag|
||26||FIQ||Fast interrupt disable flag|
|Status||1||S1||Processor mode 1|
||0||S0||Processor mode 0|
The result status flags are affected by the register-to-register data operations. The exact way in which these instructions change the flags is described along with the instructions. No other instructions affect the flags, unless they are loaded explicitly (along with the rest of R15) from memory.
As each flag is stored in one bit, it has two possible states. If a flag bit has the value 1, it is said to be true, or set. If it has the value 0, the flag is false or cleared. For example, if bits 31 to 28 of R15 were 1100, the N and Z flags would be set, and V and C would be cleared.
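The flag layout can be read off in Python as follows (an added sketch using the bit positions from the table above; the names are invented for illustration).

FLAG_BITS = {"N": 31, "Z": 30, "C": 29, "V": 28, "IRQ": 27, "FIQ": 26}

def decode_status(r15):
    # A flag is set (true) when its bit is 1, and cleared (false) when it is 0.
    flags = {name: bool(r15 >> bit & 1) for name, bit in FLAG_BITS.items()}
    flags["mode"] = r15 & 0x3   # the two processor mode bits S1, S0
    return flags

# Bits 31 to 28 = 1100, the example from the text: N and Z set, C and V cleared.
print(decode_status(0xC0000000))
# {'N': True, 'Z': True, 'C': False, 'V': False, 'IRQ': False, 'FIQ': False, 'mode': 0}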
All instructions may execute conditionally on the result flags. That is to say, a given instruction may be executed only if a given combination of flags exists, otherwise the instruction is ignored. Additionally, an instruction may be unconditional, so that it executes regardless of the state of the flags.
The processor mode flags hold a two-bit number. The state of these two bits determines the 'mode' in which the ARM executes, as follows:
|0||0||USR or user|
|0||1||FIQ or fast interrupt|
|1||0||IRQ or interrupt|
|1||1||SVC or supervisor|
The greater part of this book is concerned only with user mode. The other modes are 'system' modes, which are generally required only by programs that will already have been written for the machine you are using. Briefly, supervisor mode is entered when the ARM is reset or when certain types of error occur. IRQ and FIQ modes are entered under the interrupt conditions described below.
In non-user modes, the ARM looks and behaves in a very similar way to user mode (which we have been describing). The main difference is that certain registers (e.g. R13 and R14 in supervisor mode) are replaced by 'private copies' available only in that mode. These are called R13_SVC and R14_SVC. In user mode, the supervisor mode's versions of R13 and R14 are not visible, and vice versa. In addition, S0 and S1 may not be altered in user mode, but may be in other modes. In IRQ mode, the extra registers are R13_IRQ and R14_IRQ; in FIQ mode there are seven of them - R8_FIQ to R14_FIQ.
Non-user modes are used by 'privileged' programs which may have access to hardware which the user is not allowed to touch. This is possible because a signal from the ARM reflects the state of S0 and S1 so external hardware may determine if the processor is in a user mode or not.
Finally, the status bits FIQ and IRQ are used to enable or disable the two interrupts provided by the processor. An interrupt is a signal to the chip which, when activated, causes the ARM to suspend its current action (having finished the current instruction) and set the program counter to a pre-determined value. Devices such as disc drives use interrupts to ask for attention when they require servicing.
The ARM provides two interrupts. The IRQ (which stands for interrupt request) signal will cause the program to be suspended only if the IRQ bit in the status register is cleared. If that bit is set, the interrupt will be ignored by the processor until it is clear. The FIQ (fast interrupt) works similarly, except that the FIQ bit enables/disables it. If a FIQ interrupt is activated, the IRQ bit is set automatically, disabling any IRQ signal. The reverse is not true however, and a FIQ interrupt may be processed while an IRQ is active.
As mentioned above, the supervisor, FIQ and IRQ modes are rarely of interest to programmers other than those writing 'systems' software, and the system status bits of R15 may generally be ignored. Chapter Seven covers the differences in programming the ARM in the non-user modes.
2.3 The instruction set
To complement the regular architecture of the programmer's model, the ARM has a well-organised, uniform instruction set. In this section we give an overview of the instruction types, and defer detailed descriptions until the next chapter.
There are certain attributes that all instructions have in common. All instructions are 32 bits long (i.e. they occupy one word) and must lie on word boundaries. We have already seen that the address held in the program counter is a word address, and the two lowest bits of the address are set to zero when an instruction is fetched from memory.
The main reason for imposing the word-boundary restriction is one of efficiency. If an instruction were allowed to straddle two words, two accesses to memory would be required to load a single instruction. As it is, the ARM only ever has to access memory once per instruction fetch. A secondary reason is that by making the two lowest bits of the address implicit, the program address range of the ARM is increased from the 24 bits available in R15 to 26 bits - effectively quadrupling the addressing range.
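As a rough illustration of that arithmetic (the helper name is ours), shifting the 24-bit word address left by two bits reconstructs a 26-bit byte address:

```python
# Sketch: the 24-bit PC field holds a word address; shifting it left by two
# reconstructs the 26-bit byte address, with bits 0 and 1 always zero.
def byte_address(pc_word_address):
    return pc_word_address << 2

print(hex(byte_address(0xFFFFFF)))  # 0x3fffffc -- top of the 64M byte space
print(2**26)                        # 67108864 bytes = 64M
```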
A 32-bit instruction enables 2³², or about 4 billion, possible instructions. Obviously the ARM would not be much of a reduced instruction set computer if it used all of these for wildly differing instructions. However, it does use a surprisingly large amount of this theoretical instruction space.
The instruction word may be split into different 'fields'. A field is a set of (perhaps just one) contiguous bits. For example, bits 28 to 31 of R15 could be called the result status field. Each field of the instruction word controls a particular aspect of the interpretation of the instruction. It is not necessary to know where these fields occur within the word, or what they mean, as the assembler does that for you using the textual representation of the instruction.
One field which is worth mentioning now is the condition part. Every ARM instruction has a condition code encoded into four bits of the word. Four bits enable up to 16 conditions to be specified, and all of these are used. Most instructions will use the 'unconditional' condition, i.e. they will execute regardless of the state of the flags. Other conditions are 'if zero', 'if carry set', 'if less than' and so on.
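As a hedged sketch of how a condition might be tested against the flags (the function and the selection of conditions shown here are ours; the actual encodings are covered in the next chapter):

```python
# Sketch: a few ARM-style condition tests expressed as checks on the flags.
# 'flags' is a dict like the one produced by decode_flags above.
def condition_passes(cond, flags):
    n, z, c, v = flags["N"], flags["Z"], flags["C"], flags["V"]
    tests = {
        "EQ": z == 1,   # equal / zero result
        "NE": z == 0,   # not equal
        "CS": c == 1,   # carry set
        "MI": n == 1,   # negative result
        "LT": n != v,   # signed less than
        "AL": True,     # always (the 'unconditional' condition)
    }
    return tests[cond]

print(condition_passes("EQ", {"N": 1, "Z": 1, "C": 0, "V": 0}))  # True
```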
There are five types of instruction. Each class is described in detail in its own section of the next chapter. In summary, they are:
Data operations
This group does most of the work. There are sixteen instructions, and they have very similar formats. Examples of instructions from this group are ADD and CMP, which add and compare two numbers respectively. As mentioned above, the operands of these instructions are always in registers (or an immediate number stored in the instruction itself), never in memory.
Load and save
This is a smaller group of two instructions: load a register and save a register. Variations include whether bytes or words are transferred, and how the memory location to be used is obtained.
Multiple load and save
Whereas the instructions in the previous group only transfer a single register, this group allows between one and 16 registers to be moved between the processor and memory. Only word transfers are performed by this group.
Branch
Although the PC may be altered using the data operations, the branch instruction provides a convenient way of reaching any part of the 64M byte address space in a single instruction. It causes a displacement to be added to the current value of the PC. The displacement is stored in the instruction itself.
SWI
This one-instruction group is very important. The abbreviation stands for 'SoftWare Interrupt'. It provides the way for user programs to access the facilities provided by the operating system. All ARM-based computers provide a certain amount of pre-written software to perform such tasks as printing characters on the screen, performing disc I/O and so on. By issuing SWI instructions, the user's program may utilise this operating system software, obviating the need to write the routines for each application.
The first ARM chips do not provide any built-in support for dealing with floating point, or real, numbers. Instead, they have a facility for adding co-processors. A co-processor is a separate chip which executes special-purpose instructions which the ARM CPU alone cannot handle. The first such processor will be one to implement floating point instructions. These instructions have already been defined, and are currently implemented by software. The machine codes which are allocated to them are illegal instructions on the ARM-I so system software can be used to 'trap' them and perform the required action, albeit a lot slower than the co-processor would.
Because the floating point instructions are not part of the basic ARM instruction set, they are not discussed in the main part of this book, but are described in Appendix B.
Fragile X syndrome is a condition that causes a spectrum of developmental and behavioral problems which tend to be more severe in males. The condition is also more common among men than women. It is the most common form of inherited intellectual disability.
Fragile X syndrome typically causes moderate intellectual disability in males, although the severity of intellectual impairment varies from person to person. A small number of males do not have intellectual disability (defined as an IQ below 70). About a third of women with fragile X syndrome have no cognitive impairment, while the remainder have some degree of cognitive, behavioral, or social difficulties. Some females with fragile X syndrome have mild intellectual disability.
As infants, children with fragile X syndrome may display poor muscle tone, gastric reflux, and frequent ear infections. Their motor and mental milestones, as well as their speech, tend to be delayed.
Children with fragile X syndrome often have behavioral problems such as anxiety, hyperactivity, hand-flapping, biting, and temper tantrums. About one-third of males with fragile X syndrome have autism or autism-like behavior.
In females, who often have milder symptoms, behavioral problems may appear as depression, shyness, and avoidance of social situations.
Some people with the condition have attention deficit disorder, with an inability to sustain focused attention on a specific task.
As they become adolescents and young adults, people with fragile X syndrome, particularly males, may lack impulse control, make poor eye contact, and/or be easily distracted. They often have unusual responses to being touched or to sights and sounds.
Males with fragile X syndrome often share characteristic physical features such as a long, narrow face with a prominent jaw and forehead, a large head, flexible joints, and large ears. These symptoms tend to be milder or absent in females with the condition. After puberty, males with fragile X syndrome typically have enlarged testicles. Roughly 15 percent of males and 5 percent of females with fragile X syndrome will experience seizures. While some experience heart murmurs from a condition called mitral valve prolapse, it is usually harmless and may not require treatment.
Men and women with a premutation (please see below for a description) do not have fragile X syndrome, but may experience certain physical symptoms. While they are intellectually normal, they are thought to be more vulnerable to anxiety and depression.
The key risks for carriers of a premutation are fragile X-associated tremor/ataxia syndrome (FXTAS) and premature ovarian failure (POF).
About 40% of men over the age of 50 with a fragile X premutation will develop FXTAS. (The percentage of women premutation carriers affected by FXTAS is unknown, but known to be lower.) FXTAS causes an inability to coordinate muscle movements (ataxia) that worsens over time, tremors, memory loss, dementia, a loss of feeling and weakness in the lower legs, and some mental and behavioral changes.
Often symptoms of FXTAS begin around age 60 with a tremor, followed several years later by ataxia. One study of 55 men with FXTAS found that from the time symptoms begin, additional life expectancy ranged from 5 to 25 years.
About 20 percent of women with a premutation experience premature ovarian failure (POF), in which their menstrual periods stop by age 40. Only 5 to 10 percent of women with POF will be able to have children. One study found that 21 percent of women with a premutation experienced POF, compared to 1 percent in the general population. In general, women with premutations larger than 80 repeats (see below for an explanation) were at lower risk for POF when compared to women with smaller premutations. Women with full mutations are not at increased risk for POF.
Fragile X syndrome is inherited in a complex way that is different from many other genetic diseases. If you have any questions about fragile X syndrome, a healthcare professional can help explain this condition and your risk of transmitting it to the next generation.
Fragile X syndrome is inherited in an X-linked dominant pattern. The gene associated with the disease is located on the X chromosome. The X and Y chromosomes determine gender. Women have two X chromosomes (XX) while men have one X chromosome and one Y chromosome (XY). Girls receive one X chromosome from their mother and one from their father. Boys receive one X chromosome from their mother and a Y chromosome from their father. Typically, unaffected female full mutation carriers of fragile X syndrome are at risk of transmitting the condition to their children.
Fragile X syndrome is among a group of diseases called "trinucleotide repeat disorders." At its most basic level, these diseases are caused by a sequence of DNA that is repeated over and over in the same gene. While everyone has these repeats, it is the number of times that it is repeated which determines whether or not a person has the disease or can pass it on to future generations.
Fragile X syndrome is caused by a mutation in the FMR1 gene, which is located on the X chromosome. This gene contains a segment of DNA called the "CGG repeat," in which a particular section of DNA is repeated a certain number of times in a row. By counting the number of CGG repeats that each parent has, we can determine the likelihood that his or her child will have fragile X syndrome.
The CGG repeat in the FMR1 gene falls into one of four categories, each of which is explained below:
| Category | FMR1 CGG repeat size |
|---|---|
| Normal | 5 to 44 repeats |
| Intermediate | 45 to 54 repeats |
| Premutation | 55 to 200 repeats |
| Full mutation | More than 200 repeats |
An FMR1 gene with 5 to 44 CGG repeats is considered normal. Someone with this size FMR1 CGG repeat is very unlikely to pass on fragile X syndrome to his or her children. You can think of these genes as "stable": they usually pass from parent to child with the same number of repeats. For example, if the parent's gene has 15 CGG repeats, his or her child is also very likely to have a gene with 15 CGG repeats.
Someone with 45 to 54 repeats has an "intermediate" allele. He or she is not at substantial risk for passing on fragile X syndrome to his or her child; however, the number of repeats transmitted to the next generation may increase slightly. For example, a parent with 45 CGG repeats could have a child with 50 CGG repeats. If the number of repeats continues to increase in subsequent generations, future generations (i.e. grandchildren or great-grandchildren) may be at risk for inheriting fragile X.
Those with 55 to 200 CGG repeats have a "premutation." They themselves do not have symptoms of fragile X syndrome, although they are at increased risk for FXTAS and POF (see above). However, depending which parent has the premutation, future children may be at risk.
You can think of premutations as "unstable." When passed from mother to child, premutations may expand into full mutations in the child. If the number of CGG repeats exceeds 200 in the child, that child will have fragile X syndrome.
Men who have a premutation will pass on that premutation unchanged to their daughters, who would then also have a premutation. While a daughter is not at risk for fragile X syndrome, her future children may be at risk. Also, daughters of men with a premutation will be at risk for POF. (Men pass Y chromosomes to their sons, so this disease is not transmitted from father to son.)
Someone with more than 200 repeats has a full mutation, and likely has symptoms of fragile X syndrome.
Because symptoms of the disease are often milder in women, some women with a full mutation can have children. Those children each have a 50% chance of having fragile X syndrome. Men who have a full mutation generally do not reproduce.
Full mutations cause the FMR1 gene to malfunction, shutting down its ability to produce a protein called fragile X mental retardation 1 protein. The function of this protein is not well understood, but scientists believe that it plays a role in the proper functioning of the nervous system.
If a mother has a full mutation, 50% of her children will have fragile X syndrome. Men who have full mutations typically do not reproduce.
Premutations are more complicated. When the parent has a premutation, the risk of a child developing fragile X syndrome depends on the answers to several questions, each of which is detailed below:
If a woman is a premutation carrier, then she is at risk of having children with fragile X syndrome. Premutations inherited from the mother are unstable and may expand to become full mutations in the child.
Premutations pass more or less identically from father to child; the CGG repeats do not expand in number. Therefore, men with premutations are not at risk of having children with fragile X syndrome.
If the father has a premutation on his X chromosome, all of his daughters will have that same premutation. These daughters are generally not at risk of having fragile X syndrome themselves, but their future children (the grandchildren of the original premutation carrier) will be at risk. Fathers pass a Y chromosome to their sons instead of an X, so fragile X premutations cannot be passed from father to son.
If the mother has a premutation on one of her X chromosomes, 50% of her children will inherit that abnormal gene and 50% will inherit the normal gene. Only children who inherit the abnormal gene would be at risk for fragile X syndrome.
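As a rough illustration of those 50/50 odds (a simulation sketch only, not a clinical tool; the function name is made up):

```python
import random

# Sketch: simulate which of a mother's two X chromosomes each child inherits.
# One X carries the premutation ("pre"), the other is normal ("norm").
def fraction_inheriting_premutation(n=100_000):
    mother_x = ["pre", "norm"]
    inherited = [random.choice(mother_x) for _ in range(n)]
    return inherited.count("pre") / n

print(fraction_inheriting_premutation())  # approximately 0.5
```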
If a mother has a gene with a premutation and that abnormal gene gets passed to her children, there are two possibilities: either the gene is passed on as a premutation (possibly with some change in the number of repeats), or it expands into a full mutation, in which case the child will have fragile X syndrome.
The greater the number of CGG repeats a woman has, the more unstable the gene is and the more likely it will expand to a full mutation in her children. The smallest allele yet observed to expand to a full mutation in a single generation is 56 repeats.
[Table: number of maternal premutation CGG repeats versus the percentage (of total women) that expanded to full mutations.]
Nolin SL, Brown WT, Glicksman A, Houck GE Jr, Gargano AD, Sullivan A, et al. (2003). Expansion of the fragile X CGG repeat in females with premutation or intermediate alleles. American Journal of Human Genetics, 72(2):454-64.
Nolin SL, Glicksman A, Ding X, Ersalesi N, Brown WT, Sherman SL, Dobkin C. (2011). Fragile X analysis of 1112 prenatal samples from 1991 to 2010. Prenatal Diagnosis, 31(10):925-31.
An estimated 1 in 4000 males and 1 in 8000 females is affected by fragile X.
Based on a review of the literature and Counsyl's internal data, approximately 1 in 225 women carries a premutation and 1 in 45 carries an intermediate allele.
There is no cure for fragile X syndrome, however children with the condition can be treated and supported in many ways, depending on their particular symptoms and the severity of those symptoms. They may benefit from educational support like early developmental intervention, special education classes in school, speech therapy, occupational therapy, and behavioral therapies. A physician may also prescribe medication for their behavioral issues such as aggression, anxiety, or hyperactivity.
A small number of these children experience seizures which can be controlled with medication. While some experience heart murmurs from a condition called mitral valve prolapse, it is usually harmless and may not require treatment.
While many of the children with fragile X syndrome have learning and behavioral problems, they generally do not have major medical problems and can live a normal life span.
A non-profit group run by parents which aims to find treatments and a cure for fragile X syndrome.
45 Pleasant St.
Newburyport, MA 01950
Phone: (978) 462-1866
Explanations of an extensive number of genetic diseases written for everyday people by the U.S. government's National Institutes of Health.
A non-profit that provides educational and emotional support to the fragile X syndrome community, promotes public and professional awareness of the disease, and helps advance research toward finding a cure.
Phone: (925) 938-9300
Secondary Phone: (800) 688-8765
A division of the National Institutes of Health, the NICHD conducts and supports research on topics related to the health of children, adults, families, and populations.
P.O. Box 3006
Rockville, MD 20847
Phone: (800) 370-2943
A geometric shape is formed by joining various line segments together, and the study of these geometric shapes comes under the Geometry section of Mathematics. When a geometric shape is formed, it has different measurements such as side length, angle measure, area and perimeter. These measurements are very helpful in analyzing the given geometric structure. There are 2-dimensional geometric shapes such as the square, rectangle and triangle, and 3-dimensional shapes such as the cube, sphere, cylinder, pyramid and prism.
Example 1: Calculate the area of a triangle with a base of length 6 cm and a height of 9 cm.
A triangle is a geometric shape which has 3 sides.
Area of a triangle = 1/2 × (base) × (height)
Given: base length of the triangle, b = 6 cm
Height of the triangle, h = 9 cm
This gives: area of the triangle, A = 1/2 × 6 cm × 9 cm = 27 cm².
Therefore, the area of the given triangle is 27 cm².
Example 2: Calculate the area and perimeter of a rectangle with dimensions 4 m and 3 m.
A rectangle is a geometric shape with 4 sides.
The given rectangle has dimensions 4 m and 3 m respectively.
Given: length, l = 4 m; width, w = 3 m.
Area of a rectangle = length × width
The area of the rectangle is 4 m × 3 m = 12 m².
Perimeter of a rectangle = 2 × (length + width)
The perimeter of the rectangle = 2 × (4 + 3) = 2 × 7 = 14 m.
Hence, area = 12 m² and perimeter = 14 m.
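The two worked examples can be checked with a short Python sketch (the helper names are ours):

```python
# Sketch: verify the triangle and rectangle examples above.
def triangle_area(base, height):
    return 0.5 * base * height

def rectangle_area(length, width):
    return length * width

def rectangle_perimeter(length, width):
    return 2 * (length + width)

print(triangle_area(6, 9))          # 27.0 cm^2  (Example 1)
print(rectangle_area(4, 3))         # 12 m^2     (Example 2)
print(rectangle_perimeter(4, 3))    # 14 m       (Example 2)
```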
KS2 Maths lesson plan and worksheets on problem solving. You'll find addition word problems, subtraction word problems, multiplication word problems and division word problems, all starting with simple, easy-to-solve examples.
Problem solving multiplication and division ks2 links KS2 Ma2 1a Make connections in mathematics and appreciate the. Number: Multiplication and Division with Reasoning MULTIPLICATION.
Multiply and divide fractions. Problem solving. Model your word problems, draw a division, and organize. Derive and recall × and ÷ facts. 3. Checking. Number Types. 5. Solving Problems. KS2 and KS3 classes.
Fun KS2 Maths revision quizzes to teach students in Year 3, Year 4 and Year 5. Solve problems involving multiplication and division, using materials. Use multiplication and division within 100 to solve word problems.
Includes division, multiplication, addition and subtraction. The variety of question types and multiplication problem solving questions in the Reasoning papers. CATS or for other problem solving situations that students. Can she use multiplication facts to help with this division problem?
We also learn how to solve multiplication and division word problems by. Feb 21, 2017. To appreciate how fluency, reasoning and problem solving skills are.
Multiplying by 10, 100 and 1000 Year 5 Multiplication and Division Reasoning and. Do it to both sides. 4x=8 divide left and right by 4.
The 11 Times Trick We all know the trick when multiplying by ten – add 0 to the end. Math Fractions, Multiplication, Teaching Math, Ks2 Maths, Numeracy. Great way for upper KS2 to investigate number. A fun competition designed to engage KS2/KS3 pupils in their times tables..
Two-step are multiplication first, second and all multiplication.. Aug 25, 2016. You will also sometimes find overlap with different operations such as “total” in addition and multiplication or “per” in multiplication and division. Use these 25 word based problem solving questions, complete with. Compare, describe and solve practical problems for: lengths and heights.
Multiplication Year 5. Multiply and divide numbers mentally drawing upon known facts. Multiplication and Division Word Problems I This printable math worksheet features.
Multiplication 35 So many sacks Division 36 Weighing the presents Measure. Solve multiplication and division problems in context including.. In Year 2 they may be asked to solve a word problem like this one:. Challenging multiplication and division (KS2 resources).
Here is part of a multiplication grid. Multiplying and dividing. The Chuckle Brothers use division with remainders to work out how many tables of. KS1 SATs and the KS2 SATs and you. Solve problems with Thinking Blocks, Jake and Astro, IQ and more.
KS2 Parent Workshop. Curriculum focus on: fluency. “Space Chase” is an interactive eleven-chapter-long problem solving maths adventure. Mar 16, 2015. Use place value, known and derived facts to multiply and divide mentally. Maths Tutor includes 8 KS2 curriculum-based Maths topics: Multiplication | Problem solving | Time | Division | Fractions | Addition | Subtraction | Percentages & More!
Basic Earth Science Projects For Kids
How to make a volcano? Hurl cosmic material into space, have it collect into a planet sized object (like earth for example), put it in orbit around a sun, give it a few million years for the surface to cool to a hard crust, and poof – you have the basic ingredients needed. If hot molten magma under great pressure then manages to escape through weak spots in that crust, we have a volcano.
It’s a truly fascinating subject, and this project attempts to frame the question of how to make a volcano within that larger context of basic earth science. The topic area is rich enough to support projects at all grade levels, but this experiment is listed as a 3rd grade science project since I believe it is the first age group that can perform the steps needed with very little supervision.
Under the right circumstances, it could also be used as a 4th grade science project, or possibly even as late as 5th grade. However, I actually performed this volcano project with two youngsters, one in pre-school and the other a kindergarten student. Granted, I was always there, and they won’t get the larger part of the earth science equation yet, but they (and I) had a great time from start to finish.
Since our goal is to help young students tie this exercise to the larger earth science topic, additional earth science projects will be added in solar power, earthquake, tsunami, hydro energy, wind power and other related areas as time allows.
But first – let’s answer the question on how to make a volcano using things we probably have around the house. It’s a fun science project that can be done several ways. Here is the first: papier-mâché (forever to be known here as paper-mache).
The “how to make a volcano” science project is designed to help young students learn more about earth science by looking specifically at volcanoes. We’ll also learn how common household items can be used to build useful models, with an element of creativity required to make the model realistic. Hopefully we’ll discover a few new science terms along the way as well. The experiment is done in two steps. First, we figure out how to make a volcano, then we look at fun ways to make it erupt.
Materials needed Make a Volcano with Paper-Mache
– 1 newspaper
– 1-2 cups flour, depending on the volcano size desired
– 1-2 cups water
– 1 medium size bowl
– 1 fork or spoon to stir with
– 1 pair of scissors
– 1 roll scotch or masking tape
– 1 small plastic bag
– 1 pencil or marker
– 1 plastic or glass bottle
– 1 medium size box
– 1 medium size paint brush, (a couple more if you have several helpers)
– Rocks, sticks, tips of pine trees or shrubs and anything else you would like to use to decorate the volcano with to make it more realistic.
Some notes on the above materials. First, just about any drink bottle will work, but keep in mind that bottle size will determine volcano height. That’s why the amount of flour and water is shown as variable. Second, having sides around the volcano helps keep the “lava” in part 2 of the project contained. However, if having sides is not desired, then substitute a flat piece of cardboard, or even some thin plywood for the box as a stable base for the model volcano. Finally, any paint will do, but a water based acrylic is recommended for easy clean up. They also dry quickly with little need to vent paint fumes. Green, blue, yellow, red, white and black or brown should provide plenty of variety. We cut the bottom out of a small Styrofoam glass to use for mixing colors and dispensing the paint.
Other than gathering the materials, no advance preparation to make a volcano is needed.
If done as a demonstration for 2nd, 1st grade or even kindergarten science projects, then a single set of materials is all that’s needed. Gather the students around, let them take turns helping as time allows and follow the directions shown below. It will take at least one class period for them to make a volcano with paper-mache, another to paint and a third to add final decorations and make it erupt. At least one full day will be needed between these steps to allow for drying time.
If done as an in-class 3rd grade science project, split the class into smaller groups as materials allow. It will still take the same amount of time to complete the project, but in this scenario, each group of students gets to make a volcano of their own. Then they can decorate it and decide what to use as simulated lava to make it erupt.
As a final comment, a “facts about volcanoes” page will be added soon as a ready reference for information on how things work in real volcanoes. While the paper-mache volcano project was designed to be a fun science project for kids, the volcano information sheet will help tie it all together to the earth science topic.
Step 1 – Get a medium size box and mark where you want to cut the sides.
Step 2 – Cut the box, but do not discard the sides. Place the bottle in the box and draw a circle around its base big enough for the bottle to slip through.
Step 3 – Cut the box sides into about 1 inch strips. Yup, those are my lab rats!
Step 4 – Cut the hole and make sure the box fits over the bottle.
Step 5 – Cover the bottle with a small plastic bag to keep building materials from sticking to the side of the bottle.
Make a volcano structure around the bottle with 1 inch cardboard strips that were left over from the cutoff sides of the box. Staples can be used to hold the strips together if desired, but be sure to put plenty of tape around the crater of the volcano, and make sure not to cover the top of the bottle up.
If you prefer a stronger structure, chicken wire does great … but you’ll need to supervise that, (and it really isn’t needed).
Step 6 – Mix about a cup of flour with enough water to make homemade paste. It should be about the consistency of Elmer’s glue.
Cut or tear several dozen 1 inch strips of newspaper, but leave at least a sheet or two to put under the box to make the cleanup part easy.
Holding one end of a newspaper strip, drag it through the paste and gently squeegee off any excess glue with fingers on your other hand. The goal is for the paper to be wet, but not dripping with glue.
Add each glue-soaked strip of newspaper to the volcano support structure, gently smoothing each down as you go. If the forming mountainsides get too much glue on them (you’ll know), just add some dry strips to soak it up.
Step 7 – Continue until there are several layers of newspaper strips over the entire mountain, and on the bottom of the box.
If you picked a larger bottle, you may need to mix more paste and cut more paper strips to get to this point, but when done, it is time to clean up for the day and let the model volcano dry.
Step 8 – Add some paint for effect. Green makes a great start for grass, trees, etc – and if the volcano is tall, only rocks can be seen near the top. We used brown for that. Sky is blue … etc. Paint the volcano to make it as realistic as you can.
You can go to the next step if desired, but it would be best to let the paint dry first if you can afford the time.
Step 9 – Now decorate the volcano to make it look even more realistic. Add rocks, sticks for fallen tree-trunks, bushes, maybe even houses from a monopoly set, etc. (Then discuss why it might not be a good idea to live near a volcano).
That takes care of how to build a volcano using paper-mache. It’s a bit messy, but easy enough to do. I don’t think we’ll need to explore these here, but several other construction methods you might want to ask the students to do might include …
And possibly a couple others they come up with on their own. Let the imagination run wild … as they say. Let them be creative!
In the meantime, part 2 of any volcano-making project just has to be about making it erupt. And that’s next!
Making The Eruption
There are many recipes for volcanic eruptions using baking soda and vinegar, and different reasons to use each of the ingredients. There are a couple other chemical recipes I’m not going to add for safety reasons, but diet coke and mentos is sure fun. It is rather dramatic, very messy, and therefore quite popular! Bring plenty of paper towels, but the kids love it. Did I say messy?
Anyway, you decide which simulated volcanic eruption you would like to try. They are all fun, and if you are doing it as an in-class project, perhaps assigning a different one to each group to see which is more realistic would be fun. Here are the links for part 2 for how to make a volcano erupt. Enjoy!
What just happened?
For the teacher
Hopefully, everybody had a great time. But also hidden in the project was some learning on how to make a volcano out of common household items. This may not sound like a major event, but modeling what we see in nature, so that we may attempt to better understand it under conditions we can control, is part of the scientific process at its most fundamental level. Making a small-scale model volcano, and figuring out how to simulate volcano eruptions, may be a fun way to do science … but science it is.
Do be sure to discuss some of the items on the volcano information sheet (coming soon), or better yet, provide a copy to the students to review during the project. If there are more than 2 or 3 to a group, this is a great way to focus on learning more about volcanoes during times when they cannot directly participate. When introducing the project, ask them if they can figure out where magma comes from, how it gets to the surface and what happens when it does. That should bring up a number of new earth science terms for discussion as well.
We’ll cover what causes the simulated volcanic eruptions in part 2’s project discussions.
For the students
At this grade level, we won’t assume T-shirts will go by the way of lab coats, and each of them striving to become the next Nobel prize winning scientists. Having fun while learning some basic skills in scientific discovery is plenty at this point. Reading the volcano information sheet (or doing a report on their own) should provide more than enough information on volcanoes, as well as a fun introduction to basic earth science.
Once the model volcano is built (and the paper-mache mess is cleaned up), summarize this part of the earth science project with a few facts about volcanoes. Answering the above questions about magma as a group is also a great lead in for part 2 – making the volcano erupt.
For the Teacher (with less mess)
You can find several volcano kits on Amazon that are pretty much out of the box ready to go. Enjoy!!
As an additional resource and fascinating reading on real eruptions, we recommend Eruptions that Shook the World. It is a spellbinding exploration of history’s greatest volcanic events and their impacts on the history of humankind.
Edaphology is the science of how soil impacts living things. It also looks at the ways we use soil and how that use alters a soil’s ability to support life.
There’s no denying that a lot is going on under our feet. Within the soil, one might find worms, spiders, tiny beetle eggs, or a colony of ground-dwelling bees or ants. You might also find billions of bacteria and trillions of fungi and other microorganisms. But it is the condition of the soil that makes life possible for all those living things.
Edaphology is the study of how soil is used in agriculture (agrology) and how soil impacts the local environment (environmental soil science). As gardeners, we are all amateur edaphologists, to one degree or another.
By studying soil biology, physics, and chemistry, edaphologists have learned a lot about how soil can be used and improved for growing food and other plants. Soil chemistry refers to the mineral nutrients found in the soil that are used by plants as food. Levels of these nutrients, and their availability, often dictate which plants can be grown and how they will perform, as well as what needs to be added or reduced. It also refers to the presence of toxins, such as lead. Only a lab-based soil test can give you that information.
Drainage and irrigation
Agricultural edaphology also looks at irrigation and drainage. Different soils hold and release different amounts of water at different rates. While sand holds very little water, drainage is excellent. Clay holds tightly to a significant amount of water, but drainage tends to be poor. For plants to really thrive, a middle ground is ideal.
Edaphology studies ways to reduce erosion and soil degradation. Bare earth is vulnerable to erosion. Ask anyone who lived through the Dust Bowl of the 1930s. Wind, water, animals, tools and equipment can all degrade or erode soil, one way or another. Soil degradation also refers to the depletion of water-soluble nutrients used by plants. Climate and vegetation play major roles in soil degradation. As temperatures rise (or fall), different sets of plants, microorganisms, and other life forms thrive or suffer, impacting your soil.
Edaphology has found ways to improve soil fertility and structure, along with its cation exchange capacity and its water holding capacity by adding soil amendments. Soil amendments include fertilizers and soil conditioners. Soil conditioners are used to improve or rebuild damaged soil. Bone meal, blood meal, coffee grounds, compost, manure, and vermiculite are just a few of the soil amendments used to improve soil quality. These materials help reduce soil compaction and improve nutrient levels and accessibility. Adding organic materials, such as aged compost, can significantly improve water retention and drainage.
Edaphologists have researched the various properties of soil in relation to plant production. Their research has taught us better soil husbandry methods. Soil husbandry is the art and science of caring for soil so that it can continue to be used to grow the plants we want. Soil husbandry includes protecting soil from erosion and degradation with mulch and cover crops, improving soil with soil amendments, and employing green manures and crop rotation to keep soil healthy. Whatever is growing on top of a soil has a profound impact on the health, structure, and fertility of that soil.
The bottom line for gardeners is that edaphology teaches us how healthy soil creates healthy plants and healthy plants help maintain healthy soil.
Now we know.
Those deliciously crisp snow peas in your stir fry can be grown at home.
The story behind pea evolution is fascinating. Even more intriguing is why more people don’t grow their own snow peas at home.
Snow pea plants
Snow peas are flat-podded peas that are eaten whole while unripe. Like sugar peas, snow peas are indehiscent, which means the ripe pods do not open on their own. Shelling peas, which are grown to be dried and used in cooking, have a much tougher pod that is dehiscent.
All pea plants are legumes, which means they play host to beneficial rhizobia bacteria in their roots. These bacteria help plants fix atmospheric nitrogen into a form plants can use.
How to grow snow peas
Like fat-podded sugar peas and shelling peas, snow peas are a cool season crop. In fact, that’s how snow peas got their name, being grown in winter. Seeds should be planted 1-2” deep and 5” apart in loose, nutrient-rich soil. Snow peas use tendrils to climb supports, such as stock panels and trellising.
Harvest pods as they form to make the vines keep producing. Once plants sense that they have completed their reproductive cycle, pod production stops.
Snow peas are so easy to grow! You can add them to your stir fry garden, salad garden, or just grow them on their own. Did you know that the immature leaves and stems are also edible?
Now you know!
There is a reason why Belgian endive [on-DEEV] is so expensive in the stores. But odds are pretty good that you can grow your own.
Blanching is a method of growing in which seedlings are covered with soil or other materials to block photosynthesis. Belgian endive is grown commercially in dark rooms. The lack of chlorophyll in the leaves makes them white. It also gives them a more delicate flavor and tender texture. Blanching is also used on asparagus and celery. To grow your own Belgian endive, you need to learn a few tricks.
How to grow Belgian endive
Belgian endive is one of the few crops that is grown twice. First you grow the root and then you grow the head. If you simply put a Belgian endive seed in the ground and water it, you will get what looks like several other green chicory plants. In the case of Belgian endive, a seed is planted and allowed to grow normally. Then the top portion is removed, the root is refrigerated [vernalized] and then replanted, and grown in the dark. This “forces” the plant to believe it has gone through a winter and the head it produces is very tightly wrapped, pale, and tender.
Three to four weeks later, you will have your very own Belgian endive crop. Simply snap the head off and there you have it!
Problems associated with growing Belgian endive
If you plant seeds too early, bolting may occur. While that’s a great way to get seeds for next year, you won’t have any harvestable heads. And if too much nitrogen is present, your plants will focus on leafy growth rather than root development. And you need healthy roots to get harvestable heads.
Each root only produces one head, so the old root can be fed to your chickens or added to the compost pile.
Sugarloaf chicory is a surprising leafy green perennial.
In most parts of the country, sugarloaf chicories are planted in late spring and early summer for autumn and early winter harvests. Here in California, sugarloaf chicory is an excellent winter crop, as long as the soil does not stay soggy. Sugarloaf chicories are very drought tolerant plants. They grow best in loose soil with good drainage, making them an excellent choice for raised beds. Plants mature in 80 days, on average.
Sugarloaf chicory can be harvested on a leaf-by-leaf, as needed basis, or the head can be allowed to mature and then cut the whole thing off just above the soil. The plant will, in most cases, regrow. Sugarloaf chicory can also be forced and blanched the same way Belgian endive is managed.
Sugarloaf chicory pests and diseases
Aphids, darkling beetles, flea beetles, leaf miners, loopers, and thrips are the most common pests, along with slugs and snails. Fungal diseases, such as anthracnose, bottom rot, downy mildews, fusarium wilt, septoria blight, and white mold can often be avoided by ensuring proper drainage and good spacing between plants. Damping-off disease and bacterial soft rot may also occur.
How deep do roots go? Rooting depth is dependent on a lot of different conditions. After reading this post, you may never look at your garden plants the same way again. I know I don't!
Thrive or survive?
We’ve all seen examples of tenacious, wind-battered trees growing impossibly out of rocks, but we want life for our garden plants to be better than that, don't we? We don’t want our plants to simply survive, we want them to thrive! This is where rooting depth becomes so important. Plants will make do with whatever they have available to them. By providing enough loose, healthy soil, our plants will be more productive and less likely to get sick.
How deep plant roots go depends on several variables: species, soil structure, soil health, soil moisture levels, and probably a thousand other things. Roots will go where they need to to find water and nutrients. Imagine carefully digging up specimens of your garden plants to see what their root systems really look like. Ends up, it’s already been done. Back in 1927, a couple of researchers, Weaver and Bruner, dug up a bunch of vegetable plants to examine rooting depth and structure. We can use what they learned to make sure we put our plants where they will grow best.
And that lawn, its root system is pitiful compared to native plants.
In my case, I have to assume that plants installed directly in the ground are going to have a tougher time moving around in the soil, for at least another year or two. Because of this, I try to remember to install shallow-rooted plants in those places. I generally use my 12” deep raised beds for plants with moderately deep roots and my 24” deep bed for the plants with deeper roots, though not always. Because my raised beds are open to the native soil at the bottom, rooting will get progressively deeper as the preexisting soil improves.
Strawberries are classified as shallow-rooted, but my deep bed has newly built netted panels that keep birds away, so that’s where my strawberries live. [As you read this post, you will learn that strawberries are not nearly as shallow-rooted as many, myself included, once believed.]
Listed below are categories of minimum rooting depths, under ideal conditions, for many common home gardening plants. Remember, these numbers are bare minimums, assuming nutrient-rich, loose soil. Your results may vary.
Shallow rooted plants
These plants are your best choices for containers, towers, and compacted soil. Basil, chives, cilantro, endive, escarole, ginger, lettuce, oregano, parsley, radish, scallions, spinach, summer savory, tarragon, and thyme can all be grown in less than 12” of soil. Of course, more is generally better. In one study, biologists found that “doubling plant pot size makes plants grow over 40 percent larger.” And look what happens to a lettuce plant, given its freedom to grow! [By the way, the squares in all of these illustrations are one foot by one foot.]
Slightly deeper growing plants, arugula, bok choy, celery, fennel, garlic, Jerusalem artichokes, mint, onions, rosemary, shallots, strawberries, and Swiss chard need at least 12” of soil but perform better in 18” or more soil.
Moderately deep rooted plants
Your cabbage, carrots, chiles, okra, peas, beans, beets, blueberries, broccoli, collards, Brussels sprouts, cauliflower, cucumbers, eggplant, horseradish, kale, leeks, kohlrabi, mustard greens, Napa cabbage, peppers, potatoes, rutabagas, sweet potatoes, and turnips need at least 18" of soil to grow properly.
Artichokes, cantaloupe, cardoon, cereal grains, citrus, figs, lima beans, melons, parsnips, peaches, pumpkins, sage, squash, tomatoes, and watermelon need 24" of loose, nutrient-rich soil.
Deep rooted plants
Your asparagus, cherries, fava beans, hops, olives, pears, prunes, rhubarb, and spring wheat will ultimately go down 3 feet or more. Alfalfa, almonds, apricot, and corn may have roots that are 4 feet long, while walnut trees and winter wheat may reach 5 feet. The roots under your grape vines may be 20 feet long.
Remember what I said earlier about strawberries? Well, if you are like me, strawberry pots have never seemed to work out. Believe me, I’ve tried! Experts all say strawberry plants have a minimum rooting depth of 12”. What they don’t tell you is that mature plant roots might go down 3 feet! And what about that containerized horseradish? A 10-year old horseradish plant may have roots as deep as 14 feet!
Rooting depth depends on species, soil, and several other variables. Knowing more about rooting depth can help you select plants suited to your soil, container size, or planting beds.
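One way to put these categories to work is a small lookup sketch. The figures below simply restate the categories in this post in inches, and the helper names are made up:

```python
# Sketch: approximate minimum soil depth (inches) for plants named in this post.
# Figures restate the post's categories; this is not an exhaustive list.
MIN_ROOT_DEPTH = {
    "lettuce": 12, "radish": 12, "spinach": 12,       # grow in under 12" of soil
    "garlic": 12, "onions": 12, "strawberries": 12,   # need at least 12", better with 18"
    "carrots": 18, "broccoli": 18, "potatoes": 18,    # need at least 18"
    "tomatoes": 24, "squash": 24, "melons": 24,       # need 24"
    "asparagus": 36, "rhubarb": 36,                   # 3 feet or more
}

def plants_for(depth_inches):
    """Return plants from the list above that should fit a bed of this depth."""
    return sorted(name for name, need in MIN_ROOT_DEPTH.items() if need <= depth_inches)

print(plants_for(18))  # plants that should cope with an 18-inch raised bed
```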
I have been asked several times what I would plant in a survival garden, so here it is.
Assuming you are talking about a total social breakdown situation and not a Robinson Crusoe deserted island situation, a survival garden (like any other garden) has to be designed around your soil, microclimate, and personal tastes. On a deserted island, you would probably have to focus on fish and coconuts. In a social breakdown survival situation, you would probably want to focus on high nutrient foods that are easy to store. And you would need access to water or none of this will matter.
High nutrient, easy to store foods include legumes, such as beans, peas, peanuts, and lentils. These plants have the advantage of being able to fix atmospheric nitrogen into a form plants can use and they can be dried for long term storage. Other good choices for a survival garden include members of the squash family, especially winter squashes, such as easy to store butternut squash and pumpkins.
Beets, carrots, fennel, onions, parsnips, potatoes, sweet potatoes and other root vegetables are also good choices for a survival garden.
Perennials, such as fruit and nut trees, grapes, and raspberry and blackberry canes, take longer to become productive, but they can be game changers in the long run. Other perennials to consider include asparagus and rhubarb. Cereals, such as millet, wheat, rye, and oats, might have a place in your survival garden, as well.
You can also grow many common annual (or grown as annual) edible plants, such as tomatoes, peppers, lettuces, chard, garlic, spinach, sunflowers, and kale. As you harvest these crops, always leave some behind, to go to seed naturally. This allows seeds to fall where they will. Very often, these seeds will grow where they are best suited without any effort on your part.
Herbs and teas
Your food will taste better with the addition of these perennial and/or self-seeding herbs and other flavorings: chives, cilantro/coriander, dill, ginger, horseradish, lemongrass, oregano, parsley, rosemary, sage, summer savory, tarragon, thyme, and turmeric. Some of these aromatic plants also help keep away common garden pests. Even tender basil can be grown and allowed to go to seed.
Teas will be the hot beverage of choice in a survival situation, so you will want to add chamomile, licorice, and mint to the mix. You could also use leaves from your raspberry and blackberry plants. Since medicine is beyond my skill set, you would have to talk with someone else about medicinal plants.
Making a survival plan
Living in earthquake country, my family has a collection of supplies, just in case. A survival garden takes that possibility to an entirely different level. If you believe that a survival situation is possible, it is a good idea to get started right away, to give everything time to get established. Before you can plant any seeds, however, you need to take your soil, local climate, and sun exposure into consideration.
Your soil should be tested by a reputable lab first. Many universities offer this inexpensive and valuable service. A soil test will tell you what nutrients are present in your soil, what is needed, and what is in excess. It will also tell you the pH of your soil. Armed with this valuable information, you can amend your soil in ways that will help, rather than hinder your plants. Note: too much of a good thing can be a bad thing.
Your location will dictate which plants you can grow. Identify your Hardiness Zone. You will also need to determine how much sun each area of your garden gets. Most fruits and vegetables need a full day of sun. Anything less than that and you will have to choose plants based on the available sunlight. Finally, if you decide to plant fruit and nut trees, you will need to determine the number of chilling hours your property gets each winter to ensure you select varieties that will actually produce food. Depending on where you live, almond, apple, citrus, fig, and walnut trees can produce a lot of food that is easy to store. Again, you have to select plants that will grow well where you are.
Other considerations for a survival situation include chickens and bees. Horses, sheep, goats, and pigs might also come into play. You should also put some thought into how you will protect these important assets in difficult times.
Let’s hope it never comes to this. Unless it does, let’s just call all this farming.
If you have a peaked roof, roof gardens may not be for you. But what about garden plants on your shed, chicken coop, or other flat-topped structure?
We are not going to get into the details of how to install a roof garden here because that would mean talking about moisture barriers, structural integrity, and a bunch of other topics beyond my skill set or interest level. You can check out this article for more of that information.
Instead of learning all the technical stuff needed to safely build a large-scale roof garden, we are simply going to explore roof gardens and learn a little about what they have to offer. Before we get started, we need to clarify the difference between green roofs and roof gardens. Roof gardens incorporate container plantings, seating areas, and outdoor living space, while green roofs are living blankets of plants installed primarily to improve insulation. Sod roofs are a type of green roof.
Roof gardens and rainwater
In cities around the world, rain falls on buildings and concrete, collecting car exhaust, trash, dust, grime, and who knows what else. This polluted water is then carried to our lakes, steams, oceans, and aquifers. Not good. Roof gardens reduce that run-off by absorbing the water and using it to provide for plants.
Roof gardens as habitat
Let’s face it - city dwellers rarely have access to enough natural surroundings. Roof gardens can offset that lack. Wildlife benefits in similar ways. Roof gardens provide habitat for a wide variety of native birds, animals, and beneficial insects. A series of roof gardens can also create a corridor for migratory birds and insects.
Basic roof garden design
If you are sold on the roof garden idea and want to move forward, here are things you need to consider:
Rooftop garden plant selection
Rooting depth is particularly important when gardening on a roof. Check this list of plants and their minimum rooting depths to help you select the right plants for your roof garden:
Plants that can withstand a lot of wind and sun are also good choices. Succulents and most herbs certainly qualify. Remember, installing a roof garden can reduce summer energy costs by 25% to 80%.
Plus you get fresh herbs and vegetables!
Put aside images of a serene, manicured Japanese tea garden and imagine, instead, growing your own tea.
There’s nothing like a hot cup of tea to put your mind at ease or boost your spirits and there’s no reason why you can’t grow some of your own.
Tea is second only to water as the world’s most popular beverage. Unfortunately, commercially produced teas can contain pesticides, fungicides, and even heavy metals, such as mercury and arsenic.
For me, that’s reason enough to start growing my own.
Traditionally, tea is made by pouring boiling water over the cured leaves of tea plants (Camellia sinensis). Tea plants can be grown outdoors in Zones 8 - 12, or indoors year round. Tea plants are evergreen shrubs native to East Asia. Tea plants can reach 6 feet in height and they have a deep taproot. Tea plants use a lot of water. Their native regions get 50” of water a year.
Tea leaves and terminal buds, known as flushes, are typically harvested while young. This is generally done by hand, anywhere from twice a year up to every week or two, depending on the local climate. High quality teas are picked by hand. Leaves are then allowed to wilt before they are “disrupted” or “macerated”. This process bruises or tears the leaves to allow enzymes to start the oxidation process. Leaves may be rolled between a person’s hands, or crushed by machinery. Finally, the leaves are heated to halt oxidation. There’s more to it than that, but you get the idea.
If you love tea, you know that you can also enjoy herbal teas. Herbal teas generally do not contain the caffeine found in regular tea. Many herbal tea plants are lovely to look at and they tend to be pretty resilient. Much of that resiliency is from the essential oils that gives these plants their flavor. Apparently, bugs and pathogens don’t enjoy them the way we do!
There are several traditional plants to choose from for your tea garden: bergamot, German chamomile, lavender, lemon balm, and mint. But you might also want to consider blackcurrants, borage, coriander, dill, elderberries, giant hyssop, ginger, hibiscus, jasmine, lemongrass, lemon thyme, licorice, oregano, raspberry and blackberry leaves, rose hips, or rosemary. Most edible flowers and even dandelions can be used to make tea. [And homegrown tea makes lovely gifts!]
Tea garden design
You can certainly intersperse your tea plants throughout your garden, grow them in containers on your patio or balcony, or you can create a lovely display dedicated to tea. You can build an elegant parterre, an artistic knot garden, a rustic cottage garden style, or something else entirely. Honestly, that’s one of the things I love most about gardening. You can try just about anything. It won’t always work, but you’re bound to learn something in the process. And you just might discover something amazing about your plants or yourself. Back to the tea.
Harvesting and storing tea
Fresh tea leaves or herbs should be cleaned of dust and bugs and then hung or laid out to dry, out of the sun. Placing leaves in an old pillowcase laid flat works well. Once they are completely dry, your tea leaves need to be kept away from light, moisture, air, and heat. Air-tight tins and storage jars kept in cabinets work well for storing tea and you can find a great selection at yard sales and thrift stores.
How to make a proper pot of tea
Being raised in an age of microwaves, take-out, and instant everything, few of us have actually learned how to make a proper pot of tea. Different varieties of tea need to be handled differently, but they all start with a kettle of boiling water. You want to use the water as soon as it starts to boil. Let it go too long and the water will taste flat.
While you wait for your kettle to boil, prepare the tea leaves. Generally speaking, one heaping teaspoon per cup is recommended. You can put the leaves into a tea sock, an infuser, or use a tea ball. The trick is to make sure the tea leaves can expand. You can also put the leaves directly into your teapot, but you will want to warm your teapot with some of the boiling water first. This will help keep your tea warm.
Some people prefer their tea strong and dark, while others, like my mother, simply wave a teabag at the hot water. Both are fine. The idea is to soak, or steep, the leaves in the hot water long enough to extract the flavor you prefer. Traditionally, steeping times vary by tea type:
Once the preferred taste has been attained, remove the leaves. If the leaves stay in the water for too long, your tea can taste bitter. Wrap your teapot in a cozy to keep it warm and enjoy!
Which plants would you like to include in your tea garden?
Stop getting rid of soil mites!
There are certainly plenty of bad mites: dryberry mites, Eriophyid mites, plum bud gall mites, and two-spotted spider mites are just a few. But not all mites are bad. Like the predatory mites that hunt pests such as European red mites, soil mites are your helpers.
Soil mites are extremely beneficial when it comes to releasing nutrients into the soil and controlling pest populations.
Conduct an online search for ‘soil mites’ and you’ll see dozens [millions] of sites telling you how to get rid of these pencil-point size arachnids. But getting rid of them is the last thing you should do. So, what’s so great about soil mites? Let’s find out!
What are soil mites?
Mites are arthropods. This means they have an external skeleton, a segmented body, and jointed legs. They are also arachnids, like ticks and spiders, but very tiny. If you were to take a sample of soil that weighed about the same as a bar of soap, 100 g give or take, you might have 500 mites from 100 different genera in that sample. These buggers are really tiny. With the naked eye, they might look like nothing more than little brown or white dots. But these little guys are important.
While there are over 20,000 known soil mite species, with an estimated 80,000 total, it is easier to categorize soil mites by what they eat. They can be herbivores or carnivores.
Plant-eating soil mites
To something as small as a ballpoint pen tip, fungi make a great meal. So do bacteria and lichen. These scavengers are abundant in most soils and they help plants gain access to nutrients. As these soil mites graze on the fungi and bacteria that grow on root surfaces, they poop out those meals in the form of plant food. They also shred decaying plant material as they feed on the bacteria and fungi clinging to those plant surfaces. Fungal feeding mites (Oribatei) look like little orbs. Also known as turtle mites, moss mites, and beetle mites, these soil mites are very tiny. Let’s call them moss mites. Moss mites range in size from 0.2 to 1.4 mm long. This means you could fit 10-90 of them across a dime, end-to-end, depending on the species.
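As a quick check on that dime comparison, here is a tiny Python sketch. The article itself contains no code, and the 17.91 mm figure is the standard diameter of a US dime, which I am supplying for the illustration.

```python
DIME_DIAMETER_MM = 17.91  # standard diameter of a US dime (assumed, not from the article)

# Largest and smallest moss mite body lengths mentioned above, in millimeters
for size_mm in (1.4, 0.2):
    count = DIME_DIAMETER_MM / size_mm
    print(f"{size_mm} mm mites: roughly {count:.0f} could line up across a dime")
```

That works out to roughly 13 of the biggest moss mites and about 90 of the smallest, which matches the 10-90 range quoted above.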
Insect-eating soil mites
Other soil mites are predators. Predatory soil mites feed on microscopic garden pests, such as nematodes, fungus gnat and thrips pupae, springtails, other mites, and the eggs and larvae of other insects. Most predatory soil mites are 0.5 mm long, brown, and found in the top 1/2” of the soil. [Unfortunately, I could not find any freely available photos of predatory soil mites.]
While not all mites are good, soil mites are your friends in the garden.
Let them be, and be glad they’re around!
Growing leafy greens and other edibles in toxic soil can make you very sick. In some cases, it can kill you. Toxic soil contains heavy metals and other poisons. Often found under landfills, junkyards, and factories, toxic soil is increasingly found in urban areas.
What makes soil toxic?
Healthy soil contains a balance of organic matter, air, water, and minerals that plants use as food. Some of those helpful minerals, such as boron or molybdenum, can reach toxic levels in the soil. Heavy metals can also make soil toxic. So can organic pollutants, such as creosote, excessive fertilizer, herbicides, industrial solvents, pesticides, explosives, and petroleum products. In some cases, radioactive materials, such as radon and certain forms of plutonium, can be in your soil. It turns out that fill dirt used to be brought in from questionable locations when building homes. [Hopefully, that doesn’t happen any more.] The problem is, without soil testing, you don’t know what you have.
Soil is the earth’s filtering system. Like our kidneys, it can only handle so much. Heavy metals and other toxins in the soil often leach into groundwater. They can also become part of the dust that you inhale and the foods you eat. Toxins can be absorbed through your skin and may even coat produce you grow or buy at the store. [Always rinse off your leafy greens and root vegetables, and wash your hands frequently, just in case.]
Is your soil toxic?
The first step to learning whether or not you have toxic soil is a soil test. Not those cheap plastic things. A real, lab-based soil test. They are inexpensive and extremely valuable. Especially if your soil is toxic.
If your soil test results indicate heavy metals, such as lead contamination, or other toxins, there are steps you can take to remove those dangerous materials. Traditionally, that meant simply digging up the toxic soil and burying it somewhere else. Today, many researchers are looking to plants for a solution.
Put plants to work!
As plants absorb water and nutrients, they also take up nonessential elements, such as cadmium, lead, and mercury, which can contaminate soil. Using plants to remove toxins from soil is called phytoremediation. Phytoremediation uses plants to contain, remove, or render toxic contaminants harmless.
Phytoremediation plants can be classified as accumulators or hyperaccumulators. Accumulators (A) are plants that pull toxins out of the soil and up into their aboveground tissues. Beets, for example, will absorb and accumulate radioactive particles found in the soil. Hyperaccumulators (H) collect toxins particularly well, absorbing up to 100 times the toxins of accumulator plants. Sorghum is a hyperaccumulator of arsenic.
How does phytoremediation work?
Accumulators and hyperaccumulators can reduce toxins in the soil through several different processes:
There are advantages to using plants to clean toxins from soil: it’s inexpensive; it doesn’t harm the environment; and it preserves valuable topsoil. The disadvantage is that this is a slow process. It can take years.
Several studies have demonstrated that specific varieties of certain plants are very good at dealing with toxic soil. While I understand that Latin plant names can feel tedious at times, different cultivars behave differently, so getting the proper plant makes a big difference. For example, not all willow species are useful at cleaning soil. Studies have shown that Salix matsudana and S. x reichardtii are far more effective than other willow species.
Many trees, including American sweet gum, larch, red maple, spruce, Ponderosa pine, and tulip trees are able to accumulate radioactive particles (radionuclides), such as radon and plutonium.
Which plants remove which toxins?
I created the chart below from information provided by several studies on toxic soil and phytoremediation.
You can email me if you would like a larger version of this chart.
Keep in mind that, just because a plant will absorb toxins, does not mean it is something suited to your garden or your region. Some nasty invasives have become firmly established that way.
Did you know that some companies extract these toxic and sometimes valuable minerals from plants? This is called phytomining.
Now you know.
Healthy soil is teeming with microscopic life. Most soil organisms are beneficial, but some of them carry disease. The more you know about soil borne diseases, the better you can protect your plants.
The biggest problem with soil borne diseases is knowing they are there. You can’t see the pathogens. Damage can be done before you know anything is wrong. Also, symptoms of soil borne diseases can look a lot like nutrient imbalances, chemical overspray, and poor environmental conditions. This is why it is so important to monitor your plants regularly.
Fungi and nematodes are behind most soil borne diseases, but there are other players and some of them are relatively new discoveries.
Nematodes are microscopic, unsegmented worms. Some of them are beneficial and some carry disease. Beneficial nematodes kill cutworms and corn earworm moths. Disease-carrying nematodes include needle nematodes, root-knot nematodes, and stubby root nematodes. The real problem with nematodes is that there are so many of them. It is estimated that, for every person on earth, there are 60 billion nematodes. [Thank goodness they aren’t all bad!]
There is another class of soil borne disease carriers called Phytomyxea [FI-toe-muh-kia]. Scientists used to think they were a type of slime mold, but genetic testing and electron microscopes have taught us that they are their own group. Phytomyxea are plant parasites that can cause clubroot in cruciferous vegetables and powdery scab in potatoes.
Bacterial diseases are less likely to be soil borne because bacteria have a hard time surviving in the soil. Also, they need a wound or natural opening to get inside your plants. That being said, the following soil borne diseases can occur in your garden:
Soil borne viral diseases are rare. In most cases, they are transmitted by nematodes and certain fungi. Soil borne viral diseases include lettuce necrotic stunt and wheat mosaic, which causes stunting and mosaics in wheat, barley, and rye.
How to prevent soil borne disease
In nature, plant diseases rarely get out of hand. Soil pathogens are usually kept in check by other organisms and plants’ defense mechanisms. However, as we select plants, spray chemicals, and stir up the soil, we interrupt those protections. The main cause of soil borne diseases taking hold is an imbalance in soil populations. Reduced biodiversity gives pathogens the upper hand.
One way to reintroduce that biodiversity is by top dressing with aged compost. Research has shown that top dressing with aged compost is very effective at suppressing soil borne diseases in greenhouses, though less so in the field. In both situations, the more compost was added, the more effective it was. Interestingly enough, if the compost was sterilized beforehand, it was less effective. I think we can assume the effect is at least partially biological.
As with most diseases, three factors must be present for a problem to occur: the host plant, the pathogen, and the right environmental conditions. This is called the disease triangle. Remove any one of the three and the disease is prevented or controlled. Crop rotation is an excellent way to break this disease triangle. Your rotation schedule will vary depending on the disease.
While you can, in some cases, apply treatments directed toward specific pathogens, they don’t always work. Most of these treatments consist of other microorganisms that prey on the pathogens. These only work if your soil already has everything the introduced microorganisms need. Funny thing is, if all those things were already there, most of those predators would be there, too. Biodiversity is your friend. In fact, mycorrhizal fungi (good guys) often create protective mats around plant roots that contain antibiotics and pathogen-fighting toxins, all while helping your plants absorb nutrients.
Use these tips to prevent soil borne diseases in your garden:
Finally, as tempting as they may be, chemical treatments are rarely a good choice for backyard gardeners. Pathogens are developing resistance to these treatments which means stronger chemicals must be used. Whenever possible, use some other method of controlling soil borne diseases.
For every acre of garden, there is the equivalent of two mature cows, by weight, of soil bacteria living there. Ponder that a moment.
Your average cow weighs about one ton. Two cows weigh about the same as a car. That’s a lot of soil bacteria! For a different view, you could fit 15 trillion bacteria into a single tablespoon, if nothing else was there.
What are all those one-celled creatures doing in your soil?
Truth be told, much of your garden soil is made up of dead bacteria. Affectionately known as ‘bio bags of fertilizer’, soil bacteria are important players in nutrient cycling and decomposition. While still alive, their excretions improve soil structure by binding particles together into aggregates. This improved soil structure results in better water infiltration rates and it increases your soil’s water holding capacity. As bacteria breathe, they release carbon dioxide into the soil. Plants love carbon dioxide.
Soil bacteria are most commonly found in the film of water that coats soil particles. Bacteria can’t move very far on their own. They generally move with water, though they sometimes hitchhike on passing worms, spiders, and insects. This is called phoresy.
Under ideal conditions, soil bacteria can double their population every 15-30 minutes. Even doubling just once an hour, a single bacterium could leave roughly 16 million descendants in 24 hours. Conditions are rarely ideal, so bacteria reproduce as much as they can, whenever they can.
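If you want to see how sensitive that figure is to the doubling time, here is a minimal Python sketch of simple exponential doubling. The function is my own illustration, not something from the original article.

```python
def population_after(hours, doubling_minutes, start=1):
    """Population after `hours`, assuming one doubling every `doubling_minutes`."""
    doublings = hours * 60 / doubling_minutes
    return start * 2 ** doublings

# Compare a few idealized doubling times over a 24-hour day
for minutes in (60, 30, 15):
    print(f"doubling every {minutes} min: {population_after(24, minutes):,.0f} bacteria after 24 hours")
```

Doubling once an hour gives the roughly 16 million figure (2^24 is about 16.8 million); the shorter doubling times quickly produce absurd numbers, which is one more reminder that ideal conditions never last long in real soil.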
There are four basic groups of soil bacteria: decomposers, mutualists, lithotrophs, and pathogens. Most soil bacteria are beneficial. Pathogens are the troublemakers.
The majority of soil bacteria are decomposers that break down plant and animal debris into simple compounds which plants and other living things then use as food. This makes soil bacteria an important part of the soil food web. Some decomposers can break down pesticides and pollutants. Decomposers also store a lot of nutrients in their bodies. When they die, those nutrients become available to your tomato plants. [Soil bacteria are 10-30% nitrogen.]
Mutualists have working arrangements with plants that benefit both sides of the equation. The most commonly known mutualists are the rhizobia bacteria which convert atmospheric nitrogen into a form useable by plants. Very often, these mutualists live on or in the roots of legumes, such as peas and beans. Other mutualistic soil bacteria are able to convert atmospheric nitrogen without the help of plants, but the plants still benefit.
You don’t hear much about lithotrophs, but this group is unique in that they don’t eat carbon compounds, the way other bacteria do. Instead, they manufacture their own carbohydrates, without photosynthesis, and feed on chemicals, such as hydrogen, iron, nitrogen, and sulfur. This group is also known as chemoautotrophs. These soil bacteria help break down pollutants and are an important part of nutrient cycling.
Pathogens are the disease-causing bacteria. They can cause fireblight, bacterial wilts, cankers, galls, and soft rot. The beneficial soil bacteria are always at war with these germs, competing for food, space, air, and moisture.
Killing bacteria is difficult. Most often, if conditions become difficult, a bacterium will simply enter a dormant stage. This is why many Quick Fix treatments don’t work. They don’t kill the bacteria, they just send them on a temporary hiatus. There are some soil bacteria (Streptomycetes) that actively protect plants from bad bacteria.
Why do soil bacteria matter to gardeners?
Most soil bacteria are valuable members of your team. They provide a huge benefit to your soil and plants. And you need to know what the bad bacteria look like when they start to set up housekeeping. The earlier you break those disease triangles, the faster you can return to harvesting your delicious crops.
Most bacteria are aerobic, which means they need oxygen. This is why turning your compost pile makes everything decompose faster. You are providing the decomposer bacteria with the air they need. If you don’t, the anaerobic (non-air-breathing) bacteria take over. Those are the ones associated with rot and putrefaction. [Ew!]
Did you know that soil bacteria will consume more water than they can hold, causing their bodies to burst? Yet another argument against over-watering...
Sand slips through your fingers. Clay clods defy your shovel. And somewhere in-between is the sweet spot with bits and pieces of soil just the right size for plant roots. Whatever the size, these chunks are called soil aggregates.
To learn about soil aggregates, you will need a scoop of dry soil from your garden. Put the soil in a bowl. Are there a lot of different sized pieces or are they mostly the same? If you look closely at the photo below, you can see a clear line between the old clay layer and all the decomposing mulch and compost that I have been putting on top. Over time, those organic materials work their way down, into the clay, reducing compaction and improving drainage. These improvements will occur because of soil aggregates.
Take another look at your soil. Stir it around a bit. Pick some of it up and roll it around in your hand. Rub it with your fingers. Does it feel gritty? Or powdery? Do the clumps mostly hold together? Do they crumble completely or do they feel like rocks? Soil aggregates, also known as ‘peds’, are the clumps that tend to stay together when you work the soil.
Why do soil aggregates matter?
Healthy soil has a variety of aggregate sizes, with plenty of large spaces (macropores) between the aggregates and tiny spaces (micropores) inside the aggregates. These spaces are used by roots and gases to move through the soil. These spaces are also what allow water to soak in, increasing your soil’s water holding capacity. And plant nutrients stick to these clumps.
In some cases, aggregates are not as important. Sand, for example, has no aggregates, but there are so many spaces between grains of sand that plant roots, water, and gases have no trouble moving around. [Hanging on to water and nutrients is something else altogether!] Soils with low bulk density are another case where aggregates don’t matter as much. For the rest of us, the soil aggregates in our gardens have a huge impact on plant health, especially tender seedlings. If your soil’s aggregates are unstable, seedlings can suffocate.
Aggregates are described according to their stability. If your soil crumbles into dust, you probably have a lot of clay or silt and that can mean your soil has low aggregate stability. Low aggregate stability increases problems with erosion, gas exchanges, root development, and permeability. More immediately, as rain, irrigation, or sprinkler water strikes the soil surface, flimsy aggregates can be broken. Those tiny broken bits clog the spaces in the soil, making life difficult for plant roots, worms, and soil microorganisms. It also causes crusting which can kill seedlings before they get a chance to grow.
How do soil aggregates form?
Healthy soil aggregates are held together by clay, organic matter, and glomalin. Glomalin is a protective fungal excretion that helps the fungi feed your plants and binds soil aggregates together. Bacteria have similar excretions which are not as effective.
Organic materials in the soil usually mean decomposition is taking place. Decomposition means fungi, worms, bacteria, and microorganisms are present. Those life forms excrete coatings and other materials that help soil aggregates form and stabilize. Finally, as clay particles become moist, they act as a cement, holding molecules and particles together into aggregates.
Test your soil for aggregates
Returning to your soil sample, select a few particularly large clods and gently set them aside to dry completely. Once they are really dry, dip them into a glass of water. If they break up quickly, it means your soil has low aggregate stability. If the clods retain their shape for 30 minutes or more, your soil’s stability is high. Because my soil contains so much clay, it pretty much dissolves immediately. As more organic material is incorporated, my soil will breathe better, hold its shape better, and provide plenty of pores for roots, water, and microorganisms.
How can you improve the aggregates in your soil?
Start by taking a look at your soil test. If your soil contains a lot of calcium or iron, it probably already has good aggregates. If your soil holds too much salt, aggregates are harder to come by. The biggest indicator of good soil aggregates is the amount of organic matter found in the soil. By mulching and top dressing your soil with manure and aged compost, you are encouraging all the life forms that help soil build healthy aggregates. This is why no-dig gardening has become so important. We learned that excessive digging, plowing, and rototilling disrupt the soil dwelling populations that create and maintain good soil aggregates.
If your soil aggregates are unsatisfactory, use these tips to encourage better soil structure in your garden and landscape:
How did your dip test turn out? Let us know in the Comments!
Why is beach sand mostly white and tan while rich farmland is practically black? What does soil color tell you about your soil?
Soil occurs in many different colors. Iron-rich soil tends to be reddish orange or green, while peat can be practically purple!
Go outside and collect a handful of your soil and put it in a clear container. Shake it around a little bit. Is it wet or dry? What color is it? Brown? Maybe. But I’ll bet it’s not that simple.
What does soil color tell us?
Each layer of your garden soil has a unique color. The deeper you dig, the lighter those colors tend to get. Soil color tells us which minerals are present and the level of decaying organic material. It can also tell you how old a soil is, which processes are occurring, and about local water behavior.
We are not going to explore soil age or the chemical processes that take place in soil, but you can use soil color to make better decisions about irrigating and fertilizing your garden.
Soil moisture levels
We all know that soil looks darker when it is wet. But soil color can tell you how long the soil stays wet. Soil that does not drain well and stays wet for much of the year tends to be a dull yellow or grey. Wet soil contains less oxygen than dry soil. Oxygen causes some minerals to oxidize, or rust. Iron-rich soil that contains a lot of moisture most of the time will turn grey or greenish, while drier soils expose iron to more oxygen, turning the soil red or yellow.
Soils that stay wet often have more complex color patterns, while arid soils are more uniform. If your soil colors are uniform, you know that the water table is lower and you will probably have to irrigate more often. If your soil is reddish, you will probably never need to amend with iron. Remember, the minerals found in soil are plant food.
Minerals make a difference
Other minerals in your soil can also affect its color. Knowing what these colors mean can help you select the best soil amendments and irrigation schedules.
What color is your soil?
Take a closer look at your soil sample. What do you see? Is it yellowish-brown or dark brown? Or something else entirely? When we first moved to our San Jose, California house, the soil was a pale, tan color and as hard as concrete.
For many of us, identifying a specific color can be tricky. Brown is brown, right? But soil can be all sorts of shades of brown, along with a bunch of other colors. To help you get really specific information about the color of your soil, you may want to go to the library and check out their copy of the Munsell soil color book.
Munsell’s color book
Soil color is so important that a system of soil color classification has been developed. This classification method is called the Munsell soil color system. A Munsell book is the gardener’s equivalent of a paint chip book, containing 199 color chips. Its pages are heavy card stock and they are organized by color. Underneath each color chip is a hole in the card stock that lets you hold a soil sample behind it for comparison. The opposite page tells you the universally accepted name for that color. This is a coding system used around the world by soil scientists, farmers, and gardeners like you!
You artists out there know a lot more about this than I do, but let me give it a shot. According to my Munsell book, colors are described using hue, value, and chroma. Hue is the wavelength we see as color. Munsell’s book gives codes for red (R), yellow (Y), green (G), blue (B), and purple (P). Those wavelengths are measured in gradations of purity: 2.5, 5, 7.5, and 10. A pure hue is rated at 5. Numbers above 5 indicate the presence of other hues. Value indicates lightness or darkness. A value of zero indicates pure black, while 10 is white. Finally, chroma refers to a color’s strength or intensity, ranging from greyed out (/0) to full intensity (/14).
A Munsell soil color rating is written with the hue letter first, followed by a space and then the number value, a forward slash (or virgule), and then the chroma number. Decimals can be used to provide greater clarity.
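To make the notation concrete, here is a small Python sketch that splits a Munsell soil color code such as 5YR 7/1 into its hue, value, and chroma parts. The function and its name are my own illustration; the actual color names still have to come from the printed Munsell charts.

```python
import re

def parse_munsell(code):
    """Split a Munsell soil color code (e.g. '5YR 7/1') into hue, value, and chroma."""
    match = re.match(r"^\s*([\d.]+)\s*([A-Z]+)\s+([\d.]+)\s*/\s*([\d.]+)\s*$", code)
    if not match:
        raise ValueError(f"Not a recognizable Munsell code: {code!r}")
    hue_step, hue_letter, value, chroma = match.groups()
    return {
        "hue": f"{hue_step}{hue_letter}",  # wavelength family, e.g. 5YR
        "value": float(value),             # 0 = pure black, 10 = white
        "chroma": float(chroma),           # /0 = greyed out, /14 = full intensity
    }

print(parse_munsell("5YR 7/1"))    # {'hue': '5YR', 'value': 7.0, 'chroma': 1.0}
print(parse_munsell("2.5YR 3/0"))  # {'hue': '2.5YR', 'value': 3.0, 'chroma': 0.0}
```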
Looking at a photo of my soil when we bought our place in 2012, I see that the color most closely matches 5YR 7/1. According to Munsell, that soil would be called 'light grey'. As noted earlier, this indicates high calcium carbonate, gypsum, magnesium, sand, and/or salt levels. It can also indicate too much moisture. Funny thing, the previous owner loved to apply fertilizer and overwater the property. According to my 2015 soil test, soil organic matter was at 3.5% and all the nutrients, except iron, were through the roof! Iron was extremely low.
Seven and a half years later, after adding lots of mulch and compost, a little nitrogen, appropriate watering and nothing else, my soil has been transformed to 2.5YR 3/0, with 7.6% organic matter and nutrient levels (slowly) dropping to where they should be. [These changes never happen overnight. When they do, beware! Something is very wrong.]
The new color is 'very dark grey' which goes along with all the chicken bedding, wood chips, and other organic materials I've been adding. And my iron levels are still way too low, which is why the chroma numbers have stayed low.
So, take another look at your soil sample. Does it tell you more than it did? If you live nearby, feel free to bring a soil sample by so we can take a look in my Munsell book together. If not, head to the library.
Did you know that carpet manufacturers use the Munsell soil color system to match local soil colors with carpet dyes so that their carpets will look cleaner longer?
Now you know.
When your house was built, the soil was significantly altered. Construction soil can be severely compacted and rocky. This problem persists for many years, long after the bulldozers have moved on.
What can you do to transform construction soil into friable garden soil?
What is construction soil?
When a house is built, no one wants it to fall down. Around 500 B.C., Pythagoras worked out the geometry of the right angle that builders still rely on to keep walls square and standing. Well, the soil under those walls is equally important for building stability.
Building sites are scraped flat, removing much of the nutrient-rich topsoil, and then mechanically compacted. This is great for your house and terrible for the soil. And if the local soil isn’t stable enough for building, nutrient-poor fill dirt is brought in, mixed in and compacted, until builders have the surface they need. [You can learn more about different types of construction soil at Barclay Earth Depot.] After construction is complete, sod is installed, a few trees and shrubs popped into place, and a cosmetic planting of annual flowers makes everything look lovely. But that appearance can be misleading.
The soil under new construction is reeling in shock. Heavy equipment, trucks, materials, and foot traffic have been crushing the soil, plant roots, microorganisms, insects, and worms for weeks or months of building. Simply adding an attractive top dressing of plants does not correct the problems.
What can you do about construction soil?
Of course, over time, most plants and lawns manage to push roots into the soil and grow. But they could be far healthier and easier to care for if the construction soil they are trying to grow in was transformed into something loose, nutrient-rich, and populated with helpful microorganisms.
You can make that happen with these tips:
If you do not currently compost kitchen and yard waste, you can easily start a compost pile wherever your least healthy soil is. Simply drop equal parts brown and green materials into a pile, water it and flip it every few days, and within a few weeks (depending on the season) you will have a nice batch of aged compost and that spot will be super-charged with nutrients, microorganisms, worms, and other soil beneficials. If you have a few chickens, adding their bedding and manure to the pile makes it even better!
Finally, get your soil tested by a local lab. Over-the-counter kits are not accurate enough to be useful. Inexpensive lab-based soil tests tell you which nutrients are needed, which are present in excess, and if you have lead-contaminated soil.
Even if you have lived in your home for decades, the effects of construction soil may still be present. Creating healthy soil means that your plants will be better able to defend themselves against pests and disease, along with frost and drought damage. In other words, healthy soil gives you more time to relax!
The government might know more about your soil than you do. Did you know you can access the USDA’s soil map of your property? You can and I am going to show you how.
What are soil maps?
Soil maps, also known as soil surveys, are used by architects and engineers to determine a soil’s ability to support roads or structures. Farmers use soil surveys to help them decide the best use for their land and you can, too. Your soil map can help you with plant selection, irrigation, and other gardening decisions.
Soil maps are the combined information collected by various government agencies on different types of soil. Soil surveys used to be printed in book form by every county. We don’t do that anymore. [Thank goodness!] Now, all the information is found online.
How to access a soil survey
All of the information the U.S. government has about your soil is available at the USDA’s Web Soil Survey page. Because this page isn’t exactly intuitive to use, we will work through it together. Once you open the webpage, click on the green Start WSS button to begin.
Once you are in, you will see five tabs. Those tabs are:
You will automatically be on the Area of Interest page. This is where we will begin.
Area of Interest
Before you can access any useful information, you have to set an area of interest (AOI). To do that, follow these steps:
*If your AOI is too small, you will get a warning. If this happens, make sure you are on the AOI tab, under Area of Interest and AOI Properties, and click the Clear AOI button and start again at step #7, using a larger area.
Your map will have orange lines and reddish-orange numbers and letters marking various soil series, which will be listed on the left. You can click on the soil type links for a surprising amount of information, including:
If you need help, as I did, with some of the terminology, try the USDA Soil Glossary. Now we get into the nitty gritty information. Click on the Soil Data Explorer tab.
Soil Data Explorer
This tab has sub-tabs you can investigate. Under Intro to Soils, you can get the equivalent of a college education on soil, free for the reading. The next sub-tab, Suitabilities and Limitations for Use, returns you to your map with a ton of informational categories on the left. While you probably don’t care about Building Site Development, you still might find it interesting reading. If you are short on time, go straight to the Land Management heading and click on the double arrows to expand that category. [Be sure to check out some of the other headings, as well.]
A list of several sub-categories will open up and you can expand any of them. For each of these sub-categories, you can click on View Description or View Rating. In many cases, you may see “Not rated”. I have to assume that means it is either not relevant, or that it has not been considered worth the investment.
Speaking of investments, were you surprised to learn that our tax dollars are spent on this sort of information collection?
Download Soils Data
The next tab is labelled Download Soils Data. While you can certainly try using it, I had no luck. Apparently, I do not have the proper software to open the downloadable files.
Shopping Cart (Free)
This tab allows you to download a 30-page or so document with all the general information about your soil, if you want it. Personally, I find just playing around on the website gives me more of the information I can use in my garden than the report. If you want your report, click on the Check Out button and then decide if you want it now or later, and click OK.
Since most of this information is collected for farmers, builders, emergency response, and military use, it can be far more than you need in the home garden. But it sure makes for some interesting reading!
Your garden can be a bright, cheery, busy place, or it can provide the tranquility, rest, and refuge you need after a stressful day.
Just as color schemes, lighting, furniture placement, and window treatments all play significant roles in creating an interior refuge, the view outside those windows has a double impact. Whether you are looking at your garden through a window or as you walk through it, a chaotic garden can be anything but tranquil.
These tips will help you transform your garden into a tranquil refuge, with minimal effort.
1. Keep it simple
Clutter in the garden only adds to your already busy schedule by reminding you of jobs that need doing. Clearing away yard debris is more conducive to rest and relaxation. If you have plants that are not thriving, or that do not add joy to the space, get rid of them. The same is true of outdoor furniture. If it is junk, throw it away. If it is still useable, donate it to charity or hold a yard sale. Use decorations that are simple and pleasant. Leave fences and lawns clean and empty of distractions.
2. Live in the color of calm
Reds and yellows are great colors, but they will not help you to relax. Far more calming are blues and earth tones. Design your landscape around blue flowers and various shades of greens and browns, for a beautifully relaxing view.
3. Just add water
The sound of running water has a profound effect on mood. Fountains, waterfalls, even the sound of a bird bathing in a birdbath are sounds that bring us back to nature in the way a campfire does, but without the mess or risks. Water features like these also provide for equally stressed birds, reptiles, and amphibians. Taking the time to watch these creatures in your garden or landscape is sure to improve your mood.
7. Gentle sounds
The sounds of traffic, machinery, and neighbors can destroy the tranquility of your refuge in a matter of moments. You can reduce the impact of these intrusions by planting trees and shrubs around your property line to block the noise. Good fencing can also block sound while adding privacy and security.
8. Create an herb garden
Edible herbs require only minimal care and most of them are perennial plants that come back every year. Besides adding flavor to your meals, fresh herbs such as thyme, oregano, and rosemary, add color and fragrance to your garden, while helping deter many common pests.
9. Delightful lighting
The way you illuminate your yard can impact the way you feel. Garish lamps and brilliant spotlights will not help you relax. Instead, design for relaxation in the garden by using gentle, soft-colored solar lights along paths, around garden beds, and in seating areas.
10. Make the time to enjoy it
One of the biggest problems faced today is our unwillingness to simply make time to relax. Busy schedules, television and social media, obligations to family and friends whittle away at our spare moments until there aren’t any. Schedule some time for yourself to enjoy your garden, without chores and to-do lists.
Simply stop and smell those roses. You’ve earned it.
Silt refers to minerals larger than clay and smaller than sand. Silt is commonly moved by water and deposited as sediment. Silt is what makes the alluvial soil surrounding rivers so fertile. Silt is also fine enough to be carried surprisingly long distances on the wind as dust.
How silt is formed
As rocks and regolith are eroded by weather, frost, and other processes, larger particles are ground down into smaller, rounded bits. Those smaller pieces become silt. Silt typically measures 0.002-0.05 mm and is usually composed of quartz and feldspar. Because silt moves so easily in water, construction and clear-cutting often result in silt levels that pollute waterways. This type of pollution is called siltation. In home gardens, over-watering can cause similar leaching problems and urban drool. But silt is good for your plants.
Silt in garden soil
Sandy garden soil loses water and nutrients too quickly, while clay soil holds on too tightly. Loamy soil, in the middle, is ideal for garden plants. Loam consists of 40% sand, 20% clay, and 40% silt.
Silt particles tend to be round, so they can retain a lot of water. This high water holding capacity is made even better because silt particles cannot hold on to the water very tightly. The same is true for mineral nutrients. Roots and microorganisms have an easy time pulling water and food from silty soil. Silt can be beige to black, depending on how much organic material it contains.
Silt is prone to compaction, but not nearly as much as clay. If your soil feels slippery when it is wet, it contains a lot of silt.
Loam is the best type of soil for your garden or landscape. It feels loose and crumbly in your hand and its dark color tells you that it is filled with nutrients for your plants.
On average, loam consists of 40% sand, 40% silt, and 20% clay. Those numbers can vary, but you get the general idea. This mix of different sized materials provides plenty of spaces for roots, water, air, and garden tools to move through. Loam is easy to till and it contains more accessible nutrients, water, and organic material than sand or clay. Loam is just porous enough, providing excellent drainage and water infiltration rates. Loam is the stuff of gardeners’ dreams. And you can make it happen in your garden.
Start where you are
You can’t get somewhere else if you don’t know where you are. Start improving your soil by learning more about what it contains now. This is done with a lab-based soil test and a DIY ribbon test. I wish I could say that those colorful plastic over-the-counter soil tests worked, but they don’t. Not well enough, anyway. Send a sample to a local lab. It’s one of the best things you can do for your garden.
Next, you can conduct a ribbon test to see how much loam is already present. This is a simple test that costs nothing.
It is a good idea to use several different samples to get a more accurate feel for your soil. [Sorry, I couldn’t resist!] If you cannot form a ribbon, your soil is predominantly sand. If your ribbon is longer than 1.5”, you have clay. If your ribbon is less than 1.5” long, you have loam.
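If it helps to see that rule of thumb written out explicitly, here is a minimal Python sketch of the same decision. The cutoffs come straight from the paragraph above; the function itself is just my illustration, not a replacement for a lab-based soil test.

```python
def ribbon_test(ribbon_length_inches, formed_ribbon=True):
    """Rough soil texture call from a hand ribbon test (simplified rule of thumb)."""
    if not formed_ribbon:
        return "predominantly sand"     # soil would not hold a ribbon at all
    if ribbon_length_inches > 1.5:
        return "clay"                   # long, plastic ribbon
    return "loam"                       # short ribbon, in between

print(ribbon_test(0, formed_ribbon=False))  # predominantly sand
print(ribbon_test(2.5))                      # clay
print(ribbon_test(1.0))                      # loam
```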
In the soil textural triangle, you will see several different types of loam. Each type is described according to the presence of more sand, silt, or clay. Those types are called clay loam, sandy clay loam, sandy loam, silty clay loam, silty loam, and simply loam.
How to build loam
Your soil is the result of ancient bedrock beneath your feet, weather conditions, and hundreds of other variables you have no control over. Changing soil texture does not happen overnight, but there are several steps you can take to improve your soil’s texture. And there is one thing you should never do: if you have clay soil, do not add sand. You will simply create a bigger problem that looks and feels a lot like concrete. Instead, use these tips to improve your soil:
Each of these actions increases the amount of organic material in your soil. Organic material (living things and dead things) is what makes your soil nutrient-rich and friable. These steps need to be done on a regular basis. The wood chips, compost, and other materials will eventually break down and become nutrients that are attached to soil particles and absorbed by your plants.
Every 3 to 5 years, send out soil samples for testing, to see just how well your soil is improving!
Victory gardens were planted during WWI and WWII to reduce demand on the public food supply during wartime. Today, we are fighting against physical inactivity, environmental harm, and tasteless fruits and vegetables. Growing a victory garden in your yard can create a win-win-win situation.
What are we fighting for?
Historically, victory gardens were encouraged to make up for the fact that many farm and agricultural workers were off fighting the war. Today’s battles are more insidious but no less important. And they are found on several fronts:
With victory gardens, we can transform our ornamental landscapes into delicious, productive foodscapes that improve air and water quality, the foods we eat, and even the way we feel.
Get moving with gardening
Working the soil and being outside are two of the best ways to improve your health and mood. There are even soil microorganisms (Mycobacterium vaccae) that act as antidepressants, without all the chemical dependency and side effects of drugs (or driving to doctor’s appointments).
Gardening is a gentle activity that won’t damage joints, pull muscles, or wear you out. It will get you moving the way your body was meant to move. Reaching, pulling, lifting, and carrying plants and soil in your victory garden will help you be healthier without straining anything.
Environmental protection begins at home
Clouds of chemicals, extensive paved roads, islands of trash, and toxins in our waterways are not good for anyone’s health. The more we learn, the more we realize that we can use evolution to our advantage in the garden, protecting both ourselves and the environment. Beneficial insects, appropriate plant selection, and no-dig gardening all work to reduce our carbon footprint while providing us with fresher, better tasting fruits and vegetables. Growing food at home also reduces the amount of plastic and other garbage that has to end up somewhere.
The home front
When you grow even a small portion of your food, you are reducing the negative impacts of massive monoculture, global shipping, and long-term food storage. I appreciate those services for foods I cannot grow at home and for the billions of people who need to be fed. But, the truth is, I can grow food at home and so can you. Even if it is just a few plants, it makes a difference for you and the planet.
Victory garden plant list
Victory gardens are planted with foods you eat regularly and that will grow in your yard. There’s no point in planting something that won’t grow where you are. Case in point: I love blueberries. I live in California. Blueberries hate alkaline soil and hot summers. I have both. To grow blueberries, I have to work very hard and it is a constant battle. For me, blueberries are not a good choice for a victory garden. [But I do it anyway.]
To design your victory garden, start by identifying your Hardiness Zone and getting your soil tested. An inexpensive soil test will tell you what is in your soil and what needs to be added (and avoided). Then look at your grocery list. From there, make a list of edibles that will grow in your yard. You can find lots of information online and through your local Master Gardeners and County Extension Office. You may not be able to grow all your groceries where you live, but I’ll bet you can grow a surprising amount of food in your yard, wherever you are!
Popular victory garden plants include:
Interspersing your vegetable crops with flowers, such as marigold, will make it look even nicer and improve pollination. And don't forget fruit and nut trees. They can produce an astounding amount of food.
I just registered my garden as a Climate Victory Garden. Check them out!
Other players on the winning team
Plants are not the only things that can help you be more active, improve your food supply, and work to protect the environment. Geese will keep your lawn mowed perfectly and guard your house, though they are messy. Chickens can produce both eggs and compost. Raising bees can provide you with honey while improving pollination. And raising worms makes composting even more effective and efficient.
Just as wartime victory gardens made civilians part of the war effort, your modern victory garden can make you part of the solution for environmental protection, better tasting food, and your own good health. And the plants do most of the work! And if you don't have space, see if there is a community garden nearby.
What are you going to plant in your victory garden?
You can grow a surprising amount of food in your own yard. Ask me how!
To help The Daily Garden grow, you may see affiliate ads sprouting up in various places. These are not weeds. Pluck one of these offers and, at no extra cost to you, I get a small commission. As an Amazon Associate I earn from these qualifying purchases. You can also get my book, Stop Wasting Your Yard!
HS What is Earth Science?
This chapter covers the scientific method and the various branches of Earth science.
HS Studying Earth’s Surface
This chapter informs students about different types of landforms on Earth, map projections, and the use of computers and satellites to study and understand Earth's surface.
HS Earth's Minerals
This chapter covers types of minerals and how they form, how to identify minerals using their physical properties, as well as the various ways minerals form and are used as resources.
HS Rocks
This chapter discusses the rock cycle and each of the three major types of rocks that form on Earth. Separate sections cover igneous, sedimentary, and metamorphic rocks individually.
HS Earth's Energy
This chapter discusses available nonrenewable and renewable resources, including resources such as fossil fuels, nuclear energy and solar, wind and water power.
HS Plate Tectonics
This chapter covers properties of Earth's interior, continental drift, seafloor spreading, theories of plate tectonic movement, plate boundaries, and landforms.
HS Earthquakes
The chapter begins with discussion of stresses on rocks and mountain building. Discussion of the causes of earthquakes, seismic waves, tsunami, and ways to predict earthquakes is followed by information about staying safe during an earthquake.
HS Volcanoes
The chapter considers how and where volcanoes form, types of magma and consequent types of eruptions, as well as different volcanic landforms associated with each.
HS Weathering and Formation of Soil
This chapter begins with a discussion of mechanical and chemical weathering of rock. These concepts are applied to the formation of soil, soil horizons, and different climates related soils.
HS Erosion and Deposition
Erosion and Deposition considers how these processes shape Earth's surface through the actions of rivers, streams, ground water, wind, waves, glaciers, and gravity.
HS Evidence About Earth's Past
This chapter discusses different modes of fossilization, correlation of rock units using relative age dating methods, and absolute age dating of rocks.
HS Earth's History
This chapter discusses the geologic time scale, the development of Earth from early in its history through today, and the evolution of life on Earth.
HS Earth's Fresh Water
This chapter considers the water cycle and develops student understanding of our lakes, rivers, streams, and ground water.
HS Earth's Oceans
This chapter discusses how Earth's oceans were formed, the composition of ocean water, as well as the action of waves and tides. Discussion continues with descriptions of the various areas of the seafloor and types of ocean life.
HS Earth's Atmosphere
This chapter covers the properties, significance, and layers of the Earth’s atmosphere, how energy transfers within our atmosphere, and movement of air throughout our atmosphere and over the Earth’s surface.
HS Weather
This chapter considers various factors of weather, cloud types, movement of air masses, and the development of different types of storms. Discussion concludes with weather forecasting using various tools, maps, and models.
HS Climate
The chapter considers how the different factors of Earth's dynamic surface affect climate, the different climates found worldwide, and the causes and impacts of climate change.
HS Ecosystems and Human Populations
This chapter considers the role of ecosystems, including how matter and energy flow through ecosystems, the carbon cycle and how our human population growth affects our global ecosystem.
HS Human Actions and the Land
This chapter discusses causes and prevention of soil erosion, as well as ways that humans have polluted our land surface with hazardous materials, and what can be done to prevent this type of pollution.
HS Human Actions and Earth's Resources
This chapter considers human use of renewable and nonrenewable resources, availability and conservation of resources, as well as ways to conserve energy and use energy more efficiently.
HS Human Actions and Earth's Waters
This chapter discusses many ways that water is used, the distribution of water on Earth, as well as types and sources of water pollution, and ways to protect our precious water supplies.
HS Human Actions and the Atmosphere
This chapter discusses types of air pollution and their causes, the effects of air pollution, and ways to reduce air pollution.
HS Observing and Exploring Space
This chapter begins with a discussion of electromagnetic radiation, types of telescopes, and ways of gathering information from our universe. Information about recent and current space exploration concludes this chapter.
HS Earth, Moon, and Sun
This chapter discusses the basic properties and motions of the Earth, Moon, and Sun, including information about tides and eclipses. Discussion includes information about the Sun's layers and solar activity.
HS The Solar System
This chapter covers the motion of the planets, formation of our solar system, plus characteristics and properties of the inner and outer planets, as well as dwarf planets, meteors, asteroids, and comets.
HS Stars, Galaxies, and the Universe
This chapter covers constellations, how stars produce light and energy, classification of stars, and stellar evolution. Groupings of stars, types of galaxies, dark matter, dark energy, and the Big Bang Theory are also included.
HS Earth Science Glossary
Earth Science - High school book glossary.
Little sibling Mercury may only be about one-third the size of Earth, but its story is fascinating, and as yet, untold. NASA's MESSENGER spacecraft has passed by the planet as it maneuvers to enter Mercury orbit in 2011, revealing new surprises with each pass. Mercury is the closest planet to our Sun, orbiting it in just 88 days. Because it is so close to the Sun, its surface temperatures are extreme, ranging from 840ºF (450ºC) on the sunward side to –275ºF (–170ºC) at night. Its cratered surface records a long history of bombardment by asteroids and other impactors, and new observations from MESSENGER suggest that volcanic activity persisted far longer than previously thought possible.
Mercury, as viewed by the MESSENGER spacecraft during its flyby of the planet. Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Carnegie Institution of Washington.
Despite continual bombardment by the solar wind, Mercury manages to hold onto a very thin atmosphere of scattered atoms captured from the solar wind and released from the planet itself. Mercury’s liquid core produces a magnetic field for the tiny inner planet.
Venus is Earth’s twin in size, but its clouds shroud a darker personality. Its thick atmosphere is composed of carbon dioxide and traces of water and sulfuric acid. This atmosphere — about 90 times the pressure of Earth's atmosphere — creates an intense greenhouse effect; heat is trapped in the atmosphere. Surface temperatures on Venus range from 377ºC to 487ºC (710º to 908ºF) — even hotter than Mercury! Venus has many volcanos, some of which may still be active. Its rotation is very slow. Venus turns once on its axis every 243 Earth days and it spins backward relative to the other planets. The time it takes to rotate is actually longer than the time it takes to orbit the Sun.
Venus, as viewed by the Galileo spacecraft. Credit: NASA/Calvin J. Hamilton.
Mars takes after Earth in many ways. It is only about half the size of Earth, but its similar geology, thin atmosphere, and the presence of water make it seem more like home. Its day is almost as long as Earth's, but it takes about two Earth years to orbit the Sun. Mars is tilted on its axis, so it experiences seasons.
Hubble Space Telescope image of Mars as the planet made its closest approach to Earth in August 2003. Credit: NASA.
Mars has the tallest volcano in our solar system — about 22 kilometers tall (almost 14 miles high). [Compare this to Hawaii's Mauna Loa at 9 kilometers (5.5 miles) tall measured from the sea floor.] Some of the volcanos on Mars have been recently active. However, its surface temperatures are cold — –125ºF to 23ºF (–87ºC to –5ºC) — and the planet is very dry. The atmosphere is thin and composed mostly of carbon dioxide. There is no liquid water present at the surface, but robotic explorers have discovered frozen water in the subsurface and in its polar ice caps, which are comprised of frozen carbon dioxide and water ice. There is evidence that Mars had flowing water and oceans at its surface during its early history, perhaps until about three and a half billion years ago.
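The temperatures above are quoted in both Fahrenheit and Celsius. As a quick sanity check, here is a short Python sketch (not part of the original passage) that converts the Celsius figures using F = C × 9/5 + 32; small differences from the quoted Fahrenheit values are just rounding.

```python
def c_to_f(celsius):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

# Celsius figures quoted in the passage above
readings = {
    "Mercury, sunward side": 450,
    "Mercury, night side": -170,
    "Venus, low": 377,
    "Venus, high": 487,
    "Mars, low": -87,
    "Mars, high": -5,
}

for label, celsius in readings.items():
    print(f"{label}: {celsius}°C ≈ {c_to_f(celsius):.0f}°F")
```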
Capitalization Teacher Resources
Find Capitalization educational ideas and activities
In this capital letters worksheet, students circle letters in sentences that should be capitalized and write their own sentences. Students complete 3 tasks.
For this capital letters worksheet, students write the capital letters and lowercase letters of the alphabet in the boxes provided and capitalize the first word correctly in three sentences.
If you're looking for a worksheet that will test your pupils' skills in capitalization, you may have just found it. Learners are presented with a fairly long story, "Freddie's Birthday." None of the words are capitalized. They must go through and appropriately capitalize all of the words that should begin with a capital letter. A great assessment tool, or homework assignment.
An outstanding worksheet for young writers is here for you. In it, learners are coached on when it's necessary to capitalize letters when writing. There is a very good worksheet embedded in the plan to give them extra practice.
In this capital letters worksheet, 2nd graders rewrite 4 sentences while inserting capital letters. They practice writing the capital letters A through N.
In this capital letters worksheet, students read 4 sentences about a zebra named Roy. Students rewrite the sentences, starting each one with a capital letter.
In this capitalization and punctuation worksheet, students read 4 sentences. Students rewrite each sentence and put in the missing capital letters and periods.
Learners trace the capital letters on the page to practice their printing. They trace the entire alphabet five times. Fantastic practice for young writers.
Are you working on punctuation? Use this excellent worksheet about how to properly begin a sentence and end it with punctuation! Learners practice capitalizing letters when appropriate, as well as choosing the correct ending punctuation. A very good worksheet is included in the plan for extra practice.
Get your scholars ready to read their first story...with a little assistance, of course. Projecting the short story (included) for all readers to see, point to each word so they can sound it out together. The strategy here is not to have them shout out the word as soon as they know it; require that learners wait until you say, "what's the word?" before reciting it. Prepare kids for more difficult words. Once they finish, have them read through it again more quickly. Point out that the first word has a capital letter and the sentence ends in a period.
In this capital letters activity, 2nd graders master the usage of capitalization. Students study 8 questions, circle the sentences that are written correctly, and fix those that are not.
Sixth graders understand the usage of capital letters for mapping skills. In this capitalization lesson, 6th graders correct sentences to insert appropriate capital letters. Students select cities from the sentences and make a word puzzle.
In this ESL nouns/capital letters worksheet, students choose the correct spelling of given nouns, selecting the one capitalized correctly, then correct capitalization errors in a set of sentences.
Capital letters are the star of the show in a wonderful language arts lesson plan. After a teacher-led demonstration and discussion on capital letters, groups of pupils get together and work on the computer to fix the flashing letters that should be capitalized. There is an online activity embedded in the plan, a printable worksheet that can be used as a homework assignment, and some terrific extension activities as well. A great lesson plan for the young ones!
In this capital letter worksheet, students find mistakes in a set of 5 sentences, capitalizing when necessary; students capitalize beginning words in the sentence and proper nouns.
In this capital letters worksheet, students write five names of their friends in the first box. Students then write down 5 names of places they have visited. Students write the month they were born and the current day.
In this language arts activity, students sort all the letters of the alphabet into a pile. Then they select all of the capital letters and write the lowercase letters next to each. Students also fill in the missing letters of a chart.
In this recognizing capital letter rules worksheet, students read sentences, circle the letters that should have a capital, and write the corresponding rule. Students correct twelve sentences.
In these two worksheets on capitalizing the first words of sentences and proper nouns, students read sentences and rewrite them, adding capital letters where appropriate. Students rewrite 9 sentences.
When and how to use capital letters is the focus of this language arts presentation. The first slide gives the five most common rules for using capital letters. The rest of the slides give young readers lots of practice. Instant feedback is provided whether they get the answer right or wrong. |
Plane Geometry
An Adventure in Language and Logic
IT IS NOT POSSIBLE to prove every statement; we saw that in the Introduction. Nevertheless, we should prove as many statements as possible. Which is to say, the statements we do not prove should be as few as possible. They are called the First Principles. They fall into three categories: Definitions, Postulates, and Axioms or Common Notions. We will follow each with a brief commentary.
Definitions

1. An angle is the inclination to one another of two straight lines that meet.
2. The point at which two lines meet is called the vertex of the angle.
3. If a straight line that stands on another straight line makes the adjacent angles equal, then each of those angles is called a right angle, and the straight line that stands on the other is said to be perpendicular to it.
4. An acute angle is less than a right angle. An obtuse angle is greater than a right angle.
6. Rectilinear figures are figures bounded by straight lines. A triangle is bounded by three straight lines, a quadrilateral by four, and a polygon by more than four straight lines.
7. A square is a quadrilateral in which all the sides are equal, and all the angles are right angles.
8. A regular polygon has equal sides and equal angles.
9. An equilateral triangle has three equal sides. An isosceles triangle has two equal sides. A scalene triangle has three unequal sides.
10. The vertex angle of a triangle is the angle opposite the base.
11. The height of a triangle is the straight line drawn from the vertex perpendicular to the base.
12. A right triangle is a triangle that has a right angle.
13. Figures are congruent when, if one of them were placed on the other, they would exactly coincide. Congruent figures are thus equal to one another in all respects.
15. A parallelogram is a quadrilateral whose opposite sides are parallel.
16. A circle is a plane figure bounded by one line, called the circumference, such that all straight lines drawn from a certain point within the figure to the circumference are equal to one another.
17. And that point is called the center of the circle.
18. A diameter of a circle is a straight line through the center and terminating in both directions on the circumference. A straight line from the center to the circumference is called a radius; plural, radii.
Postulates

Grant the following:
1. To draw a straight line from any point to any point.
2. To extend a straight line for as far as we please in a straight line.
3. To draw a circle whose center is the extremity of any straight line, and whose radius is the straight line itself.
4. All right angles are equal to one another.
5. If a straight line that meets two straight lines makes the interior angles on the same side less than two right angles, then those two straight lines, if extended, will meet on that same side.
(That is, if angles 1 and 2 together are less than two right angles, then the straight lines AB, CD, if extended far enough, will meet on that same side; which is to say, AB, CD are not parallel.)
Axioms or Common Notions
1. Things equal to the same thing are equal to one another.
2. If equals are joined to equals, the wholes will be equal.
3. If equals are taken from equals, what remains will be equal.
4. Things that coincide with one another are equal to one another.
5. The whole is greater than the part.
6. Equal magnitudes have equal parts; equal halves, equal thirds, and so on.
Commentary on the Definitions
A definition clarifies the idea of what is being defined, and gives it a name. A definition has the character of a postulate, because we must agree to use the name in precisely that way. What has that name obviously exists as an idea, for we have understood the definition; or at least, we should. But we may not assume that it can be depicted in the physical world.
And so an "equilateral triangle" is defined. But the very first proposition presents the logical steps that allow us to construct a figure that represents and satisfies the definition. That is what we mean when we say that an equilateral triangle exists logically. This is geometry, after all. And the definition of an equilateral triangle correctly describes something we can actually witness and draw.
Again, a figure is an idea. Its boundary, a line, is the idea of length only. For the figure to exist logically, it must be more than an idea. We must be able to draw it. With any definition, we must either postulate that possibility (that is done in the case of a circle, Postulate 3), or we must prove it, as we do with an equilateral triangle.
By maintaining the logical separation of a definition and its physical representation, mathematics becomes a science in the same way that physics is a science. Physics must show that the things of which it speaks ("electrons," "protons," "neutrinos") actually exist. And physics does that by showing that it is possible to experience them. It was geometry that led the way. Geometry was the first science. By requiring that a definition does not assert its physical existence, mathematics avoids dealing in fantasies and the possibility of contradictions. For example, by a "hemigon" I mean a rectilineal figure that has half as many sides as angles. Do you understand? Good.
A definition is reversible. That means that when the conditions of the definition are satisfied, then we may use that word. And conversely, if we use that word, that implies those conditions have been satisfied. A definition is equivalent to an if and only if sentence.
Note that the definition of a right angle says nothing about measurement, about 90°. Plane geometry is not the study of how to apply arithmetic to figures. In geometry we are concerned only with what we can see and reason directly, not through computation. A most basic form of knowledge is that two magnitudes are simply equal, not that they are both 90° or 9 meters.
How can we know when things are equal? That is one of the main questions of geometry. The definition (and existence) of a circle provides our first way of knowing that two straight lines could be equal. Because if we know that a figure is a circle, then we would know that any two radii are equal. (Definition 16.)
We have not formally defined a point, although Euclid does. ("A point is that which has no part." That is, it is indivisible. Most significantly, Euclid adds, "The extremities of a line are points." Thus when a line exists, its endpoints also exist.) And we have not defined a "line," although again Euclid does. ("A line is length without breadth.") Euclid defines them because they are rudimentary ideas in geometry. But since there is never occasion to prove that something is a point or a line, a definition of one is not logically required.
And so we may say that all definitions are technical, in that they define a necessary term of the science. The definitions of a radius, the vertex of an angle, and a regular polygon are technical. A definition is also functional when we must satisfy it to prove a theorem or a problem. The definitions of a right angle, an isosceles triangle, and a square are functional, because we will have occasion to prove that something is a right angle, an isosceles triangle, and a square. We will never have occasion to prove that something is the vertex of an angle.
Commentary on the Postulates
We require that the figures of geometry (the triangles, squares, circles) be more than mental objects. We must be able to draw them; to represent them in the physical world. The fact that we can draw a figure permits us to say that it exists not only as an idea but logically as well.
The first three Postulates narrowly set down what we are permitted to draw. Everything else we must prove. Each of those Postulates is therefore a "problem", a construction, that we are asked to consider solved: "Grant the following."
The instruments of construction are straightedge and compass. Postulate 1, in effect, asks us to grant that what we draw with a straightedge is a straight line. Postulate 3 asks us to grant that the figure we draw with a compass is a circle. And so we will then have an actual figure that refers to the idea of a circle, rather than just the word "circle."
Note, finally, that the word all, as in "all right angles" or "all straight lines," refers to all that exist, that is, all that we have actually drawn. Geometry, at any rate Euclid's, is never just in our mind.
Commentary on the Axioms or Common Notions
The distinction between a postulate and an axiom is that a postulate is about the specific subject at hand, in this case, geometry; while an axiom is a statement we acknowledge to be more generally true; it is in fact a common notion. Yet each has the same logical function, which is to authorize statements in the proofs that follow.
Implicit in these Axioms is our very understanding of equal versus unequal, which is: Two magnitudes of the same kind are either equal or one of them is greater.
So, these Axioms, together with the Definitions and Postulates, are the first principles from which our theory of figures will be deduced.
Please "turn" the page and do some Problems.
Continue on to Proposition 1.
Please make a donation to keep TheMathPage online.
Copyright © 2016 Lawrence Spector
Questions or comments? |
In the years following the War of 1812, the “Era of Good Feelings” evolved between 1815 and 1825. In the first half of this period, there was a strong sense of nationalism throughout the United States. However, political changes and economic differences between the states warped this nationalism into the sectionalism that divided the country into north, south, and west regions. Celebrations of unity within the United States soon turned into disagreements concerning representation within the government and into divisions within the national government caused by the emergence of different Republican factions. States distanced themselves from working collectively in a united economy. They were largely concerned with their own financial needs and aligned with states that had similar economic demands. The years immediately following the War of 1812 were years of nationalism, fostered by political unity and the westward expansion of American territory. This was seen in the festivities that celebrated the creation of the United States of America. In the image of the Fourth of July celebration, it is easy to see the glee that all the people in the image possess.
In Philadelphia, an early capital of the United States, men and women of all ages celebrated the independence gained by the U.S. in 1776. Although women had not gained the right to vote and were considered subordinate to men, they were still included in the occasion, showing the nationalism that these Americans were displaying. On the left side of the picture, there is an image of George Washington. He is famously the only president who gained the presidency without any opposition. His portrait, along with the flags displayed, shows a glorification of the country and the nationalism that follows it. There was also a physical unification of the United States that created a sense of nationalism. The national government sponsored this by funding the construction of roads and canals that created a more connected country. John Calhoun was one of the political leaders who supported this. At the time of these construction projects, the United States was expanding its territory both westward into the newly acquired Louisiana territory and southward into the Florida territory. Afraid that the rapid expansion would cause a sense of disunity, Calhoun supported these transportation systems in order to “bind the republic together”. This emphasis on binding the regions of the country together promoted nationalism and created a physical connection throughout the nation that could not easily be broken. The nationalism that stemmed from the political unification of the country was both cultural and geographical, and it brought the United States together as one. Sectionalism began to emerge with the belief that some regions were better represented than others and with disagreements regarding the country’s leadership. The population density map depicts the inhabitants per square mile of the United States in 1820. The most densely populated region was the north. This is because the north contained several large cities.
Many people, including women, crowded into the large manufacturing companies for work because they made more money in the factories than they would have made working on the farm. These large cities included Philadelphia, New York, and Boston. The economy in the south was based upon agriculture and the slave trade. Plantations were large and spread apart from one another, creating a low population density in the south. Because the west was only beginning to receive American settlers, it had the lowest density of all three regions. This imbalance of population distribution led to conflicts within the House of Representatives. This is the point at which sectionalism began to emerge. Although southern and western states were not as crowded as those in the north, they believed they should be better represented. The north disagreed. With a higher percentage of politicians representing the north, many decisions made by the House were in favor of the north. For example, there was a high tariff placed on foreign goods. This was beneficial for the north because it increased the chances of Americans purchasing the goods produced in their factories. However, this was disadvantageous for the south. Since slaves were viewed as objects, imported slaves were considered imported goods and were sold with high tariffs attached to them. The three regions were becoming more sectionalist because they were increasingly concerned with problems facing their own area, and not the country as a whole. The sectionalism of this period emerged within a short span of time. This is easy to see in the election maps of 1820 and 1824. Within these four years, the United States went from being uniformly supportive of one candidate (Monroe in 1820) to having support for several candidates. While much of the south supported Andrew Jackson, who lacked a formal education and championed the “common man”, much of the north backed John Q. Adams, the eventual winner. These maps illustrate the regional differences that eventually led to the lack of a majority vote. Sectionalism became quite prominent with the emergence of different factions within the Republican Party and eventually the creation of new parties. Each region was sectionalist in that it voted for the candidate who would best advocate for its area, rather than lead the country as a whole.
The unification of the United States was economic as well. Nationalism existed when the government believed it should protect a vital part of the national economy. The letter by Anna Hayes raised the issue of slave revolts. The revolt addressed in this letter was the one in Charleston, a rebellion that led to hundreds of slaves being arrested, tried, and executed. Most Americans at this time were fearful of future revolts that could be similar to the slave revolt in Haiti, which left many dead. During the year this was written, 1822, most states, north and south, still used slaves as a main or prominent source of labor. If enough slaves got together, a rebellion could become extremely violent and, in the eyes of plantation and business owners, detrimental to the economy. Since slaves were not viewed as citizens, but rather as property, white Americans believed they had to band together to protect their economy. This nationalism spread knowledge of potential rebellion and increased the actions taken by local officials. The nationalistic views on protecting the national economy soon morphed into sectionalist views of protecting the economy of the region. The differing economies of the north, south, and west created conflicts that separated the already politically different regions even more. In order to create an economy that was not dependent upon imported goods, Congress passed legislation that put a high tariff on goods being imported from foreign countries. This, in effect, encouraged consumers to buy American-made products, which were now comparatively cheaper. Northern manufacturers approved of this idea, but southern plantation owners disagreed. Because they were considered imported goods, slaves were purchased with high tariffs.
This angered many southern business owners and made them feel as if the factories in the north were being favored. Because of the large gap in economic interest, these regions paid little attention to one another's economies and became sectionalized toward their own areas. Another economic issue that divided the nation was slavery. Although most states still utilized slavery as a workforce in the early 1820s, the southern states were more dependent upon it. Slaves were the main workforce on the plantations and were treated extremely poorly because they were seen as property. However, northern states soon began to slowly abolish the practice of slavery. Thomas Jefferson felt that slavery would divide the nation, creating sectionalist views. Southerners believed in the practice of slavery, while northerners were discontent with it. Thomas Jefferson’s thesis proved correct when the Missouri Compromise created a physical, geographic line splitting the nation in half: states north of the southern border of Missouri were free, and those to the south permitted slavery. Jefferson expressed his concern for the growing differences and sectionalism between the north and south, stating that he feared a divided nation.
Of course, this sectionalism eventually became so prominent that states began to secede, leading to the Civil War. The “Era of Good Feelings” followed the War of 1812 and developed between 1815 and 1825. In the first half of this period, there was a strong sense of nationalism throughout the United States. However, this nationalism changed into sectionalism because of political changes and economic differences among the states, which divided the country into north, south, and west regions. Disagreements concerning representation within the government, divisions within the national government caused by the emergence of different Republican factions, and the economic distancing of the states from one another all facilitated the emergence of sectionalism. |
There are many measurements of the human body that are positively correlated. For example, the length of one’s forearm (measured from elbow to wrist) is approximately the same length as the foot (measured from heel to toe). They are positively correlated because, as one measurement increases, so does the other measurement.
You will discover through this project whether a human’s arm span (measured across the body with the arms extended) is correlated with height.
You will need to collect data from 11 people, which will give you 12 data points including your own personal data. You will turn in and answer questions regarding only one scatter plot if doing the project alone.
Part One: Measurements
- Measure your own height and arm span (from finger-tip to finger-tip) in inches. You will likely need some help from a parent, guardian, or sibling to get accurate measurements. Record your measurements on the “Data Record” document. Use the “Data Record” to help you complete Part Two of this project.
- Measure eleven additional people, and record their arm spans and heights in inches.
Part Two: Representing Data with Plots
- Using GeoGebra or graphing software of your choice, create a scatter plot of your data. Predict the line of best fit, and sketch it on your graph. Then, use the software to make a box plot. (One possible software alternative is sketched after these steps.)
Note: Directions for downloading and using GeoGebra can be found in the “Course Information area.”
- Copy and paste your scatter plot and box plot into a word processing document.
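If you prefer a scripting tool to GeoGebra, the following is a minimal sketch, assuming Python with numpy and matplotlib installed. The twelve arm-span/height pairs and the two "chosen points" for the predicted line are hypothetical placeholders; substitute the values from your own Data Record.

```python
# A minimal sketch: scatter plot with a hand-picked candidate line of best fit,
# plus a box plot of the heights. All data values below are hypothetical.
import numpy as np
import matplotlib.pyplot as plt

arm_span = np.array([60, 62, 63, 65, 66, 67, 68, 69, 70, 71, 73, 75])  # inches
height   = np.array([61, 62, 64, 64, 67, 66, 69, 68, 71, 70, 72, 74])  # inches

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Scatter plot with a candidate best-fit line drawn through two chosen points.
ax1.scatter(arm_span, height)
(x1, y1), (x2, y2) = (62, 62), (73, 72)          # two hypothetical chosen points
m = (y2 - y1) / (x2 - x1)                        # slope
xs = np.linspace(arm_span.min(), arm_span.max(), 100)
ax1.plot(xs, y1 + m * (xs - x1), color="red")    # y - y1 = m(x - x1)
ax1.set_xlabel("Arm span (in)")
ax1.set_ylabel("Height (in)")

# Box plot of the height data.
ax2.boxplot(height)
ax2.set_ylabel("Height (in)")

plt.tight_layout()
plt.show()
```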
Part Three: The Line of Best Fit
Include your scatter plot, box plot, and the answers to the following questions in your word processing document, and submit to your instructor. Be sure to review how to save your files before getting started.
- Which variable did you plot on the x-axis, and which variable did you plot on the y-axis? Explain why you assigned the variables in that way.
- Which two points did you use to draw the line of best fit?
- Write the equation of the line passing through those two points using the point-slope formula y − y1 = m(x − x1). Show all of your work. Remember to find the slope of the line first. (A worked sketch follows this list of questions.)
- What does the slope of the line represent within the context of your graph? What does the y-intercept represent?
- Test the residuals of two other points to determine how well the line of best fit models the data.
- Use the line of best fit to help you to describe the data correlation.
- Using the line of best fit that you found in Part 3, Question 3, approximately how tall is a person whose arm span is 66 inches?
- According to your line of best fit, what is the arm span of a 74-inch-tall person?
- What might cause the arm span and height not to be equal?
- Explain why the equation you wrote to represent a human’s arm span (measured across the body with the arms extended) is a correlation and not causation. |
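As referenced in the question list above, here is a minimal worked sketch of the Part Three calculations in Python. The two chosen points and the extra observations used for residuals are hypothetical; replace them with points from your own scatter plot.

```python
# A minimal worked sketch for Part Three, using hypothetical points.
x1, y1 = 62, 62
x2, y2 = 73, 72

m = (y2 - y1) / (x2 - x1)          # slope = rise / run
# Point-slope form: y - y1 = m(x - x1)  ->  y = m*x + (y1 - m*x1)
b = y1 - m * x1                    # y-intercept
print(f"y = {m:.3f}x + {b:.3f}")

def predict_height(arm_span):
    return m * arm_span + b

# Residual = observed height - predicted height, for two other (hypothetical) points.
for xs, ys in [(66, 67), (70, 71)]:
    print(f"arm span {xs}: residual = {ys - predict_height(xs):.2f} in")

# Question 7: approximate height for a 66-inch arm span.
print(f"height at 66-in arm span: {predict_height(66):.1f} in")

# Question 8: arm span of a 74-inch-tall person (solve y = m*x + b for x).
print(f"arm span at 74-in height: {(74 - b) / m:.1f} in")
```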
LAWRENCE — Scientists using ice-penetrating radar data collected by NASA’s Operation IceBridge and earlier airborne campaigns have built the first-ever comprehensive map of layers deep inside the Greenland Ice Sheet. Engineers and researchers at the University of Kansas were central to the success of this endeavor.
This new map allows scientists to determine the age of large swaths of Greenland’s ice, extending ice core data for a better picture of the ice sheet’s history.
“This new, huge data volume records how the ice sheet evolved and how it’s flowing today,” said Joe MacGregor, a glaciologist at the University of Texas at Austin’s Institute for Geophysics and the study’s lead author.
Engineers at the National Science Foundation Center for Remote Sensing of Ice Sheets (CReSIS), headquartered at KU, were fundamental to this effort.
“The high-sensitivity radars developed by CReSIS at KU with long-term support from NSF and NASA enabled mapping of both shallow and deep internal layers for conducting this study,” said CReSIS Director Prasad Gogineni, a distinguished professor in the School of Engineering.
Greenland’s ice sheet is the second largest mass of ice on Earth, containing enough water to raise ocean levels by about 20 feet. The ice sheet has been losing mass over the past two decades and warming temperatures will mean more losses for Greenland. Scientists are studying ice from different climate periods in the past to better understand how the ice sheet might respond in the future.
One way of studying this distant past is with ice cores. These cylinders of ice drilled from the ice sheet hold evidence of past snow accumulation and temperature and contain impurities like dust and volcanic ash that were carried by snow that accumulated and compacted over hundreds of thousands of years. These layers are visible in ice cores and can be detected with ice-penetrating radar.
Ice-penetrating radar works by sending radar signals into the ice and recording the strength and return time of reflected signals. From those signals, scientists can detect the ice surface, sub-ice bedrock and layers within the ice.
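A minimal sketch of the standard two-way travel-time conversion that underlies this kind of measurement is shown below. The radio-wave speed in ice is an assumed typical value, not a figure from the article, and real processing applies further corrections (for firn density, aircraft altitude, and so on).

```python
# A minimal sketch of converting a radar echo's two-way travel time to depth.
# Assumes a typical radio-wave speed in glacial ice of ~1.68e8 m/s (not stated
# in the article); the signal travels down and back, so the time is halved.
C_ICE = 1.68e8  # m/s, approximate radio-wave speed in ice

def reflector_depth(two_way_time_s: float) -> float:
    """Depth of a reflecting layer below the ice surface, in metres."""
    return C_ICE * two_way_time_s / 2.0

# Example: an echo returning 30 microseconds after transmission.
print(f"{reflector_depth(30e-6):.0f} m")   # ~2520 m below the surface
```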
New techniques used in this study allowed scientists to efficiently pick out these layers in radar data. Prior studies had mapped internal layers, but not at the scale made possible by these newer, faster methods. Another major factor in this study was the amount of Greenland Operation IceBridge has measured.
“IceBridge surveyed previously unexplored parts of the Greenland Ice Sheet and did it using state-of-the-art CReSIS radars,” said study co-author Mark Fahnestock, a glaciologist from the Geophysical Institute at University of Alaska Fairbanks and IceBridge science team member.
IceBridge’s flight lines often intersect ice core sites where other scientists have analyzed the ice’s chemical composition to map and date layers in the ice. These core data provide a reference for radar measurements and provide a way to calculate how much ice from a given climate period exists across the ice sheet, something known as an age volume. Scientists are interested in knowing more about ice from the Eemian Period, a time from 115,000 to 130,000 years ago that was roughly as warm as today. This new age volume provides the first rough estimate of where Eemian ice may remain.
Comparing this age volume to simple computer models helped the research team better understand the ice sheet’s history. Differences in the mapped and modeled age volumes point to past changes in ice flow or processes like melting at the ice sheet’s base. This information will be helpful for evaluating the more sophisticated ice sheet models that are crucial for projecting Greenland’s future contribution to sea-level rise.
“Prior to this study, a good ice-sheet model was one that got its present thickness and surface speed right. Now, they’ll also be able to work on getting its history right, which is important because ice sheets have very long memories,” MacGregor said.
This study was published online Jan. 16 in the Journal of Geophysical Research: Earth Surface. It was a collaboration between scientists at UTIG, UAF-GI, CReSIS and the Department of Earth System Science at the University of California, Irvine. It was supported by NASA’s Operation IceBridge and the National Science Foundation’s Arctic Natural Sciences.
Established by the NSF Division of Polar Programs in 2005, CReSIS has made great strides in research and fieldwork concerning changes in ice sheets and their effect on sea level rise. The center has been a key participant in NASA’s Operation IceBridge, which is the largest airborne survey of Earth’s polar ice ever conducted. More than 250 undergraduate and graduate students have worked with faculty on developing the specialized radars at CReSIS. The vast majority of those who’ve graduated are pursuing successful careers with industry, academia, and government agencies.
KU serves as the lead institution of CReSIS, which is composed of six additional partner institutions: Elizabeth City State University, Indiana University, the University of Washington, the Pennsylvania State University, Los Alamos National Laboratory and the Association of Computer and Information Science Engineering Departments at Minority Institutions. CReSIS researchers collaborate with scientists, engineers and institutions around the world.
For more information on Operation IceBridge, visit: www.nasa.gov/icebridge
For more information on CReSIS, visit: https://cresis.ku.edu |
The special theory of relativity implies that only particles with zero rest mass may travel at the speed of light. Tachyons, particles whose speed exceeds that of light, have been hypothesized, but their existence would violate causality, and the consensus of physicists is that they cannot exist. On the other hand, what some physicists refer to as "apparent" or "effective" FTL depends on the hypothesis that unusually distorted regions of spacetime might permit matter to reach distant locations in less time than light could in normal or undistorted spacetime.
According to the current scientific theories, matter is required to travel at slower-than-light (also subluminal or STL) speed with respect to the locally distorted spacetime region. Apparent FTL is not excluded by general relativity; however, the physical plausibility of any apparent FTL proposal remains speculative. Examples of apparent FTL proposals are the Alcubierre drive and the traversable wormhole.
Superluminal travel of non-information
In the context of this article, FTL is the transmission of information or matter faster than c, a constant equal to the speed of light in vacuum, which is 299,792,458 m/s (by definition of the meter) or about 186,282.397 miles per second. This is not quite the same as traveling faster than light, since:
- Some processes propagate faster than c, but cannot carry information (see examples in the sections immediately following).
- In some materials where light travels at speed c/n (where n is the refractive index), other particles can travel faster than c/n (but still slower than c), leading to Cherenkov radiation (see phase velocity below); a small numeric sketch of this threshold follows the list.
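The following is a minimal numeric sketch of the Cherenkov threshold mentioned in the second point. The refractive index of water used here is an assumed typical value, not taken from this article.

```python
# A minimal sketch of the Cherenkov threshold: a charged particle emits
# Cherenkov radiation when its speed exceeds the phase velocity of light
# in the medium, i.e. v > c/n.
C = 299_792_458.0      # m/s, speed of light in vacuum

def cherenkov_threshold(n: float) -> float:
    """Minimum particle speed (as a fraction of c) for Cherenkov emission."""
    return 1.0 / n

n_water = 1.33                              # assumed refractive index of water
print(f"light in water: {C / n_water:.3e} m/s")
print(f"threshold speed: {cherenkov_threshold(n_water):.3f} c")  # ~0.752 c, still below c
```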
In the following examples, certain influences may appear to travel faster than light, but they do not convey energy or information faster than light, so they do not violate special relativity.
Daily sky motion
For an earth-bound observer, objects in the sky complete one revolution around the Earth in one day. Proxima Centauri, the nearest star outside the Solar System, is about four light-years away. In this frame of reference, in which Proxima Centauri is perceived to be moving in a circular trajectory with a radius of four light-years, it could be described as having a speed many times greater than c, since the rim speed of an object moving in a circle is the product of the radius and the angular speed. It is also possible, on a geostatic view, for objects such as comets to vary their speed from subluminal to superluminal and vice versa simply because their distance from the Earth varies. Comets may have orbits which take them out to more than 1000 AU. The circumference of a circle with a radius of 1000 AU is greater than one light-day. In other words, a comet at such a distance is superluminal in a geostatic, and therefore non-inertial, frame.
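A minimal sketch of the arithmetic behind this example follows; the four-light-year distance and one-day period are the round figures used in the paragraph above.

```python
# In a frame rotating with the Earth, a star ~4 light-years away sweeps a
# circle of radius 4 ly once per day, so its apparent rim speed is
# circumference / period, expressed here in units of c.
import math

radius_ly = 4.0            # roughly Proxima Centauri's distance
period_days = 1.0          # one revolution per (sidereal) day
days_per_year = 365.25

circumference_light_days = 2 * math.pi * radius_ly * days_per_year
rim_speed_in_c = circumference_light_days / period_days
print(f"apparent speed ≈ {rim_speed_in_c:.0f} c")   # on the order of 9,000 c
```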
Light spots and shadows
If a laser beam is swept across a distant object, the spot of laser light can easily be made to move across the object at a speed greater than c. Similarly, a shadow projected onto a distant object can be made to move across the object faster than c. In neither case does the light travel from the source to the object faster than c, nor does any information travel faster than light.
The rate at which two objects in motion in a single frame of reference get closer together is called the mutual or closing speed. This may approach twice the speed of light, as in the case of two particles travelling at close to the speed of light in opposite directions with respect to the reference frame.
Imagine two fast-moving particles approaching each other from opposite sides of a particle accelerator of the collider type. The closing speed would be the rate at which the distance between the two particles is decreasing. From the point of view of an observer standing at rest relative to the accelerator, this rate will be slightly less than twice the speed of light.
Special relativity does not prohibit this. It tells us that it is wrong to use Galilean relativity to compute the velocity of one of the particles, as would be measured by an observer traveling alongside the other particle. That is, special relativity gives the correct velocity-addition formula for computing such relative velocity.
It is instructive to compute the relative velocity of particles moving at v and −v in the accelerator frame, which corresponds to a closing speed of 2v > c. Expressing the speeds in units of c, β = v/c, the relative speed of one particle as measured in the rest frame of the other is β_rel = 2β / (1 + β²), which is always less than 1; it approaches 1 (that is, c) only as β itself approaches 1.
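A minimal numeric sketch of this velocity-addition formula is shown below; the sample β values are illustrative only.

```python
# Two particles move at +v and -v in the accelerator frame; the closing speed
# in that frame is 2v, but the speed of one particle measured in the rest
# frame of the other is 2*beta / (1 + beta**2), which never reaches c.
def relative_beta(beta: float) -> float:
    """Speed (in units of c) of one particle seen from the other's rest frame."""
    return 2 * beta / (1 + beta ** 2)

for beta in (0.5, 0.9, 0.99, 0.9999):
    print(f"v = {beta}c: closing speed = {2 * beta}c, "
          f"relative speed = {relative_beta(beta):.6f}c")
```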
If a spaceship travels to a planet one light-year (as measured in the Earth's rest frame) away from Earth at high speed, the time taken to reach that planet could be less than one year as measured by the traveller's clock (although it will always be more than one year as measured by a clock on Earth). The value obtained by dividing the distance traveled, as determined in the Earth's frame, by the time taken, measured by the traveller's clock, is known as a proper speed or a proper velocity. There is no limit on the value of a proper speed as a proper speed does not represent a speed measured in a single inertial frame. A light signal that left the Earth at the same time as the traveller would always get to the destination before the traveller.
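A minimal numeric sketch of this notion of proper speed follows. The 0.99c cruise speed and one-light-year distance are illustrative assumptions, not figures from the article.

```python
# Proper speed = Earth-frame distance divided by the traveller's own elapsed time.
import math

beta = 0.99                                  # coordinate speed, in units of c (assumed)
distance_ly = 1.0                            # Earth-frame distance, light-years (assumed)

earth_time_yr = distance_ly / beta           # time measured on Earth
gamma = 1.0 / math.sqrt(1.0 - beta ** 2)     # Lorentz factor
traveller_time_yr = earth_time_yr / gamma    # time on the traveller's clock

proper_speed_in_c = distance_ly / traveller_time_yr   # equals gamma * beta
print(f"Earth time: {earth_time_yr:.3f} yr, traveller time: {traveller_time_yr:.3f} yr")
print(f"proper speed ≈ {proper_speed_in_c:.2f} c")     # about 7 c, even though beta < 1
```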
Possible distance away from Earth
Since one cannot travel faster than light, one might conclude that a human can never travel further from the Earth than 40 light-years if the traveler is active between the ages of 20 and 60. A traveler would then never be able to reach more than the very few star systems which exist within the limit of 20–40 light-years from the Earth. This is a mistaken conclusion: because of time dilation, the traveler can travel thousands of light-years during their 40 active years. If the spaceship accelerates at a constant 1 g (in its own changing frame of reference), it will, after 354 days, reach speeds a little under the speed of light (for an observer on Earth), and time dilation will stretch the traveler's lifespan to thousands of Earth years, as seen from the reference system of the Solar System, but the traveler's subjective lifespan will not thereby change. If they were then to return to Earth, the traveler would arrive on Earth thousands of years into the future. Their travel speed would not have been observed from Earth as being superluminal, nor would it appear to be so from the traveler's perspective; instead the traveler would have experienced a length contraction of the universe in their direction of travel, and the Earth would seem to experience much more time passing than the traveler does. So while the traveler's (ordinary) coordinate speed cannot exceed c, their proper speed, or distance traveled from the Earth's point of reference divided by proper time, can be much greater than c. This is seen in statistical studies of muons traveling much further than c times their half-life (at rest), if traveling close to c.
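A minimal sketch of the standard relativistic-rocket relations behind this argument is shown below. These are textbook formulas for constant proper acceleration, not values taken from the article, and the 1 g figure is the usual illustrative assumption.

```python
# Relativistic rocket at constant proper acceleration a (here ~1 g).
# With proper time tau: speed v = c*tanh(a*tau/c),
# Earth-frame distance x = (c**2/a)*(cosh(a*tau/c) - 1).
import math

C = 299_792_458.0            # m/s
G = 9.81                     # m/s^2, ~1 g proper acceleration (assumed)
YEAR = 365.25 * 24 * 3600    # seconds per year
LY = C * YEAR                # metres per light-year

def speed_in_c(tau_years: float) -> float:
    return math.tanh(G * tau_years * YEAR / C)

def distance_ly(tau_years: float) -> float:
    return (C ** 2 / G) * (math.cosh(G * tau_years * YEAR / C) - 1) / LY

for tau in (1, 5, 10, 20):
    print(f"after {tau:2d} yr of ship time: v = {speed_in_c(tau):.6f} c, "
          f"distance = {distance_ly(tau):,.0f} ly")
```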
Phase velocities above c
The phase velocity of an electromagnetic wave, when traveling through a medium, can routinely exceed c, the vacuum velocity of light. For example, this occurs in most glasses at X-ray frequencies. However, the phase velocity of a wave corresponds to the propagation speed of a theoretical single-frequency (purely monochromatic) component of the wave at that frequency. Such a wave component must be infinite in extent and of constant amplitude (otherwise it is not truly monochromatic), and so cannot convey any information. Thus a phase velocity above c does not imply the propagation of signals with a velocity above c.
Group velocities above c
The group velocity of a wave may also exceed c in some circumstances. In such cases, which typically at the same time involve rapid attenuation of the intensity, the maximum of the envelope of a pulse may travel with a velocity above c. However, even this situation does not imply the propagation of signals with a velocity above c, even though one may be tempted to associate pulse maxima with signals. The latter association has been shown to be misleading, because the information on the arrival of a pulse can be obtained before the pulse maximum arrives. For example, if some mechanism allows the full transmission of the leading part of a pulse while strongly attenuating the pulse maximum and everything behind (distortion), the pulse maximum is effectively shifted forward in time, while the information on the pulse does not come faster than c without this effect. However, group velocity can exceed c in some parts of a Gaussian beam in vacuum (without attenuation). The diffraction causes the peak of the pulse to propagate faster, while overall power does not.
The expansion of the universe causes distant galaxies to recede from us faster than the speed of light, if proper distance and cosmological time are used to calculate the speeds of these galaxies. However, in general relativity, velocity is a local notion, so velocity calculated using comoving coordinates does not have any simple relation to velocity calculated locally. (See Comoving and proper distances for a discussion of different notions of 'velocity' in cosmology.) Rules that apply to relative velocities in special relativity, such as the rule that relative velocities cannot increase past the speed of light, do not apply to relative velocities in comoving coordinates, which are often described in terms of the "expansion of space" between galaxies. This expansion rate is thought to have been at its peak during the inflationary epoch, thought to have occurred in a tiny fraction of a second after the Big Bang (models suggest the period would have been from around 10^−36 seconds after the Big Bang to around 10^−33 seconds), when the universe may have rapidly expanded by a factor of around 10^20 to 10^30.
There are many galaxies visible in telescopes with red shift numbers of 1.4 or higher. All of these are currently traveling away from us at speeds greater than the speed of light. Because the Hubble parameter is decreasing with time, there can actually be cases where a galaxy that is receding from us faster than light does manage to emit a signal which reaches us eventually.
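A minimal sketch of the proper-distance Hubble law used in these statements is shown below. The Hubble constant value is an assumed round figure (about 70 km/s per megaparsec), not a number quoted in this article.

```python
# Hubble law v = H0 * d (proper distance, present epoch). The distance at
# which the recession speed equals c is the Hubble radius.
H0 = 70.0                      # km/s per Mpc (assumed value)
C_KM_S = 299_792.458           # speed of light, km/s
MPC_IN_LY = 3.2616e6           # light-years per megaparsec

hubble_radius_mpc = C_KM_S / H0
print(f"Hubble radius ≈ {hubble_radius_mpc:,.0f} Mpc "
      f"≈ {hubble_radius_mpc * MPC_IN_LY / 1e9:.1f} billion light-years")
# Galaxies whose proper distance exceeds this (~14 billion ly) are receding
# faster than light in these coordinates.
```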
However, because the expansion of the universe is accelerating, it is projected that most galaxies will eventually cross a type of cosmological event horizon where any light they emit past that point will never be able to reach us at any time in the infinite future, because the light never reaches a point where its "peculiar velocity" towards us exceeds the expansion velocity away from us (these two notions of velocity are also discussed in Comoving and proper distances#Uses of the proper distance). The current distance to this cosmological event horizon is about 16 billion light-years, meaning that a signal from an event happening at present would eventually be able to reach us in the future if the event was less than 16 billion light-years away, but the signal would never reach us if the event was more than 16 billion light-years away.
Apparent superluminal motion is observed in many radio galaxies, blazars, quasars, and recently also in microquasars. The effect was predicted before it was observed by Martin Rees and can be explained as an optical illusion caused by the object partly moving in the direction of the observer, when the speed calculations assume it does not. The phenomenon does not contradict the theory of special relativity. Corrected calculations show these objects have velocities close to the speed of light (relative to our reference frame). They are the first examples of large amounts of mass moving at close to the speed of light. Earth-bound laboratories have only been able to accelerate small numbers of elementary particles to such speeds.
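A minimal sketch of the standard geometry behind this illusion follows: a blob moving with speed β (in units of c) at angle θ to the line of sight has an apparent transverse speed of β·sinθ / (1 − β·cosθ). The formula is the textbook one; the sample numbers are illustrative only.

```python
# Apparent transverse speed of a relativistic jet component.
import math

def apparent_beta(beta: float, theta_deg: float) -> float:
    """Apparent transverse speed (in units of c) for true speed beta at angle theta."""
    t = math.radians(theta_deg)
    return beta * math.sin(t) / (1 - beta * math.cos(t))

for beta, theta in [(0.9, 30), (0.99, 10), (0.995, 5)]:
    print(f"beta = {beta}, theta = {theta} deg -> apparent speed "
          f"= {apparent_beta(beta, theta):.2f} c")
```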
Certain phenomena in quantum mechanics, such as quantum entanglement, might give the superficial impression of allowing communication of information faster than light. According to the no-communication theorem these phenomena do not allow true communication; they only let two observers in different locations see the same system simultaneously, without any way of controlling what either sees. Wavefunction collapse can be viewed as an epiphenomenon of quantum decoherence, which in turn is nothing more than an effect of the underlying local time evolution of the wavefunction of a system and all of its environment. Since the underlying behavior does not violate local causality or allow FTL communication, it follows that neither does the additional effect of wavefunction collapse, whether real or apparent.
The uncertainty principle implies that individual photons may travel for short distances at speeds somewhat faster (or slower) than c, even in vacuum; this possibility must be taken into account when enumerating Feynman diagrams for a particle interaction. However, it was shown in 2011 that a single photon may not travel faster than c. In quantum mechanics, virtual particles may travel faster than light, and this phenomenon is related to the fact that static field effects (which are mediated by virtual particles in quantum terms) may travel faster than light (see section on static fields above). However, macroscopically these fluctuations average out, so that photons do travel in straight lines over long (i.e., non-quantum) distances, and they do travel at the speed of light on average. Therefore, this does not imply the possibility of superluminal information transmission.
There have been various reports in the popular press of experiments on faster-than-light transmission in optics — most often in the context of a kind of quantum tunnelling phenomenon. Usually, such reports deal with a phase velocity or group velocity faster than the vacuum velocity of light. However, as stated above, a superluminal phase velocity cannot be used for faster-than-light transmission of information.
The Hartman effect is the tunneling effect through a barrier where the tunneling time tends to a constant for large barriers. This could, for instance, be the gap between two prisms. When the prisms are in contact, the light passes straight through, but when there is a gap, the light is refracted. There is a non-zero probability that the photon will tunnel across the gap rather than follow the refracted path. For large gaps between the prisms the tunnelling time approaches a constant and thus the photons appear to have crossed with a superluminal speed.
However, the Hartman effect cannot actually be used to violate relativity by transmitting signals faster than c, because the tunnelling time "should not be linked to a velocity since evanescent waves do not propagate". The evanescent waves in the Hartman effect are due to virtual particles and a non-propagating static field, as mentioned in the sections above for gravity and electromagnetism.
In physics, the Casimir–Polder force is a physical force exerted between separate objects due to resonance of vacuum energy in the intervening space between the objects. This is sometimes described in terms of virtual particles interacting with the objects, owing to the mathematical form of one possible way of calculating the strength of the effect. Because the strength of the force falls off rapidly with distance, it is only measurable when the distance between the objects is extremely small. Because the effect is due to virtual particles mediating a static field effect, it is subject to the comments about static fields discussed above.
The EPR paradox refers to a famous thought experiment of Albert Einstein, Boris Podolsky and Nathan Rosen that was realized experimentally for the first time by Alain Aspect in 1981 and 1982 in the Aspect experiment. In this experiment, the measurement of the state of one of the quantum systems of an entangled pair apparently instantaneously forces the other system (which may be distant) to be measured in the complementary state. However, no information can be transmitted this way; the answer to whether or not the measurement actually affects the other quantum system comes down to which interpretation of quantum mechanics one subscribes to.
An experiment performed in 1997 by Nicolas Gisin has demonstrated non-local quantum correlations between particles separated by over 10 kilometers. But as noted earlier, the non-local correlations seen in entanglement cannot actually be used to transmit classical information faster than light, so that relativistic causality is preserved. The situation is akin to sharing a synchronized coin flip, where the second person to flip their coin will always see the opposite of what the first person sees, but neither has any way of knowing whether they were the first or second flipper, without communicating classically. See No-communication theorem for further information. A 2008 quantum physics experiment also performed by Nicolas Gisin and his colleagues has determined that in any hypothetical non-local hidden-variable theory, the speed of the quantum non-local connection (what Einstein called "spooky action at a distance") is at least 10,000 times the speed of light.
Delayed choice quantum eraser
The delayed-choice quantum eraser is a version of the EPR paradox in which the observation (or not) of interference after the passage of a photon through a double-slit experiment depends on the conditions of observation of a second photon entangled with the first. The characteristic of this experiment is that the observation of the second photon can take place at a later time than the observation of the first photon, which may give the impression that the measurement of the later photons "retroactively" determines whether the earlier photons show interference or not. However, the interference pattern can only be seen by correlating the measurements of both members of every pair, so it cannot be observed until both photons have been measured; this ensures that an experimenter watching only the photons going through the slit does not obtain information about the other photons in an FTL or backwards-in-time manner.
Faster-than-light communication is, according to relativity, equivalent to time travel. What we measure as the speed of light in vacuum (or near vacuum) is actually the fundamental physical constant c. This means that all inertial observers, regardless of their relative velocity, will always measure zero-mass particles such as photons traveling at c in vacuum. This result means that measurements of time and velocity in different frames are no longer related simply by constant shifts, but are instead related by Poincaré transformations. These transformations have important implications:
- The relativistic momentum of a massive particle would increase with speed in such a way that at the speed of light an object would have infinite momentum.
- To accelerate an object of non-zero rest mass to c would require infinite time with any finite acceleration, or infinite acceleration for a finite amount of time.
- Either way, such acceleration requires infinite energy. (A numeric sketch of this divergence follows the list.)
- Some observers with sub-light relative motion will disagree about which occurs first of any two events that are separated by a space-like interval. In other words, any travel that is faster-than-light will be seen as traveling backwards in time in some other, equally valid, frames of reference, or need to assume the speculative hypothesis of possible Lorentz violations at a presently unobserved scale (for instance the Planck scale). Therefore, any theory which permits "true" FTL also has to cope with time travel and all its associated paradoxes, or else to assume the Lorentz invariance to be a symmetry of thermodynamical statistical nature (hence a symmetry broken at some presently unobserved scale).
- In special relativity the coordinate speed of light is only guaranteed to be c in an inertial frame; in a non-inertial frame the coordinate speed may be different from c. In general relativity no coordinate system on a large region of curved spacetime is "inertial", so it is permissible to use a global coordinate system where objects travel faster than c, but in the local neighborhood of any point in curved spacetime we can define a "local inertial frame" and the local speed of light will be c in this frame, with massive objects moving through this local neighborhood always having a speed less than c in the local inertial frame.
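As referenced in the list above, the following is a minimal numeric sketch of how the Lorentz factor, relativistic momentum, and kinetic energy diverge as v approaches c. The 1 kg test mass is an illustrative assumption.

```python
# Lorentz factor gamma = 1/sqrt(1 - beta^2), momentum p = gamma*m*v,
# kinetic energy KE = (gamma - 1)*m*c^2; all diverge as beta -> 1.
import math

C = 299_792_458.0   # m/s
M = 1.0             # kg, illustrative test mass

for beta in (0.9, 0.99, 0.999, 0.999999):
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    p = gamma * M * beta * C                  # relativistic momentum, kg*m/s
    ke = (gamma - 1.0) * M * C ** 2           # kinetic energy, joules
    print(f"beta = {beta}: gamma = {gamma:10.1f}, p = {p:.3e}, KE = {ke:.3e}")
```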
Relative permittivity or permeability less than 1
The speed of light c is related to the vacuum permittivity ε0 and the vacuum permeability μ0 by c = 1/√(ε0 μ0). Therefore, not only the phase velocity, group velocity, and energy flow velocity of electromagnetic waves but also the velocity of a photon can be faster than c in a special material that has a constant permittivity or permeability whose value is less than that in vacuum.
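A minimal sketch of this relation, evaluated with the CODATA values of the vacuum permittivity and permeability, is shown below.

```python
# c = 1 / sqrt(epsilon_0 * mu_0)
import math

EPSILON_0 = 8.8541878128e-12   # F/m, vacuum permittivity
MU_0 = 1.25663706212e-6        # H/m, vacuum permeability

c = 1.0 / math.sqrt(EPSILON_0 * MU_0)
print(f"c = {c:,.0f} m/s")     # ~299,792,458 m/s
```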
Casimir vacuum and quantum tunnelling
Special relativity postulates that the speed of light in vacuum is invariant in inertial frames. That is, it will be the same from any frame of reference moving at a constant speed. The equations do not specify any particular value for the speed of light, which is an experimentally determined quantity for a fixed unit of length. Since 1983, the SI unit of length (the meter) has been defined using the speed of light.
The experimental determination has been made in vacuum. However, the vacuum we know is not the only possible vacuum which can exist. The vacuum has energy associated with it, called simply the vacuum energy, which could perhaps be altered in certain cases. When vacuum energy is lowered, light itself has been predicted to go faster than the standard value c. This is known as the Scharnhorst effect. Such a vacuum can be produced by bringing two perfectly smooth metal plates together at near atomic diameter spacing. It is called a Casimir vacuum. Calculations imply that light will go faster in such a vacuum by a minuscule amount: the speed of a photon traveling between two plates that are 1 micrometer apart would increase by only about one part in 10^36. Accordingly, there has as yet been no experimental verification of the prediction. A recent analysis argued that the Scharnhorst effect cannot be used to send information backwards in time with a single set of plates since the plates' rest frame would define a "preferred frame" for FTL signalling. However, with multiple pairs of plates in motion relative to one another the authors noted that they had no arguments that could "guarantee the total absence of causality violations", and invoked Hawking's speculative chronology protection conjecture which suggests that feedback loops of virtual particles would create "uncontrollable singularities in the renormalized quantum stress-energy" on the boundary of any potential time machine, and thus would require a theory of quantum gravity to fully analyze. Other authors argue that Scharnhorst's original analysis, which seemed to show the possibility of faster-than-c signals, involved approximations which may be incorrect, so that it is not clear whether this effect could actually increase signal speed at all.
The physicists Günter Nimtz and Alfons Stahlhofen, of the University of Cologne, claim to have violated relativity experimentally by transmitting photons faster than the speed of light. They say they have conducted an experiment in which microwave photons — relatively low-energy packets of light — travelled "instantaneously" between a pair of prisms that had been moved up to 3 ft (1 m) apart. Their experiment involved an optical phenomenon known as "evanescent modes", and they claim that since evanescent modes have an imaginary wave number, they represent a "mathematical analogy" to quantum tunnelling. Nimtz has also claimed that "evanescent modes are not fully describable by the Maxwell equations and quantum mechanics have to be taken into consideration." Other scientists such as Herbert G. Winful and Robert Helling have argued that in fact there is nothing quantum-mechanical about Nimtz's experiments, and that the results can be fully predicted by the equations of classical electromagnetism (Maxwell's equations).
Nimtz told New Scientist magazine: "For the time being, this is the only violation of special relativity that I know of." However, other physicists say that this phenomenon does not allow information to be transmitted faster than light. Aephraim Steinberg, a quantum optics expert at the University of Toronto, Canada, uses the analogy of a train traveling from Chicago to New York, but dropping off train cars from the tail at each station along the way, so that the center of the ever-shrinking main train moves forward at each stop; in this way, the speed of the center of the train exceeds the speed of any of the individual cars.
Winful argues that the train analogy is a variant of the "reshaping argument" for superluminal tunneling velocities, but he goes on to say that this argument is not actually supported by experiment or simulations, which actually show that the transmitted pulse has the same length and shape as the incident pulse. Instead, Winful argues that the group delay in tunneling is not actually the transit time for the pulse (whose spatial length must be greater than the barrier length in order for its spectrum to be narrow enough to allow tunneling), but is instead the lifetime of the energy stored in a standing wave which forms inside the barrier. Since the stored energy in the barrier is less than the energy stored in a barrier-free region of the same length due to destructive interference, the group delay for the energy to escape the barrier region is shorter than it would be in free space, which according to Winful is the explanation for apparently superluminal tunneling.
A number of authors have published papers disputing Nimtz's claim that Einstein causality is violated by his experiments, and there are many other papers in the literature discussing why quantum tunneling is not thought to violate causality.
It was later claimed by Eckle et al. that particle tunneling does indeed occur in zero real time. Their tests involved tunneling electrons, where the group argued a relativistic prediction for tunneling time should be 500–600 attoseconds (an attosecond is one quintillionth, 10^−18, of a second). All that could be measured was 24 attoseconds, which is the limit of the test accuracy. Again, though, other physicists believe that tunneling experiments in which particles appear to spend anomalously short times inside the barrier are in fact fully compatible with relativity, although there is disagreement about whether the explanation involves reshaping of the wave packet or other effects.
Give up (absolute) relativity
Because of the strong empirical support for special relativity, any modifications to it must necessarily be quite subtle and difficult to measure. The best-known attempt is doubly special relativity, which posits that the Planck length is also the same in all reference frames, and is associated with the work of Giovanni Amelino-Camelia and João Magueijo. There are speculative theories that claim inertia is produced by the combined mass of the universe (e.g., Mach's principle), which implies that the rest frame of the universe might be preferred by conventional measurements of natural law. If confirmed, this would imply special relativity is an approximation to a more general theory, but since the relevant comparison would (by definition) be outside the observable universe, it is difficult to imagine (much less construct) experiments to test this hypothesis. Despite this difficulty, such experiments have been proposed.
Although the theory of special relativity forbids objects to have a relative velocity greater than light speed, and general relativity reduces to special relativity in a local sense (in small regions of spacetime where curvature is negligible), general relativity does allow the space between distant objects to expand in such a way that they have a "recession velocity" which exceeds the speed of light, and it is thought that galaxies which are at a distance of more than about 14 billion light-years from us today have a recession velocity which is faster than light. Miguel Alcubierre theorized that it would be possible to create a warp drive, in which a ship would be enclosed in a "warp bubble" where the space at the front of the bubble is rapidly contracting and the space at the back is rapidly expanding, with the result that the bubble can reach a distant destination much faster than a light beam moving outside the bubble, but without objects inside the bubble locally traveling faster than light. However, several objections raised against the Alcubierre drive appear to rule out the possibility of actually using it in any practical fashion. Another possibility predicted by general relativity is the traversable wormhole, which could create a shortcut between arbitrarily distant points in space. As with the Alcubierre drive, travelers moving through the wormhole would not locally move faster than light travelling through the wormhole alongside them, but they would be able to reach their destination (and return to their starting location) faster than light traveling outside the wormhole.
Gerald Cleaver and Richard Obousy, a professor and student of Baylor University, theorized that manipulating the extra spatial dimensions of string theory around a spaceship with an extremely large amount of energy would create a "bubble" that could cause the ship to travel faster than the speed of light. To create this bubble, the physicists believe manipulating the 10th spatial dimension would alter the dark energy in three large spatial dimensions: height, width and length. Cleaver said positive dark energy is currently responsible for speeding up the expansion rate of our universe as time moves on.
Lorentz symmetry violation
The possibility that Lorentz symmetry may be violated has been seriously considered in the last two decades, particularly after the development of a realistic effective field theory that describes this possible violation, the so-called Standard-Model Extension. This general framework has allowed experimental searches by ultra-high energy cosmic-ray experiments and a wide variety of experiments in gravity, electrons, protons, neutrons, neutrinos, mesons, and photons. The breaking of rotation and boost invariance causes direction dependence in the theory as well as unconventional energy dependence that introduces novel effects, including Lorentz-violating neutrino oscillations and modifications to the dispersion relations of different particle species, which naturally could make particles move faster than light.
In some models of broken Lorentz symmetry, it is postulated that the symmetry is still built into the most fundamental laws of physics, but that spontaneous symmetry breaking of Lorentz invariance shortly after the Big Bang could have left a "relic field" throughout the universe which causes particles to behave differently depending on their velocity relative to the field; however, there are also some models where Lorentz symmetry is broken in a more fundamental way. If Lorentz symmetry can cease to be a fundamental symmetry at the Planck scale or at some other fundamental scale, it is conceivable that particles with a critical speed different from the speed of light could be the ultimate constituents of matter.
In current models of Lorentz symmetry violation, the phenomenological parameters are expected to be energy-dependent. Therefore, as widely recognized, existing low-energy bounds cannot be applied to high-energy phenomena; however, many searches for Lorentz violation at high energies have been carried out using the Standard-Model Extension. Lorentz symmetry violation is expected to become stronger as one gets closer to the fundamental scale.
Superfluid theories of physical vacuum
In this approach the physical vacuum is viewed as a quantum superfluid which is essentially non-relativistic, whereas Lorentz symmetry is not an exact symmetry of nature but rather an approximate description valid only for small fluctuations of the superfluid background. Within this framework a theory was proposed in which the physical vacuum is conjectured to be a quantum Bose liquid whose ground-state wavefunction is described by the logarithmic Schrödinger equation. It was shown that the relativistic gravitational interaction arises as a small-amplitude collective excitation mode, whereas relativistic elementary particles can be described by particle-like modes in the limit of low momenta. The important point is that at very high velocities the behavior of the particle-like modes becomes distinct from the relativistic one: they can reach the speed-of-light limit at finite energy, and faster-than-light propagation is possible without requiring moving objects to have imaginary mass.
FTL Neutrino flight results
In 2007 the MINOS collaboration reported a measurement of the flight time of 3 GeV neutrinos that exceeded the speed of light at 1.8-sigma significance. However, those measurements were considered to be statistically consistent with neutrinos traveling at the speed of light. After the detectors for the project were upgraded in 2012, MINOS corrected their initial result and found agreement with the speed of light. Further measurements were planned.
OPERA neutrino anomaly
On September 22, 2011, a preprint from the OPERA Collaboration indicated detection of 17 and 28 GeV muon neutrinos, sent 730 kilometers (454 miles) from CERN near Geneva, Switzerland to the Gran Sasso National Laboratory in Italy, traveling faster than light by a relative amount of 2.48×10−5 (approximately 1 in 40,000), a statistic with 6.0-sigma significance. On 17 November 2011, a second follow-up experiment by OPERA scientists confirmed their initial results. However, scientists were skeptical about the results of these experiments, the significance of which was disputed. In March 2012, the ICARUS collaboration failed to reproduce the OPERA results with their equipment, detecting neutrino travel time from CERN to the Gran Sasso National Laboratory indistinguishable from the speed of light. Later the OPERA team reported two flaws in their equipment set-up that had caused errors far outside their original confidence interval: a fiber optic cable attached improperly, which caused the apparently faster-than-light measurements, and a clock oscillator ticking too fast.
In special relativity, it is impossible to accelerate an object to the speed of light, or for a massive object to move at the speed of light. However, it might be possible for an object to exist which always moves faster than light. The hypothetical elementary particles with this property are called tachyons or tachyonic particles. Attempts to quantize them failed to produce faster-than-light particles, and instead illustrated that their presence leads to an instability.
In a negative-index metamaterial, the radiation pressure of a wave propagating through the material is negative, and negative refraction, the inverse Doppler effect, and the reverse Cherenkov effect imply that the wave's momentum is also negative. Such waves can therefore be used to test theories of exotic matter and negative mass, and under certain conditions a wave of this kind can break the light barrier.
General relativity was developed after special relativity to include concepts like gravity. It maintains the principle that no object can accelerate to the speed of light in the reference frame of any coincident observer. However, it permits distortions in spacetime that allow an object to move faster than light from the point of view of a distant observer. One such distortion is the Alcubierre drive, which can be thought of as producing a ripple in spacetime that carries an object along with it. Another possible system is the wormhole, which connects two distant locations as though by a shortcut. Both distortions would need to create a very strong curvature in a highly localized region of space-time and their gravity fields would be immense. To counteract the unstable nature, and prevent the distortions from collapsing under their own 'weight', one would need to introduce hypothetical exotic matter or negative energy.
General relativity also recognizes that any means of faster-than-light travel could also be used for time travel. This raises problems with causality. Many physicists believe that the above phenomena are impossible and that future theories of gravity will prohibit them. One theory states that stable wormholes are possible, but that any attempt to use a network of wormholes to violate causality would result in their decay. In string theory, Eric G. Gimon and Petr Hořava have argued that in a supersymmetric five-dimensional Gödel universe, quantum corrections to general relativity effectively cut off regions of spacetime with causality-violating closed timelike curves. In particular, in the quantum theory a smeared supertube is present that cuts the spacetime in such a way that, although in the full spacetime a closed timelike curve passes through every point, no complete curves exist on the interior region bounded by the tube.
- Gonzalez-Diaz, P. F. (2000). "Warp drive space-time" (PDF). Physical Review D. 62 (4): 044005. arXiv:gr-qc/9907026. Bibcode:2000PhRvD..62d4005G. doi:10.1103/PhysRevD.62.044005. hdl:10261/99501.
- Loup, F.; Waite, D.; Halerewicz, E. Jr. (2001). "Reduced total energy requirements for a modified Alcubierre warp drive spacetime". arXiv:gr-qc/0107097.
- Visser, M.; Bassett, B.; Liberati, S. (2000). "Superluminal censorship". Nuclear Physics B: Proceedings Supplements. 88 (1–3): 267–270. arXiv:gr-qc/9810026. Bibcode:2000NuPhS..88..267V. doi:10.1016/S0920-5632(00)00782-9.
- Visser, M.; Bassett, B.; Liberati, S. (1999). Perturbative superluminal censorship and the null energy condition. AIP Conference Proceedings. 493. pp. 301–305. arXiv:gr-qc/9908023. Bibcode:1999AIPC..493..301V. doi:10.1063/1.1301601. ISBN 978-1-56396-905-8.
- "The 17th Conférence Générale des Poids et Mesures (CGPM) : Definition of the metre". bipm.org. Retrieved July 5, 2020.
- University of York Science Education Group (2001). Salter Horners Advanced Physics A2 Student Book. Heinemann. pp. 302–303. ISBN 978-0435628925.
- "The Furthest Object in the Solar System". Information Leaflet No. 55. Royal Greenwich Observatory. 15 April 1996.
- Gibbs, P. (1997). "Is Faster-Than-Light Travel or Communication Possible?". The Original Usenet Physics FAQ. Retrieved 20 August 2008.
- Salmon, W. C. (2006). Four Decades of Scientific Explanation. University of Pittsburgh Press. p. 107. ISBN 978-0-8229-5926-7.
- Steane, A. (2012). The Wonderful World of Relativity: A Precise Guide for the General Reader. Oxford University Press. p. 180. ISBN 978-0-19-969461-7.
- Sartori, L. (1976). Understanding Relativity: A Simplified Approach to Einstein's Theories. University of California Press. pp. 79–83. ISBN 978-0-520-91624-1.
- Hecht, E. (1987). Optics (2nd ed.). Addison Wesley. p. 62. ISBN 978-0-201-11609-0.
- Sommerfeld, A. (1907). Physikalische Zeitschrift. 8 (23): 841–842.
- "Phase, Group, and Signal Velocity". MathPages. Retrieved 2007-04-30.
- Wang, L. J.; Kuzmich, A.; Dogariu, A. (2000). "Gain-assisted superluminal light propagation". Nature. 406 (6793): 277–279. Bibcode:2000Natur.406..277W. doi:10.1038/35018520. PMID 10917523.
- Bowlan, P.; Valtna-Lukner, H.; Lõhmus, M.; Piksarv, P.; Saari, P.; Trebino, R. (2009). "Measurement of the spatiotemporal electric field of ultrashort superluminal Bessel-X pulses". Optics and Photonics News. 20 (12): 42. Bibcode:2009OptPN..20...42M. doi:10.1364/OPN.20.12.000042. S2CID 122056218.
- Brillouin, L (1960). Wave Propagation and Group Velocity. Academic Press.
- Withayachumnankul, W.; Fischer, B. M.; Ferguson, B.; Davis, B. R.; Abbott, D. (2010). "A Systemized View of Superluminal Wave Propagation" (PDF). Proceedings of the IEEE. 98 (10): 1775–1786. doi:10.1109/JPROC.2010.2052910.
- Horváth, Z. L.; Vinkó, J.; Bor, Zs.; von der Linde, D. (1996). "Acceleration of femtosecond pulses to superluminal velocities by Gouy phase shift" (PDF). Applied Physics B. 63 (5): 481–484. Bibcode:1996ApPhB..63..481H. doi:10.1007/BF01828944.
- "BICEP2 2014 Results Release". BICEP2. 17 March 2014. Retrieved 18 March 2014.
- Clavin, W. (17 March 2014). "NASA Technology Views Birth of the Universe". Jet Propulsion Lab. Retrieved 17 March 2014.
- Overbye, D. (17 March 2014). "Detection of Waves in Space Buttresses Landmark Theory of Big Bang". The New York Times. Retrieved 17 March 2014.
- Wright, E. L. (12 June 2009). "Cosmology Tutorial - Part 2". Ned Wright's Cosmology Tutorial. UCLA. Retrieved 2011-09-26.
- Nave, R. "Inflationary Period". HyperPhysics. Retrieved 2011-09-26.
- See the last two paragraphs in Rothstein, D. (10 September 2003). "Is the universe expanding faster than the speed of light?". Ask an Astronomer.
- Lineweaver, C.; Davis, T. M. (March 2005). "Misconceptions about the Big Bang" (PDF). Scientific American. pp. 36–45. Retrieved 2008-11-06.
- Davis, T. M.; Lineweaver, C. H. (2004). "Expanding Confusion:common misconceptions of cosmological horizons and the superluminal expansion of the universe". Publications of the Astronomical Society of Australia. 21 (1): 97–109. arXiv:astro-ph/0310808. Bibcode:2004PASA...21...97D. doi:10.1071/AS03040.
- Loeb, A. (2002). "The Long-Term Future of Extragalactic Astronomy". Physical Review D. 65 (4): 047301. arXiv:astro-ph/0107568. Bibcode:2002PhRvD..65d7301L. doi:10.1103/PhysRevD.65.047301.
- Rees, M. J. (1966). "Appearance of relativistically expanding radio sources". Nature. 211 (5048): 468–470. Bibcode:1966Natur.211..468R. doi:10.1038/211468a0.
- Blandford, R. D.; McKee, C. F.; Rees, M. J. (1977). "Super-luminal expansion in extragalactic radio sources". Nature. 267 (5608): 211–216. Bibcode:1977Natur.267..211B. doi:10.1038/267211a0.
- Grozin, A. (2007). Lectures on QED and QCD. World Scientific. p. 89. ISBN 978-981-256-914-1.
- Zhang, S.; Chen, J. F.; Liu, C.; Loy, M. M. T.; Wong, G. K. L.; Du, S. (2011). "Optical Precursor of a Single Photon" (PDF). Physical Review Letters. 106 (24): 243602. Bibcode:2011PhRvL.106x3602Z. doi:10.1103/PhysRevLett.106.243602. PMID 21770570.
- Kåhre, J. (2012). The Mathematical Theory of Information (Illustrated ed.). Springer Science & Business Media. p. 425. ISBN 978-1-4615-0975-2.
- Steinberg, A. M. (1994). When Can Light Go Faster Than Light? (Thesis). University of California, Berkeley. p. 100. Bibcode:1994PhDT.......314S.
- Chubb, J.; Eskandarian, A.; Harizanov, V. (2016). Logic and Algebraic Structures in Quantum Computing (Illustrated ed.). Cambridge University Press. p. 61. ISBN 978-1-107-03339-9.
- Ehlers, J.; Lämmerzahl, C. (2006). Special Relativity: Will it Survive the Next 101 Years? (Illustrated ed.). Springer. p. 506. ISBN 978-3-540-34523-7.
- Martinez, J. C.; Polatdemir, E. (2006). "Origin of the Hartman effect". Physics Letters A. 351 (1–2): 31–36. Bibcode:2006PhLA..351...31M. doi:10.1016/j.physleta.2005.10.076.
- Hartman, T. E. (1962). "Tunneling of a Wave Packet". Journal of Applied Physics. 33 (12): 3427–3433. Bibcode:1962JAP....33.3427H. doi:10.1063/1.1702424.
- Nimtz, Günter; Stahlhofen, Alfons (2007). "Macroscopic violation of special relativity". arXiv:0708.0681 [quant-ph].
- Winful, H. G. (2006). "Tunneling time, the Hartman effect, and superluminality: A proposed resolution of an old paradox". Physics Reports. 436 (1–2): 1–69. Bibcode:2006PhR...436....1W. doi:10.1016/j.physrep.2006.09.002.
- Suarez, A. (26 February 2015). "History". Center for Quantum Philosophy. Retrieved 2017-06-07.
- Salart, D.; Baas, A.; Branciard, C.; Gisin, N.; Zbinden, H. (2008). "Testing spooky action at a distance". Nature. 454 (7206): 861–864. arXiv:0808.3316. Bibcode:2008Natur.454..861S. doi:10.1038/nature07121. PMID 18704081.
- Kim, Yoon-Ho; Yu, Rong; Kulik, Sergei P.; Shih, Yanhua; Scully, Marlan O. (2000). "Delayed "Choice" Quantum Eraser". Physical Review Letters. 84 (1): 1–5. arXiv:quant-ph/9903047. Bibcode:2000PhRvL..84....1K. doi:10.1103/PhysRevLett.84.1. PMID 11015820.
- Hillmer, R.; Kwiat, P. (16 April 2017). "Delayed-Choice Experiments". Scientific American.
- Motl, L. (November 2010). "Delayed choice quantum eraser". The Reference Frame.
- Einstein, A. (1927). Relativity:the special and the general theory. Methuen & Co. pp. 25–27.
- Odenwald, S. "If we could travel faster than light, could we go back in time?". NASA Astronomy Café. Retrieved 7 April 2014.
- Gott, J. R. (2002). Time Travel in Einstein's Universe. Mariner Books. pp. 82–83. ISBN 978-0618257355.
- Petkov, V. (2009). Relativity and the Nature of Spacetime. Springer Science & Business Media. p. 219. ISBN 978-3642019623.
- Raine, D. J.; Thomas, E. G. (2001). An Introduction to the Science of Cosmology. CRC Press. p. 94. ISBN 978-0750304054.
- Z.Y.Wang (2018). "On Faster than Light Photons in Double-Positive Materials". Plasmonics. 13 (6): 2273–2276. doi:10.1007/s11468-018-0749-8.
- "What is the 'zero-point energy' (or 'vacuum energy') in quantum physics? Is it really possible that we could harness this energy?". Scientific American. 1997-08-18. Retrieved 2009-05-27.
- Scharnhorst, Klaus (1990-05-12). "Secret of the vacuum: Speedier light". Retrieved 2009-05-27.
- Visser, Matt; Liberati, Stefano; Sonego, Sebastiano (2002). "Faster-than-c signals, special relativity, and causality". Annals of Physics. 298 (1): 167–185. arXiv:gr-qc/0107091. Bibcode:2002AnPhy.298..167L. doi:10.1006/aphy.2002.6233.
- Fearn, Heidi (2007). "Can Light Signals Travel Faster than c in Nontrivial Vacuua in Flat space-time? Relativistic Causality II". Laser Physics. 17 (5): 695–699. arXiv:0706.0553. Bibcode:2007LaPhy..17..695F. doi:10.1134/S1054660X07050155.
- Nimtz, G (2001). Superluminal Tunneling Devices. The Physics of Communication. pp. 339–355. arXiv:physics/0204043. doi:10.1142/9789812704634_0019. ISBN 978-981-238-449-2.
- Winful, Herbert G. (2007-09-18). "Comment on "Macroscopic violation of special relativity" by Nimtz and Stahlhofen". arXiv:0709.2736 [quant-ph].
- Helling, R. (20 September 2005). "Faster than light or not". atdotde.blogspot.ca.
- Anderson, Mark (18–24 August 2007). "Light seems to defy its own speed limit". New Scientist. 195 (2617). p. 10.
- Winful, Herbert G. (December 2006). "Tunneling time, the Hartman effect, and superluminality: A proposed resolution of an old paradox" (PDF). Physics Reports. 436 (1–2): 1–69. Bibcode:2006PhR...436....1W. doi:10.1016/j.physrep.2006.09.002. Archived from the original (PDF) on 2011-12-18. Retrieved 2010-06-08.
- For a summary of Herbert G. Winful's explanation for apparently superluminal tunneling time which does not involve reshaping, see Winful, Herbert (2007). "New paradigm resolves old paradox of faster-than-light tunneling". SPIE Newsroom. doi:10.1117/2.1200711.0927.
- A number of papers are listed at Literature on Faster-than-light tunneling experiments
- Eckle, P.; Pfeiffer, A. N.; Cirelli, C.; Staudte, A.; Dorner, R.; Muller, H. G.; Buttiker, M.; Keller, U. (5 December 2008). "Attosecond Ionization and Tunneling Delay Time Measurements in Helium". Science. 322 (5907): 1525–1529. Bibcode:2008Sci...322.1525E. doi:10.1126/science.1163439. PMID 19056981.
- Sokolovski, D. (8 February 2004). "Why does relativity allow quantum tunneling to 'take no time'?". Proceedings of the Royal Society A. 460 (2042): 499–506. Bibcode:2004RSPSA.460..499S. doi:10.1098/rspa.2003.1222.
- Amelino-Camelia, Giovanni (1 November 2009). "Doubly-Special Relativity: Facts, Myths and Some Key Open Issues". Recent Developments in Theoretical Physics. Statistical Science and Interdisciplinary Research. 9. pp. 123–170. arXiv:1003.3942. doi:10.1142/9789814287333_0006. ISBN 978-981-4287-32-6.
- Amelino-Camelia, Giovanni (1 July 2002). "Doubly Special Relativity". Nature. 418 (6893): 34–35. arXiv:gr-qc/0207049. Bibcode:2002Natur.418...34A. doi:10.1038/418034a. PMID 12097897.
- Chang, Donald C. (March 22, 2017). "Is there a resting frame in the universe? A proposed experimental test based on a precise measurement of particle mass". The European Physical Journal Plus. 132 (3). doi:10.1140/epjp/i2017-11402-4.
- Lineweaver, Charles H.; Davis, Tamara M. (March 2005). "Misconceptions about the Big Bang". Scientific American.
- Alcubierre, Miguel (1 May 1994). "The warp drive: hyper-fast travel within general relativity". Classical and Quantum Gravity. 11 (5): L73–L77. arXiv:gr-qc/0009013. Bibcode:1994CQGra..11L..73A. CiteSeerX 10.1.1.338.8690. doi:10.1088/0264-9381/11/5/001.
- Traveling Faster Than the Speed of Light: A New Idea That Could Make It Happen Newswise, retrieved on 24 August 2008.
- Heim, Burkhard (1977). "Vorschlag eines Weges einer einheitlichen Beschreibung der Elementarteilchen [Recommendation of a Way to a Unified Description of Elementary Particles]". Zeitschrift für Naturforschung. 32a (3–4): 233–243. Bibcode:1977ZNatA..32..233H. doi:10.1515/zna-1977-3-404.
- Colladay, Don; Kostelecký, V. Alan (1997). "CPT violation and the standard model". Physical Review D. 55 (11): 6760–6774. arXiv:hep-ph/9703464. Bibcode:1997PhRvD..55.6760C. doi:10.1103/PhysRevD.55.6760.
- Colladay, Don; Kostelecký, V. Alan (1998). "Lorentz-violating extension of the standard model". Physical Review D. 58 (11): 116002. arXiv:hep-ph/9809521. Bibcode:1998PhRvD..58k6002C. doi:10.1103/PhysRevD.58.116002.
- Kostelecký, V. Alan (2004). "Gravity, Lorentz violation, and the standard model". Physical Review D. 69 (10): 105009. arXiv:hep-th/0312310. Bibcode:2004PhRvD..69j5009K. doi:10.1103/PhysRevD.69.105009.
- Gonzalez-Mestres, Luis (2009). "AUGER-HiRes results and models of Lorentz symmetry violation". Nuclear Physics B: Proceedings Supplements. 190: 191–197. arXiv:0902.0994. Bibcode:2009NuPhS.190..191G. doi:10.1016/j.nuclphysbps.2009.03.088.
- Kostelecký, V. Alan; Russell, Neil (2011). "Data tables for Lorentz and CPT violation". Reviews of Modern Physics. 83 (1): 11–31. arXiv:0801.0287. Bibcode:2011RvMP...83...11K. doi:10.1103/RevModPhys.83.11.
- Kostelecký, V. A.; Samuel, S. (15 January 1989). "Spontaneous breaking of Lorentz symmetry in string theory" (PDF). Physical Review D. 39 (2): 683–685. Bibcode:1989PhRvD..39..683K. doi:10.1103/PhysRevD.39.683. hdl:2022/18649. PMID 9959689.
- "PhysicsWeb - Breaking Lorentz symmetry". PhysicsWeb. 2004-04-05. Archived from the original on 2004-04-05. Retrieved 2011-09-26.
- Mavromatos, Nick E. (15 August 2002). "Testing models for quantum gravity". CERN Courier.
- Overbye, Dennis; Interpreting the Cosmic Rays, The New York Times, 31 December 2002
- Volovik, G. E. (2003). "The Universe in a helium droplet". International Series of Monographs on Physics. 117: 1–507.
- Zloshchastiev, Konstantin G. (2011). "Spontaneous symmetry breaking and mass generation as built-in phenomena in logarithmic nonlinear quantum theory". Acta Physica Polonica B. 42 (2): 261–292. arXiv:0912.4139. Bibcode:2011AcPPB..42..261Z. doi:10.5506/APhysPolB.42.261.
- Avdeenkov, Alexander V.; Zloshchastiev, Konstantin G. (2011). "Quantum Bose liquids with logarithmic nonlinearity: Self-sustainability and emergence of spatial extent". Journal of Physics B: Atomic, Molecular and Optical Physics. 44 (19): 195303. arXiv:1108.0847. Bibcode:2011JPhB...44s5303A. doi:10.1088/0953-4075/44/19/195303.
- Zloshchastiev, Konstantin G.; Chakrabarti, Sandip K.; Zhuk, Alexander I.; Bisnovatyi-Kogan, Gennady S. (2010). "Logarithmic nonlinearity in theories of quantum gravity: Origin of time and observational consequences". American Institute of Physics Conference Series. AIP Conference Proceedings. 1206: 288–297. arXiv:0906.4282. Bibcode:2010AIPC.1206..112Z. doi:10.1063/1.3292518.
- Zloshchastiev, Konstantin G. (2011). "Vacuum Cherenkov effect in logarithmic nonlinear quantum theory". Physics Letters A. 375 (24): 2305–2308. arXiv:1003.0657. Bibcode:2011PhLA..375.2305Z. doi:10.1016/j.physleta.2011.05.012.
- Adamson, P.; Andreopoulos, C.; Arms, K.; Armstrong, R.; Auty, D.; Avvakumov, S.; Ayres, D.; Baller, B.; et al. (2007). "Measurement of neutrino velocity with the MINOS detectors and NuMI neutrino beam". Physical Review D. 76 (7): 072005. arXiv:0706.0437. Bibcode:2007PhRvD..76g2005A. doi:10.1103/PhysRevD.76.072005.
- Overbye, Dennis (22 September 2011). "Tiny neutrinos may have broken cosmic speed limit". The New York Times.
That group found, although with less precision, that the neutrino speeds were consistent with the speed of light.
- "MINOS reports new measurement of neutrino velocity". Fermilab today. June 8, 2012. Retrieved June 8, 2012.
- Adam, T.; et al. (OPERA Collaboration) (22 September 2011). "Measurement of the neutrino velocity with the OPERA detector in the CNGS beam". arXiv:1109.4897v1 [hep-ex].
- Cho, Adrian; Neutrinos Travel Faster Than Light, According to One Experiment, Science NOW, 22 September 2011
- Overbye, Dennis (18 November 2011). "Scientists Report Second Sighting of Faster-Than-Light Neutrinos". The New York Times. Retrieved 2011-11-18.
- Adam, T.; et al. (OPERA Collaboration) (17 November 2011). "Measurement of the neutrino velocity with the OPERA detector in the CNGS beam". arXiv:1109.4897v2 [hep-ex].
- Reuters: Study rejects "faster than light" particle finding
- Antonello, M.; et al. (ICARUS Collaboration) (15 March 2012). "Measurement of the neutrino velocity with the ICARUS detector at the CNGS beam". Physics Letters B. 713 (1): 17–22. arXiv:1203.3433. Bibcode:2012PhLB..713...17A. doi:10.1016/j.physletb.2012.05.033.
- Strassler, M. (2012) "OPERA: What Went Wrong" profmattstrassler.com
- Randall, Lisa; Warped Passages: Unraveling the Mysteries of the Universe's Hidden Dimensions, p. 286: "People initially thought of tachyons as particles travelling faster than the speed of light...But we now know that a tachyon indicates an instability in a theory that contains it. Regrettably for science fiction fans, tachyons are not real physical particles that appear in nature."
- Gates, S. James (2000-09-07). "Superstring Theory: The DNA of Reality".
- Chodos, A.; Hauser, A. I.; Alan Kostelecký, V. (1985). "The neutrino as a tachyon". Physics Letters B. 150 (6): 431–435. Bibcode:1985PhLB..150..431C. doi:10.1016/0370-2693(85)90460-5.
- Chodos, Alan; Kostelecký, V. Alan; IUHET 280 (1994). "Nuclear Null Tests for Spacelike Neutrinos". Physics Letters B. 336 (3–4): 295–302. arXiv:hep-ph/9409404. Bibcode:1994PhLB..336..295C. doi:10.1016/0370-2693(94)90535-5.
- Chodos, A.; Kostelecký, V. A.; Potting, R.; Gates, Evalyn (1992). "Null experiments for neutrino masses". Modern Physics Letters A. 7 (6): 467–476. Bibcode:1992MPLA....7..467C. doi:10.1142/S0217732392000422.
- Chang, Tsao (2002). "Parity Violation and Neutrino Mass". Nuclear Science and Techniques. 13: 129–133. arXiv:hep-ph/0208239. Bibcode:2002hep.ph....8239C.
- Hughes, R. J.; Stephenson, G. J. (1990). "Against tachyonic neutrinos". Physics Letters B. 244 (1): 95–100. Bibcode:1990PhLB..244...95H. doi:10.1016/0370-2693(90)90275-B.
- Wang, Z.Y. (2016). "Modern Theory for Electromagnetic Metamaterials". Plasmonics. 11 (2): 503–508. doi:10.1007/s11468-015-0071-7.
- Veselago, V. G. (1968). "The electrodynamics of substances with simultaneously negative values of permittivity and permeability". Soviet Physics Uspekhi. 10 (4): 509–514. Bibcode:1968SvPhU..10..509V. doi:10.1070/PU1968v010n04ABEH003699.
- Gimon, Eric G.; Hořava, Petr (2004). "Over-rotating black holes, Gödel holography and the hypertube". arXiv:hep-th/0405019.
- Falla, D. F.; Floyd, M. J. (2002). "Superluminal motion in astronomy". European Journal of Physics. 23 (1): 69–81. Bibcode:2002EJPh...23...69F. doi:10.1088/0143-0807/23/1/310.
- Kaku, Michio (2008). "Faster than Light". Physics of the Impossible. Allen Lane. pp. 197–215. ISBN 978-0-7139-9992-1.
- Nimtz, Günter (2008). Zero Time Space. Wiley-VCH. ISBN 978-3-527-40735-4.
- Cramer, J. G. (2009). "Faster-than-Light Implications of Quantum Entanglement and Nonlocality". In Millis, M. G.; et al. (eds.). Frontiers of Propulsion Science. American Institute of Aeronautics and Astronautics. pp. 509–529. ISBN 978-1-56347-956-4.
- Wikimedia Commons has media related to Faster-than-light travel.
- Measurement of the neutrino velocity with the OPERA detector in the CNGS beam
- Encyclopedia of laser physics and technology on "superluminal transmission", with more details on phase and group velocity, and on causality
- Markus Pössel: Faster-than-light (FTL) speeds in tunneling experiments: an annotated bibliography
- Alcubierre, Miguel; The Warp Drive: Hyper-Fast Travel Within General Relativity, Classical and Quantum Gravity 11 (1994), L73–L77
- A systemized view of superluminal wave propagation
- Relativity and FTL Travel FAQ
- Usenet Physics FAQ: is FTL travel or communication Possible?
- Relativity, FTL and causality
- Yan, Kun (2006). "The tendency analytical equations of stable nuclides and the superluminal velocity motion laws of matter in geospace". Progress in Geophysics. 21: 38. Bibcode:2006PrGeo..21...38Y.
- Glasser, Ryan T. (2012). "Stimulated Generation of Superluminal Light Pulses via Four-Wave Mixing". Physical Review Letters. 108 (17): 173902. arXiv:1204.0810. Bibcode:2012PhRvL.108q3902G. doi:10.1103/PhysRevLett.108.173902. PMID 22680868.
- Conical and paraboloidal superluminal particle accelerators
- Relativity and FTL (=Superluminal motion) Travel Homepage
Graph Linear Equations Practice
Lesson 18 of 19
Objective: SWBAT graph linear equations using intercepts, points, and slope-intercept form.
This warm-up will give students more work with algebraically manipulating an equation in two variables. Have students work on the two problems by themselves first and then turn and talk with their partner to compare their answers. When students are comparing answers, encourage them to discuss how they determined the slope and y-intercept of each equation (MP3). As you are monitoring student discussions, listen for those that are making good justifications for their work. You can ask a pair of students to model their discussion for the class as an example of how a solid discussion should sound.
The purpose of this practice is to help students develop fluency around the three different techniques for graphing a linear equation (finding intercepts, plotting points that make the equation true, and putting the equation in slope-intercept form).
Depending on your class, you may want to work through the first question together as guided practice. In either case, as students are working, ask them to pay attention to which technique seems to work best for each equation (MP7). Students will notice that one technique may work better than another based on the structure of the equation. The equations were also chosen so that one technique will appear to be more advantageous on certain questions.
In each question, the students will be graphing the same equation three times. The advantage to this type of practice is that students have a built-in check for each question. Since students will be graphing the same linear equation, if one graph looks different from the others students know that they need to check their work for correctness (MP6).
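As an illustration of how the same line emerges from all three techniques (the equation 2x + 3y = 6 is a hypothetical example, not one taken from the practice set):

\[
\begin{aligned}
&\text{Intercepts: } y=0 \Rightarrow x=3,\ \text{giving } (3,0); \qquad x=0 \Rightarrow y=2,\ \text{giving } (0,2).\\
&\text{Plotting points: } x=-3 \Rightarrow 3y = 6 - 2(-3) = 12 \Rightarrow y=4,\ \text{giving } (-3,4).\\
&\text{Slope-intercept form: } 3y = -2x + 6 \;\Rightarrow\; y = -\tfrac{2}{3}x + 2.
\end{aligned}
\]

All three approaches produce the same line, which provides exactly the built-in check described above.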
In this ticket out the door, students will evaluate an equation to determine which graphing technique will be most advantageous (MP1 & MP2). Students will need to justify their choice through a written explanation (MP3). In this ticket out, it is expected that many students will choose either intercepts or point-plotting. However, an argument could be made for any of the three techniques. The idea is that students have an opportunity to choose a technique and use mathematical language to justify their choice.
Understanding the Basics: How to Derive a Formula from First Principles
Deriving a formula from first principles is an essential skill in mathematics and science. It involves starting with basic principles or axioms and using logical reasoning to arrive at a general equation that describes a particular phenomenon or relationship between variables. This process can be challenging, but it is also rewarding because it allows us to understand the underlying principles of a problem and develop new insights into its behavior.
The first step in deriving a formula is to identify the key variables involved in the problem. These may include physical quantities such as distance, time, velocity, acceleration, force, energy, or temperature, or abstract concepts such as probability, logic, or algebraic expressions. Once we have identified the variables, we need to define them precisely and establish their relationships based on our understanding of the problem.
Next, we need to apply logical reasoning and mathematical techniques to derive a formula that expresses the relationship between the variables. This may involve using algebraic manipulation, calculus, geometry, or other mathematical tools depending on the nature of the problem. We may also need to make assumptions or simplifications to the problem to make it more tractable or to focus on the most important aspects of the problem.
One common approach to deriving a formula is to use dimensional analysis. This involves analyzing the units of the variables involved in the problem and using them to construct a formula that has the correct dimensions and units. For example, if we are trying to derive a formula for the period of a pendulum, we know that the period must depend on the length of the pendulum, the gravitational acceleration, and possibly other factors such as the mass of the pendulum or the amplitude of its swing. By analyzing the units of these variables, we can construct a formula that has the correct dimensions of time and depends only on the relevant variables.
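A sketch of that dimensional-analysis argument for the pendulum (the dimensionless constant k cannot be found this way; its value of 2π comes from solving the equation of motion):

\[
T = k\, L^{a} g^{b} m^{c}, \qquad
[\mathrm{s}] = [\mathrm{m}]^{a}\,[\mathrm{m/s^2}]^{b}\,[\mathrm{kg}]^{c}
\;\Rightarrow\; c = 0,\quad a + b = 0,\quad -2b = 1
\;\Rightarrow\; T = k\sqrt{\tfrac{L}{g}} .
\]

Note that the analysis also shows the period cannot depend on the mass of the pendulum bob.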
Another approach to deriving a formula is to use physical or geometric reasoning. This involves visualizing the problem and using our intuition about how physical or geometric quantities behave to derive a formula. For example, if we are trying to derive a formula for the area of a circle, we can imagine dividing the circle into many thin sectors and approximating each sector as a triangle whose height is the radius and whose base is its short arc of the circumference. Summing the areas of all the triangles gives half the radius times the circumference, which yields the formula for the area of the circle in terms of its radius.
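Written out, with the circle cut into n thin sectors, the i-th sector having arc length \( \Delta s_i \):

\[
A \approx \sum_{i=1}^{n} \tfrac{1}{2}\, r\, \Delta s_i
      = \tfrac{1}{2}\, r \sum_{i=1}^{n} \Delta s_i
      = \tfrac{1}{2}\, r\, (2\pi r)
      = \pi r^{2}.
\]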
In some cases, we may need to use experimental data or empirical observations to derive a formula. This involves collecting data on the variables involved in the problem and using statistical methods to analyze the data and derive a formula that fits the observed behavior. For example, if we are trying to derive a formula for the relationship between temperature and pressure in a gas, we can measure the temperature and pressure of the gas at different conditions and use regression analysis to fit a curve or equation that describes the relationship between the two variables.
Regardless of the approach we use, deriving a formula from first principles requires patience, persistence, and creativity. It is not always easy to see the underlying principles of a problem or to find the right mathematical tools to express them. However, by practicing this skill and seeking feedback from others, we can develop our ability to derive formulas and gain a deeper understanding of the world around us.
Using Mathematical Concepts to Derive Formulas for Complex Problems
Mathematics is a subject that has been around for centuries, and it has played a significant role in shaping the world we live in today. One of the most important aspects of mathematics is the ability to derive formulas for complex problems. A formula is a mathematical expression that describes a relationship between different variables. It is an essential tool for solving problems in various fields such as physics, engineering, economics, and many others.
Deriving a formula can be a challenging task, especially when dealing with complex problems. However, with the right approach and understanding of mathematical concepts, it is possible to come up with a formula that accurately describes the problem at hand. In this article, we will explore some of the key concepts involved in deriving formulas for complex problems.
The first step in deriving a formula is to understand the problem you are trying to solve. This involves identifying the variables involved and their relationships. For example, if you are trying to calculate the distance traveled by a car, you need to know the speed of the car and the time it takes to travel a certain distance. Once you have identified the variables involved, you can start to look for patterns or relationships between them.
The next step is to use mathematical concepts to express these relationships. This may involve using algebraic equations, trigonometric functions, or calculus. For example, if you are trying to calculate the area of a circle, you can use the formula A = πr^2, where A is the area, r is the radius, and π is a constant value equal to approximately 3.14.
Once you have expressed the relationships between the variables using mathematical concepts, you can start to manipulate the equations to simplify them and make them easier to work with. This may involve rearranging terms, factoring, or using identities. For example, if you are trying to solve a quadratic equation, you can use the quadratic formula, which is derived from completing the square.
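As a sketch of that derivation by completing the square:

\[
ax^2 + bx + c = 0
\;\Rightarrow\; x^2 + \tfrac{b}{a}x = -\tfrac{c}{a}
\;\Rightarrow\; \left(x + \tfrac{b}{2a}\right)^{2} = \frac{b^{2} - 4ac}{4a^{2}}
\;\Rightarrow\; x = \frac{-b \pm \sqrt{b^{2} - 4ac}}{2a}.
\]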
Another important concept in deriving formulas is the use of limits. Limits are used to describe the behavior of a function as it approaches a certain value. They are essential in calculus, where they are used to calculate derivatives and integrals. For example, if you are trying to find the slope of a curve at a particular point, you can use the limit definition of the derivative.
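The limit definition referred to here, together with a short worked case (the choice f(x) = x² is purely illustrative):

\[
f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}, \qquad
f(x) = x^{2}:\quad \frac{(x+h)^{2} - x^{2}}{h} = 2x + h \;\xrightarrow{\;h \to 0\;}\; 2x .
\]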
In addition to these concepts, there are also various techniques that can be used to derive formulas for complex problems. These include numerical methods, such as finite difference methods and Monte Carlo simulations, as well as analytical methods, such as perturbation theory and asymptotic analysis. The choice of method depends on the nature of the problem and the available resources.
In conclusion, deriving a formula for a complex problem requires a deep understanding of mathematical concepts and techniques. It involves identifying the variables involved, expressing their relationships using mathematical concepts, manipulating equations to simplify them, and using limits to describe the behavior of functions. With the right approach and knowledge, it is possible to come up with a formula that accurately describes the problem at hand.
Step-by-Step Guide: Deriving Formulas for Physics Equations
Deriving formulas for physics equations can be a daunting task, especially if you are new to the subject. However, with a step-by-step guide, it is possible to derive formulas that will help you solve complex problems in physics.
Step 1: Identify the Variables
The first step in deriving a formula is to identify the variables involved in the problem. These variables could be physical quantities such as distance, time, velocity, acceleration, force, and energy. Once you have identified the variables, you need to assign symbols to them. For example, distance could be represented by ‘d,’ time by ‘t,’ velocity by ‘v,’ and so on.
Step 2: Write Down the Known Equations
The next step is to write down any known equations that relate to the variables you have identified. These equations could be from your textbook or any other reliable source. For example, if you are working on a problem involving motion, you could use the equation:
distance = velocity x time
This equation relates distance, velocity, and time. It tells us that the distance traveled by an object is equal to its velocity multiplied by the time taken.
Step 3: Manipulate the Equations
Once you have written down the known equations, you need to manipulate them to obtain the desired formula. This involves rearranging the equations to isolate the variable you want to find. For example, if you want to find the velocity of an object, you could rearrange the above equation as follows:
velocity = distance / time
This equation tells us that the velocity of an object is equal to the distance traveled divided by the time taken.
Step 4: Check Units
It is important to check the units of the variables in your formula to ensure that they are consistent. In physics, we use the SI system of units, which includes meters (m) for distance, seconds (s) for time, and meters per second (m/s) for velocity. If the units are not consistent, you need to convert them to the appropriate units.
Step 5: Test Your Formula
Once you have derived your formula, it is important to test it using different values of the variables. This will help you determine if your formula is correct and reliable. You can also compare your results with those obtained from other sources to ensure that they are consistent.
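For example, a quick numerical test of the rearranged formula above (the values are arbitrary):

\[
v = \frac{d}{t} = \frac{100\ \mathrm{m}}{20\ \mathrm{s}} = 5\ \mathrm{m/s},
\]

which has the expected units of meters per second; if the algebra had instead produced units such as metre-seconds, that would signal a mistake in the rearrangement.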
In conclusion, deriving formulas for physics equations requires a systematic approach that involves identifying the variables, writing down the known equations, manipulating the equations, checking units, and testing the formula. With practice, you can become proficient in deriving formulas and solving complex problems in physics.
Deriving Formulas in Chemistry: Tips and Tricks for Success
Chemistry is a fascinating subject that deals with the study of matter, its properties, and how it interacts with other substances. One of the fundamental aspects of chemistry is the ability to derive formulas, which are essential for understanding chemical reactions and predicting their outcomes.
Deriving formulas can be a challenging task, especially for beginners. However, with some tips and tricks, you can master this skill and become proficient in chemistry. In this article, we will explore some useful strategies for deriving formulas in chemistry.
Understand the Basics
Before you start deriving formulas, it is crucial to have a solid understanding of the basics of chemistry. You should be familiar with the periodic table, atomic structure, and chemical bonding. These concepts provide the foundation for deriving formulas and understanding chemical reactions.
Identify the Elements
The first step in deriving a formula is to identify the elements present in the compound. This can be done by analyzing the name or formula of the compound. For example, if the compound is named sodium chloride, you know that it contains sodium (Na) and chlorine (Cl).
Determine the Valence Electrons
Once you have identified the elements, the next step is to determine their valence electrons. Valence electrons are the outermost electrons in an atom that participate in chemical bonding. The number of valence electrons determines the element’s reactivity and its ability to form bonds.
Calculate the Charge
After determining the valence electrons, you need to calculate the charge on each ion. Ions are atoms or molecules that have gained or lost electrons, resulting in a positive or negative charge. To calculate the charge, you need to know the number of valence electrons and the number of electrons gained or lost.
Write the Formula
Finally, you can write the formula by combining the symbols of the elements and indicating the number of atoms using subscripts. The subscripts indicate the number of atoms of each element in the compound. For example, the formula for sodium chloride is NaCl, indicating that there is one atom of sodium and one atom of chlorine.
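As a worked example beyond sodium chloride (magnesium chloride is an illustrative choice, not taken from the text above): magnesium has 2 valence electrons and loses both to form Mg²⁺, while each chlorine atom gains 1 electron to form Cl⁻. Balancing the charges requires two chloride ions for every magnesium ion:

\[
\mathrm{Mg^{2+}} + 2\,\mathrm{Cl^{-}} \;\longrightarrow\; \mathrm{MgCl_{2}},
\]

so the subscripts in the formula reflect the ratio needed to make the compound electrically neutral.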
Practice, Practice, Practice
Deriving formulas requires practice and repetition. The more you practice, the better you will become at identifying elements, determining valence electrons, calculating charges, and writing formulas. You can use online resources, textbooks, and practice problems to improve your skills.
Use Mnemonics

Mnemonics are memory aids that help you remember complex information. In chemistry, mnemonics can be used to remember the names and symbols of elements, as well as their valence electrons. For example, the mnemonic “HOFBrINCl” can be used to remember the diatomic elements (hydrogen, oxygen, fluorine, bromine, iodine, nitrogen, and chlorine).
Seek Help When Needed

If you are struggling with deriving formulas, don’t hesitate to seek help from your teacher or tutor. They can provide you with additional resources and guidance to help you improve your skills. You can also join study groups or online forums to connect with other students and share tips and strategies.
In conclusion, deriving formulas is an essential skill in chemistry that requires practice, patience, and persistence. By understanding the basics, identifying elements, determining valence electrons, calculating charges, and writing formulas, you can master this skill and become proficient in chemistry. Remember to use mnemonics, seek help when needed, and most importantly, keep practicing!
Addition Math Review 2
In this addition review activity, students solve 12 problems that require them to add multiple addends. They also respond to 3 short answer math review questions and complete a pelican drawing activity.
Saxon Math Intermediate 5 - Student Edition
Expand your resource library with this collection of Saxon math materials. Covering a wide range of topics from basic arithmetic and place value, to converting between fractions, decimals, and percents, these example problems and skills...
4th - 6th Math CCSS: Adaptable
Adding Mixed Numbers (Unlike Denominators)
Mix things up in your elementary math class with a series of problem-solving exercises. Presented with a series of mixed number word problems, young mathematicians are asked to solve them by using either visual fraction models or...
3rd - 6th Math CCSS: Adaptable
Water: The Math Link
Make a splash with a math skills resource! Starring characters from the children's story Mystery of the Muddled Marsh, several worksheets create interdisciplinary connections between science, language arts, and math. They cover a wide...
1st - 4th English Language Arts CCSS: Adaptable
Study Jams! Relate Addition & Subtraction
Understanding the inverse relationship between addition and subtraction is essential for developing fluency in young mathematicians. Zoe and RJ explain how three numbers can form fact families that make two addition and two subtraction...
1st - 4th Math CCSS: Adaptable
For this geometry worksheet, learners figure out the center of enlargement and the scale being used. They find the image and pre-image and label it. There are 7 questions.
Comparing Linear and Exponential Representations
Graphs, tables, and equations, oh my! Your Algebra learners will compare and contrast linear and exponential functions in this complete lesson plan. It includes the graphs, tables, verbal description scenarios, equations, and an...
7th - 9th Math CCSS: Designed
Stitching Quilts into Coordinate Geometry
Who knew quilting would be so mathematical? Introduce linear equations and graphing while working with the lines of pre-designed quilts. Use the parts of the design to calculate the slope of the linear segments. The project comes with...
8th - 11th Math CCSS: Adaptable
Solving Linear Equations in Two Variables
Solving problems about pen and paper with systems of equations ... or is it the other way around? In the lesson, learners first interpret expressions and use equations in two variables to solve problems about notebooks and pens. They...
9th - 12th Math CCSS: Designed
This collection gathers resources for teaching students to write conclusions. One lesson plan pairs a worksheet on the parts of a conclusion paragraph with guidance from BetterLesson. Another conclusion worksheet simply instructs: using the techniques learned in this lesson, write the conclusion to your essay in the space below. Additional materials show how to write a strong concluding paragraph and point to further essay-writing resources and worksheets, including one on writing closing sentences and conclusions. A tip sheet on writing introductions and conclusions notes that even when you know everything about your paper's topic, it can be hard to know how to create a hook for the reader.
A science-focused resource has students learn what it means to draw a conclusion: they are given tips for drawing conclusions, consider a specific scenario, and then practice. Other materials address teaching introductions and conclusions to English language learners, with the goal of getting students to read analytically to inform their own writing. A "writing memorable conclusions" exercise asks students to write the conclusion, and only the conclusion, for an essay that develops a given topic, and a related activity has students practice writing and evaluating conclusion paragraphs for each of a series of experiments.
The wider collection also includes noun worksheets, writing prompts, and materials on compound words and figurative language, among them an inferences-and-conclusions worksheet based on The Wizard of Oz. A quiz and worksheet gauge knowledge of narrative essay conclusions alongside a lesson titled "Writing a Conclusion for a Narrative Essay," and a set of exercises focusing on introduction and conclusion paragraphs helps students improve how they write these crucial paragraphs in all of their essays.
The Writing Center's guidance on conclusions warns that random facts and stray bits of evidence at the end of an otherwise well-organized essay can undermine it. A conclusions packet contains 31 pages of worksheets, posters, and detailed teacher notes to help students craft better conclusion sentences, and a bank of ninth-grade essay-writing questions is available for building custom printable tests and worksheets.
Because the conclusion of an essay is so important, students need to learn how to write a conclusion that truly finishes the piece. One lesson teaches students how to write a conclusion for informational text, using pre-written texts for practice. Conclusions are often the most difficult part of an essay to write, so practice worksheets are provided: in a conclusion paragraph, you summarize what you have written about in your paper, which requires thinking back over the essay as a whole.
A final "developing ideas" activity asks students to write three story ideas and create a personal ending, or conclusion, for each.
The Federal Reserve System was established in 1913, to create a more stable monetary system. Its purpose was to give the United States a monetary policy that would be more efficient, flexible, and safe.
The Fed has evolved over the years as banking and economics have changed. It now focuses on monitoring payments, credit risk, and time lag risk. These changes are beneficial for banks, as they increase their ability to borrow. Nevertheless, the Fed must remain independent from the executive branch. Hence, the governors and the Board of Governors play an important role in determining the nation’s monetary policy.
In addition to regulating national banks, the Fed also regulates some state banks. Some of the major duties of the Fed are to monitor and control financial institutions, promote a safe monetary system, and advocate community development.
Before the creation of the Fed, the government was able to borrow money by issuing bonds to finance war. However, after World War II, it became necessary to create an emergency reserve. Banks needed to have extra reserves to prevent a run on their accounts. This requirement was set by the Board of Governors, which fixed the minimum reserve requirement at seven, ten, and thirteen percent.
Following World War I, the system's operations were expanded. Benjamin Strong, the head of the Federal Reserve Bank of New York, came to dominate Fed policy, and the policies established during this era continued to shape the system through 1951.
During this period, the Fed maintained very low interest rates and purchased securities on the open market. This policy allowed large amounts of inflationary new money to enter the economy. Congress had also passed the Banking Act of 1935, which centralized policymaking authority in the Board of Governors.
Today, the Federal Open Market Committee (FOMC) determines the nation's monetary policy. Its members include the governors of the Federal Reserve Board, who are appointed by the president of the United States to staggered terms of 14 years and confirmed by the Senate, along with a rotating group of Reserve Bank presidents.
Currently, the Board of Governors consists of seven members, who are nominated by the president of the United States and confirmed by the Senate.
Aside from the Board of Governors, there are twelve regional Reserve Banks in the United States. Each of these banks has a president, who serves as its chief executive officer.
The Board of Governors is responsible for leading the Federal Reserve System and overseeing the performance of the Reserve Banks. Additionally, the Board of Governors reviews the budgets of all the Reserve Banks.
Although the Board of Governors is the central body of the Federal Reserve System, it is accountable to Congress. The Board of Governors is a federal agency located in Washington DC.
The Board of Governors has five primary functions. The first is to determine the reserve balances that member banks are required to hold. The requirement is set as a percentage of certain of a bank's deposit liabilities rather than of its total assets, and the board may change this requirement from time to time.
This tutorial explains basic C-style strings and characters, which originate in the C language but are also supported by C++. A C-style string is a container data type that stores its characters in a one-dimensional array, and this array is terminated by a null character '\0'. The functions that operate on such strings in C and C++ come from the string library, which provides many functions such as strcat, strcpy, and others.
We will demonstrate this concept on the Linux operating system, so you need to have Ubuntu installed and running on your system. If you are working in a virtual machine, install and configure VirtualBox, then add an Ubuntu image to it. You can download the image from Ubuntu's official website according to your system requirements and operating system; installation can take some time, after which you configure the system in the virtual machine. During configuration, make sure you create a user, because a user account is essential for working in the Ubuntu terminal, and Ubuntu requires user authentication before any software installation.
We have used version 20.04 of Ubuntu; you may use the latest one. For the implementation you need a text editor and access to the Linux terminal, because we will view the output of the source code on the terminal.
A string is a very commonly used data type supplied by the programming language's library. It is a variable that contains a sequence of characters, which may include letters, spaces, and so on. First a string is declared, and then a value is assigned to it to initialize it. To use the C string-handling functions, we need to include the string header (<string.h> in C, or <cstring> in C++) at the top of the source file. This header provides the functions that operate on a string.
As noted above, a C-style string is a character array with a terminating character at the end. Suppose the string is declared and initialized with the word 'Aqsa'. The name 'Aqsa' contains 4 letters, but the array holds 5 characters in total, including the terminating null character. According to the rules of array initialization, the same string can be written either by listing the characters explicitly or by using a string literal, as sketched below.
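The original code listing is not reproduced in this text, so the following is a minimal sketch of the two equivalent initializations being described; the array name 'name' is a placeholder.

```cpp
#include <iostream>

int main() {
    // Explicit form: the four letters plus the terminating null character.
    char name_explicit[5] = {'A', 'q', 's', 'a', '\0'};

    // Equivalent shorthand: the string literal appends the '\0' automatically.
    char name[] = "Aqsa";

    std::cout << name_explicit << " " << name << std::endl;
    return 0;
}
```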
There are many string functions supported by C and C++. Some of them are described here:
- strcpy(s1, s2): Copies string s2 into string s1.
- strcat(s1, s2): Concatenates string s2 onto the end of string s1.
- strlen(s1): Returns the length of string s1.
- strchr(s1, ch): Returns a pointer to the first occurrence of the character ch in string s1.
We will explain each of these later in the article. First, consider a basic example of strings in C++.
Include the input-output stream library. Inside the main program, we declare a character array whose size is large enough to hold the characters plus the terminating null character, which, as described earlier, must appear at the end of the array. We then display the value of the variable by using its name in the cout statement.
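The listing for this basic example is not reproduced in the text, so the sketch below is an illustration of what such a program might look like; the string value and the file name in the trailing comments are placeholders, and the g++ commands are the usual way to compile and run on Linux.

```cpp
#include <iostream>

int main() {
    // Character array holding "Aqsa": 4 letters plus the terminating '\0'.
    char name[5] = {'A', 'q', 's', 'a', '\0'};

    // Printing the array by name prints its characters up to the null terminator.
    std::cout << "The value of the string is: " << name << std::endl;
    return 0;
}

// Typical compile-and-run commands on Linux (the file name is a placeholder):
//   g++ str.cpp -o str
//   ./str
```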
Write the code in a file and save it with a C/C++ source-file extension. To see the output, compile the code and then execute it; for C++ code we need a compiler, and on the Linux operating system the g++ compiler is commonly used.
The '-o' flag is used to name the output file produced from the source code.
Concatenation is the process of joining two strings. The string library provides a built-in function for this, but two strings can also be concatenated by adding them directly without calling a function. For this purpose, first include the string library.
Then, in the main program, take two strings and use a third string to store the result.
Add the two values together, store the result in the third variable, and then print the final string.
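A minimal sketch of this concatenation example, assuming C++ std::string objects; the values "Linux" and "Hint" are placeholders.

```cpp
#include <iostream>
#include <string>

int main() {
    std::string str1 = "Linux";
    std::string str2 = "Hint";

    // Direct concatenation with the + operator, stored in the third string.
    std::string str3 = str1 + str2;

    std::cout << str3 << std::endl;  // prints "LinuxHint"
    return 0;
}
```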
From the resulting value, you can see that the two strings provided in the program have been combined.
This program uses three built-in string functions. As in the previous example, you need three string variables; assign values to two of them, and take an integer variable to hold the total length. The first step is to copy string str1 into the empty character array str3, using strcpy as follows.
After that, str3 is displayed to confirm that the data was copied. The next step is concatenation using the built-in function strcat, which takes both strings str1 and str2 as parameters; no third variable is needed to store the result this time. Now display str1 after the two strings have been combined.
After the concatenation, we measure the total length of the first string. For this purpose, use the strlen function with the single argument str1, and store the result in the integer variable ‘len’. A sketch of the whole program follows.
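A minimal sketch of this sequence; the variable names follow the article, while the contents and buffer sizes are assumptions:

```cpp
#include <cstring>
#include <iostream>

int main() {
    // Contents are illustrative; buffers are large enough for the concatenated result.
    char str1[50] = "Aqsa";
    char str2[50] = "Yasin";
    char str3[50];
    int len;

    std::strcpy(str3, str1);              // copy str1 into the empty array str3
    std::cout << "str3: " << str3 << std::endl;

    std::strcat(str1, str2);              // append str2 onto the end of str1
    std::cout << "str1: " << str1 << std::endl;

    len = std::strlen(str1);              // length of str1 after concatenation
    std::cout << "len: " << len << std::endl;
    return 0;
}
```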
After that, print the value through the ‘len’ variable. When we execute the code and the results are displayed on the terminal, you can see that str3 contains the value that str1 held before concatenation, that concatenation has combined the two strings, and that the number of characters in the string after concatenation is displayed at the end.
A very commonly used feature of C++ is the ‘getline’ function. It takes ‘cin’ and the variable that will hold the value the user enters as its arguments, as in the sketch below.
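A minimal sketch of reading a line of input with getline; the variable name and prompt text are illustrative:

```cpp
#include <iostream>
#include <string>

int main() {
    std::string value;                        // illustrative variable name
    std::cout << "Enter a string: ";
    std::getline(std::cin, value);            // reads a whole line, spaces included
    std::cout << "You entered: " << value << std::endl;
    return 0;
}
```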
The variable is now displayed. Execute the code in the terminal; you will see that you are first asked to enter a string, and then the same string is displayed on the next line.
The next example compares two strings. Take two strings and pass them as parameters to the comparison function.
This result is stored in a new variable.
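The article does not name the function, but the behaviour it describes (a result of 0 for equal strings) matches strcmp from the string library; a minimal sketch under that assumption, with illustrative contents:

```cpp
#include <cstring>
#include <iostream>

int main() {
    // Equal contents are assumed so that, as the article reports, the result is 0.
    char str1[] = "same";
    char str2[] = "same";

    int result = std::strcmp(str1, str2);     // 0 when the two strings are identical
    std::cout << "Result: " << result << std::endl;
    return 0;
}
```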
When we execute the code, the answer will be 0 because both the strings are equal.
This article has covered examples of most of the basic string features in the Linux environment. Not every operation requires a built-in function; as shown above, some can be performed manually. We hope this article proves helpful.
The control group in an experiment is the group that does not receive any treatment. It is used as a benchmark against which other test results are measured. This group includes individuals who are very similar in many ways to the individuals who are receiving the treatment, in terms of age, gender, race, or other factors. Explore control group examples to get a better handle on what a control group is.
Why Are Control Groups Used in Experiments?
A control group is used in an experiment as a point of comparison. By having a group that does not receive any sort of treatment, researchers are better able to isolate whether the experimental treatment did or did not affect the subjects who received it.
Participants in an experiment do not know if they are in the control group or the experimental group. Members of the control group often are given placebos. This allows researchers to more accurately identify the effectiveness of what is being studied.
Examples of Medication Testing Using Control Groups
Control groups are commonly used when pharmaceutical companies test new medications for physical health or psychological health. Subjects are screened to ensure they are appropriate candidates for the experiment. Those who are accepted to participate are randomly assigned to either an experimental group or the control group.
Blood Pressure Medicine
In an experiment in which blood pressure medication is tested, one group is given the blood pressure medication while the control group is given a placebo pill.
Anxiety Medication
In a test of anxiety treatment, one group attends individual therapy sessions and receives a new medication. The control group receives only an inert pill.
Drug Addiction Treatment
If a treatment for drug addiction is being tested, people who have similar addictions will be recruited to participate in the study. One group will be given a treatment while the control group is given the placebo.
Crohn's Disease Medication
Researchers who are testing the effectiveness of a drug intended to reduce symptoms of Crohn's disease will likely use a control group design. Many sufferers of Crohn's disease are recruited for the effort and the group that receives the placebo is the control group.
Weight Loss Drug
Doctors who are studying a weight loss drug will recruit volunteers that meet specific requirements for their study. For example, they may seek participants who are in their thirties and are at least 50 pounds overweight. The control group receives a pill pack that looks exactly the same as the packet that the others received, but the control group's pack is a placebo.
ADHD Medication
Psychiatrists are testing the effectiveness of a proposed ADHD drug. They recruit parents who are willing to allow their child to be a part of the clinical tests. All of the children are between the ages of 9 and 14 and have been diagnosed with ADHD. The psychiatrists involve the parents and the teachers in questionnaires throughout the process, but only one group receives the pill that is being tested while the other group of children is given a sugar pill.
PTSD Medication
Tests are being run to determine whether a newly developed medication can help to ease the effects of Post Traumatic Stress Disorder (PTSD). Researchers may seek to recruit volunteers within a certain age group or background who suffer from PTSD and divide them into an experimental group and a control group. Control group members receive what appears to be the same medication as those in the experimental group, but is only an inert pill.
Examples of Cosmetic Testing Using a Control Group
The procedure of testing the effectiveness and safety of cosmetic products and treatments also relies on experimental research methodology. As a result, control groups are typically included in this type of research.
Hair Loss Treatment
A new treatment for hair loss in men would be tested using an experimental group that receives the treatment and a control group that is administered a placebo. Men of the same age range with similar hair loss would be recruited to participate in the study. One group receives an application with the active ingredient while the control group receives an application that appears the same but does not have the active ingredient in it.
Anti-Wrinkle Facial Products
Researchers who test anti-wrinkle facial products would follow a similar approach. They would recruit volunteers all within a specified age range of 38-45 and distribute bottles to all of the subjects. While the bottles would all appear to hold the same product, those who were randomly assigned to the control group will receive bottles that do not have the compound that is intended to reduce wrinkles.
Long-Lasting Lip Color
A cosmetics company that has developed what they believe to be a long-lasting lip color product would need to test the product before releasing it to the market in order to be able to substantiate their claims.
Study participants would be divided into experimental and control groups, with those in the experimental group being assigned to use the new product for a set period of time and record data about how long it lasts. Those in the control group would be provided with lip color not made with the new formula, and would be asked to record the same data as the other group.
Examples of Control Groups in Business Research
Control groups aren’t always tied to testing new medicines or cosmetic products. Control groups can have business applications beyond being used to test the effectiveness and safety of products developed in a scientific laboratory. This approach is often used to make decisions about investing in training or adjusting business practices.
Testing Employee Training
If a company’s leaders are interested in discovering whether training will impact employee productivity, the organization might use a control group. For example, if a manager wants to know if a sales training program will lead to an increase in sales, the salespeople could be randomly assigned to an experimental group that attends that training and a control group that does not participate in the training.
The company would evaluate sales numbers and other performance related data at appropriate intervals after the experimental group completes training to determine if those who attended the program performed better than those who did not.
Testing Business Process Changes
For companies looking to streamline business processes, tweaking the way things are done and comparing the results against a control group can be beneficial. For example, if a business currently requires three rounds of interviews before making a job offer, but is considering cutting back to two rounds, an experiment could be helpful.
Open positions could be assigned to a control group that will continue as usual. Positions assigned to the experimental group would be filled using the proposed new procedure. If data reveals that taking away a round of interviewing doesn’t decrease quality of hire, the company will likely move forward with the change.
Control and Treatment Groups in Experimental Design
Control groups are critical to the scientific method. Experimental research design depends on the use of treatment and control groups to test a hypothesis. Without a control group, researchers could report results specific to study participants who received a treatment, but they would have no way of demonstrating that the treatment itself actually had any impact.
By including a control group to use as a point of comparison, researchers are better able to isolate the effects of the treatment. Being able to report on the difference (or lack of difference) between the control and experimental groups is very important to ensuring that conclusions drawn from the study are valid.
Learn More About Research Methodology
Now that you are more familiar with what the control group in an experiment is and have reviewed some examples of control groups, you’re ready to learn more about research methodology. Start by exploring the difference between parameters and populations in studies.
This page has a large collection of basic subtraction worksheets. Subtraction has been around for several years now.
On this page you will find algebra worksheets, mostly for middle school students, on algebra topics such as algebraic expressions, equations and graphing functions.
Basic algebra worksheets: addition and subtraction. Students are required to figure out which operation to apply given the problem context. The pre-algebra worksheets in this section contain a mix of the sorts of addition and subtraction problems from the previous two pages. Mixed addition and subtraction word problems.
Addition and subtraction equations (1 of 2), grades K-8 worksheets. Well, maybe more than a few, so it's probably a good thing. Mixed addition and subtraction problems.
Download and print a subtraction dice game, a bingo activity, math mystery pictures, and number line activities. This page includes subtraction worksheets on topics such as five-minute frenzies; one-, two-, three- and multi-digit subtraction; and subtracting across zeros. It is important when learning the basic math operations to develop the skill of looking at the operation itself on each problem.
These addition and subtraction worksheets are created with single-digit equations which will get students to learn their simple addition and subtraction facts inside and out. This worksheet is perfect for students working at the first grade level or for older kids that would benefit from some extra skills practice. Plunge into practice with our addition and subtraction worksheets, featuring oodles of exercises for performing the two basic arithmetic operations of addition and subtraction. Looking for high-quality math worksheets aligned to Common Core standards for grades K-8?
This page starts off with some missing numbers worksheets for younger students. Presenting a mixed review of addition and subtraction of single-digit, 2-digit, 3-digit, 4-digit and 5-digit numbers, each PDF practice set is designed to suit. Our premium worksheet bundles contain 10 activities and an answer key to challenge your students and help them understand each and every topic within their grade level.
Great for budding 3rd grade mathematicians. This set of worksheets includes a mix of addition and subtraction word problems.
Students of grade 6, grade 7 and grade 8 are required to perform the addition and subtraction operations to solve the equations in just one step. Often, when we focus on only a single type of math fact at a time, progressing our way through addition, subtraction, multiplication and division facts, we can find that students become modal when looking at a worksheet. One-step equation worksheets have exclusive pages for solving equations involving fractions, integers and decimals.
This page has addition, subtraction, number bonds, fact families and number triangles.
Feb 15, 2009
The first evidence of dark matter appeared in 1933 through the investigations of astronomer Fritz Zwicky. Zwicky was studying galaxies in the Coma Cluster and made a startling discovery. The nature of dark matter remains unexplained; however, scientists maintain that the majority of the mass contained in the universe is indeed dark matter, even though it is undetectable by present instruments. This mystifying material neither emits nor reflects light or any other detectable electromagnetic radiation, but it can be recognised by its influence on gravity.
A related discovery came in 1998 from two teams of astronomers, including Department of Energy and NASA-backed scientists working at the Lawrence Berkeley National Laboratory. These scientists were observing light emitted by exploding stars classed as Type Ia supernovae, otherwise known as ‘standard candles’ because of their consistent brightness. The astronomers found that distant supernovae were considerably further away than their forecasts anticipated, and concluded that a repulsive component, now called dark energy and analogous to Albert Einstein’s cosmological constant, appears to be accelerating the expansion of the Universe rather than allowing it to contract as was initially thought.
It appears that dark energy drives this acceleration by pushing space itself apart, changing the distances to the supernovae as it does so. Dark matter, by contrast, is a substance astronomers have not directly seen; they infer that it exists because they have detected its gravitational effects on visible material.
The leading interpretation to date is that this repulsive component, dark energy, is some form of cosmological constant that arises because empty space is not actually empty but carries an energy, with fundamental particles popping in and out of existence. Until recently, the supernova data were the only direct evidence of the cosmic acceleration, and the only persuasive reason to embrace the dark energy idea.
Albert Einstein theorised that a repulsive force in space could balance the pull of gravity and keep the cosmos static. Einstein’s original value for the cosmological constant did not work out mathematically, but he is credited with the underlying idea that some anti-gravitational influence must permeate empty space to keep the cosmos from collapsing under its own gravity. In formulating his theory of relativity, he therefore incorporated a cosmological constant, a numerical term that is now invoked to account for the accelerating expansion of the universe.
The favoured interpretation is that some form of energy is driving the acceleration of the cosmos’s expansion, and the same enigmatic influence also suppresses the growth of the structures within it. Researchers found that, rather than slowing down under the pull of gravity, the expansion of the cosmos was in fact speeding up, with galaxies moving further and further apart at an ever-increasing rate.
If that continues, then roughly a hundred billion years from now all other galaxies would eventually vanish from the Milky Way’s view and, ultimately, the local superclusters of galaxies would also break apart. Although the nature of dark matter remains unclear, results published in February 2006, based on studies of twelve neighbouring dwarf galaxies, have for the first time put numbers on some of its material properties.
A fascinating prediction of Big Bang cosmology, and of models of the large-scale structure of the universe, is that “dark galaxies”, galaxies made up almost entirely of dark matter, could be extremely common.
Space debris (also known as space junk, space waste, space trash, or space garbage) is a term for defunct human-made objects in space—principally in Earth orbit—which no longer serve a useful function. This can include nonfunctional spacecraft, abandoned launch vehicle stages, mission-related debris and fragmentation debris. Examples of space debris include derelict satellites and spent rocket stages as well as the fragments from their disintegration, erosion and collisions, such as paint flecks, solidified liquids from spacecraft breakups, unburned particles from solid rocket motors, etc. Space debris represents a risk to spacecraft. When the smallest objects of human-made space debris are considered along with micrometeoroids, they are together sometimes referred to by space agencies as Micrometeoroid and Orbital Debris (MMOD).
Space debris is typically a negative externality—it creates an external cost on others from the initial action to launch or use a spacecraft in near-Earth orbit—a cost that is typically not taken into account nor fully accounted for in the cost by the launcher or payload owner. Several spacecraft, both manned and unmanned, have been damaged or destroyed by space debris. For this reason, in 1979 NASA founded the Orbital Debris Program to adopt mitigation measures for space debris in Earth orbit.
As of July 2016, the United States Strategic Command reported nearly 18,000 artificial objects in orbit above the Earth, including 1,419 operational satellites. However, these are just the objects large enough to be tracked. As of January 2019, more than 128 million bits of debris smaller than 1 cm (0.4 in), about 900,000 pieces of debris 1–10 cm, and around 34,000 of pieces larger than 10 cm were estimated to be in orbit around the Earth. Collisions with debris have become a hazard to spacecraft; the smallest objects cause damage akin to sandblasting, especially to solar panels and optics like telescopes or star trackers that cannot easily be protected by a ballistic shield.
Below 2,000 km (1,200 mi) Earth-altitude, pieces of debris are denser than meteoroids; most are dust from solid rocket motors, surface erosion debris like paint flakes, and frozen coolant from RORSAT (nuclear-powered satellites). For comparison, the International Space Station orbits in the 300–400 kilometres (190–250 mi) range, and the 2009 satellite collision and 2007 antisat test occurred at 800 to 900 kilometres (500 to 560 mi) altitude. The ISS has Whipple shielding to resist damage from small MMOD; however, known debris with a collision chance over 1/10,000 are avoided by maneuvering the station.
The Kessler syndrome, a theoretical runaway chain reaction of collisions that could occur, exponentially increasing the number and density of space debris in near-Earth orbit, has been hypothesized to ensue beyond some critical density. This could affect useful polar-orbiting bands, increase the cost of protection for spacecraft missions and destroy live satellites. Whether Kessler syndrome is already underway has been debated. The measurement, mitigation, and potential removal of debris are conducted by some participants in the space industry.
There are estimated to be over 128 million pieces of debris smaller than 1 cm (0.39 in) as of January 2019. There are approximately 900,000 pieces from one to ten cm. The current count of large debris (defined as 10 cm across or larger) is 34,000. The technical measurement cutoff is c. 3 mm (0.12 in). Over 98 percent of the 1,900 tons of debris in low Earth orbit as of 2002 was accounted for by about 1,500 objects, each over 100 kg (220 lb). Total mass is mostly constant despite addition of many smaller objects, since they reenter the atmosphere sooner. There were "9,000 pieces of orbiting junk" identified in 2008, with an estimated mass of 5,500 t (12,100,000 lb).
Low Earth orbit
In the orbits nearest to Earth—less than 2,000 km (1,200 mi) orbital altitude, referred to as low-Earth orbit (LEO)— there have traditionally been few "universal orbits" which keep a number of spacecraft in particular rings (in contrast to GEO, a single orbit that is widely used by over 500 satellites). This is beginning to change in 2019, and several companies have begun to deploy the early phases of satellite internet constellations, which will have many universal orbits in LEO with 30 to 50 satellites per orbital plane and altitude. Traditionally, the most populated LEO orbits have been a number of sun-synchronous satellites that keep a constant angle between the Sun and the orbital plane, making Earth observation easier with consistent sun angle and lighting. Sun-synchronous orbits are polar, meaning they cross over the polar regions. LEO satellites orbit in many planes, typically up to 15 times a day, causing frequent approaches between objects. The density of satellites—both active and derelict—is much higher in LEO.
Orbits are affected by gravitational perturbations (which in LEO include unevenness of the Earth's gravitational field due to variations in the density of the planet), and collisions can occur from any direction. Impacts between orbiting satellites can occur at up to 16 km/s for a theoretical head-on impact; the closing speed could be twice the orbital speed. The 2009 satellite collision occurred at a closing speed of 11.7 km/s, creating over 2000 large debris fragments. These debris cross many other orbits and increase debris collision risk. It is theorized that a sufficiently large collision of spacecraft could potentially lead to a cascade effect, or even make low Earth orbit effectively unusable by orbiting satellites, a phenomenon known as the Kessler effect.
Crewed space missions are mostly at 400 km (250 mi) altitude and below, where air drag helps clear zones of fragments. The upper atmosphere is not a fixed density at any particular orbital altitude; it varies as a result of atmospheric tides and expands or contracts over longer time periods as a result of space weather. These longer-term effects can increase drag at lower altitudes; the 1990s expansion was a factor in reduced debris density. Another factor was fewer launches by Russia; the Soviet Union made most of their launches in the 1970s and 1980s.
At higher altitudes, where air drag is less significant, orbital decay takes longer. Slight atmospheric drag, lunar perturbations, Earth's gravity perturbations, solar wind and solar radiation pressure can gradually bring debris down to lower altitudes (where it decays), but at very high altitudes this may take millennia. Although high-altitude orbits are less commonly used than LEO and the onset of the problem is slower, the numbers progress toward the critical threshold more quickly.
Many communications satellites are in geostationary orbits (GEO), clustering over specific targets and sharing the same orbital path. Although velocities are low between GEO objects, when a satellite becomes derelict (such as Telstar 401) it assumes a geosynchronous orbit; its orbital inclination increases about .8° and its speed increases about 100 miles per hour (160 km/h) per year. Impact velocity peaks at about 1.5 km/s (0.93 mi/s). Orbital perturbations cause longitude drift of the inoperable spacecraft and precession of the orbital plane. Close approaches (within 50 meters) are estimated at one per year. The collision debris pose less short-term risk than from an LEO collision, but the satellite would likely become inoperable. Large objects, such as solar-power satellites, are especially vulnerable to collisions.
Although the ITU now requires proof a satellite can be moved out of its orbital slot at the end of its lifespan, studies suggest this is insufficient. Since GEO orbit is too distant to accurately measure objects under 1 m (3 ft 3 in), the nature of the problem is not well known. Satellites could be moved to empty spots in GEO, requiring less maneuvering and making it easier to predict future motion. Satellites or boosters in other orbits, especially stranded in geostationary transfer orbit, are an additional concern due to their typically high crossing velocity.
Despite efforts to reduce risk, spacecraft collisions have occurred. The European Space Agency telecom satellite Olympus-1 was struck by a meteoroid on 11 August 1993 and eventually moved to a graveyard orbit. On 29 March 2006, the Russian Express-AM11 communications satellite was struck by an unknown object and rendered inoperable; its engineers had enough contact time with the satellite to send it into a graveyard orbit.
In 1958, the United States launched Vanguard I into a medium Earth orbit (MEO). As of October 2009, it, and the upper stage of its launch rocket, are the oldest surviving human-made space objects still in orbit. In a catalog of known launches until July 2009, the Union of Concerned Scientists listed 902 operational satellites from a known population of 19,000 large objects and about 30,000 objects launched.
An example of additional derelict satellite debris is the remains of the 1970s/80s Soviet RORSAT naval surveillance satellite program. The satellites' BES-5 nuclear reactors were cooled with a coolant loop of sodium-potassium alloy, creating a potential problem when a satellite reached end of life. While many satellites were nominally boosted into medium-altitude graveyard orbits, not all were. Even satellites which had been properly moved to a higher orbit had an eight-percent probability of puncture and coolant release over a 50-year period. The coolant freezes into droplets of solid sodium-potassium alloy, forming additional debris.
These events continue to occur. For example, in February 2015, the USAF Defense Meteorological Satellite Program Flight 13 (DMSP-F13) exploded on orbit, creating at least 149 debris objects, which were expected to remain in orbit for decades.
Orbiting satellites have been deliberately destroyed. The United States and the USSR/Russia have conducted over 30 and 27 ASAT tests, respectively, followed by 10 from China and one from India. The most recent ASATs were the Chinese interception of FY-1C, trials of the Russian PL-19 Nudol, the American interception of USA-193 and the Indian interception of an unstated live satellite.
According to Edward Tufte's book Envisioning Information, space debris includes a glove lost by astronaut Ed White on the first American space-walk (EVA); a camera lost by Michael Collins near Gemini 10; a thermal blanket lost during STS-88; garbage bags jettisoned by Soviet cosmonauts during Mir's 15-year life, a wrench and a toothbrush. Sunita Williams of STS-116 lost a camera during an EVA. During an STS-120 EVA to reinforce a torn solar panel, a pair of pliers was lost, and in an STS-126 EVA, Heidemarie Stefanyshyn-Piper lost a briefcase-sized tool bag.
In characterizing the problem of space debris, it was learned that much debris was due to rocket upper stages (e.g. the Inertial Upper Stage) which end up in orbit, and break up due to decomposition of unvented unburned fuel. However, a major known impact event involved an (intact) Ariane booster. Although NASA and the United States Air Force now require upper-stage passivation, other launchers do not. Lower stages, like the Space Shuttle's solid rocket boosters or Apollo program's Saturn IB launch vehicles, do not reach orbit.
On 11 March 2000 a Chinese Long March 4 CBERS-1 upper stage exploded in orbit, creating a debris cloud. A Russian Briz-M booster stage exploded in orbit over South Australia on 19 February 2007. Launched on 28 February 2006 carrying an Arabsat-4A communications satellite, it malfunctioned before it could use up its propellant. Although the explosion was captured on film by astronomers, due to the orbit path the debris cloud has been difficult to measure with radar. By 21 February 2007, over 1,000 fragments were identified. A 14 February 2007 breakup was recorded by Celestrak. Eight breakups occurred in 2006, the most since 1993. Another Briz-M broke up on 16 October 2012 after a failed 6 August Proton-M launch. The amount and size of the debris was unknown. A Long March 7 rocket booster created a fireball visible from portions of Utah, Nevada, Colorado, Idaho and California on the evening of 27 July 2016; its disintegration was widely reported on social media. In 2018-2019, three different Atlas V Centaur second stages have broken up.
A past debris source was the testing of anti-satellite weapons (ASATs) by the U.S. and Soviet Union during the 1960s and 1970s. North American Aerospace Defense Command (NORAD) files only contained data for Soviet tests, and debris from U.S. tests were only identified later. By the time the debris problem was understood, widespread ASAT testing had ended; the U.S. Program 437 was shut down in 1975.
The U.S. restarted their ASAT programs in the 1980s with the Vought ASM-135 ASAT. A 1985 test destroyed a 1-tonne (2,200 lb) satellite orbiting at 525 km (326 mi), creating thousands of debris larger than 1 cm (0.39 in). Due to the altitude, atmospheric drag decayed the orbit of most debris within a decade. A de facto moratorium followed the test.
China's government was condemned for the military implications and the amount of debris from the 2007 anti-satellite missile test, the largest single space debris incident in history (creating over 2,300 pieces golf-ball size or larger, over 35,000 1 cm (0.4 in) or larger, and one million pieces 1 mm (0.04 in) or larger). The target satellite orbited between 850 km (530 mi) and 882 km (548 mi), the portion of near-Earth space most densely populated with satellites. Since atmospheric drag is low at that altitude the debris is slow to return to Earth, and in June 2007 NASA's Terra environmental spacecraft maneuvered to avoid impact from the debris.
On 20 February 2008, the U.S. launched an SM-3 missile from the USS Lake Erie to destroy a defective U.S. spy satellite thought to be carrying 450 kg (1,000 lb) of toxic hydrazine propellant. The event occurred at about 250 km (155 mi), and the resulting debris has a perigee of 250 km (155 mi) or lower. The missile was aimed to minimize the amount of debris, which (according to Pentagon Strategic Command chief Kevin Chilton) had decayed by early 2009.
On 27 March 2019, Indian Prime Minister Narendra Modi announced that India shot down one of its own LEO satellites with a ground-based missile. He stated that the operation, part of Mission Shakti, would defend the country's interests in space. Afterwards, US Air Force Space Command announced they were tracking 270 new pieces of debris but expected the number to grow as data collection continues.
The vulnerability of satellites to debris, and the possibility of deliberately attacking LEO satellites to create debris clouds, has triggered speculation that such an attack would be feasible even for countries unable to mount a precision strike. An attack on a satellite of 10 tonnes or more would heavily damage the LEO environment.
Space junk can be a hazard to active satellites and spacecraft. It has been theorized that Earth orbit could even become impassable if the risk of collision grows too high.
However, since the risk to spacecraft increases with the time of exposure to high debris densities, it is more accurate to say that LEO would be rendered unusable by orbiting craft. The threat to craft passing through LEO to reach higher orbit would be much lower owing to the very short time span of the crossing.
Although spacecraft are typically protected by Whipple shields, solar panels, which are exposed to the Sun, wear from low-mass impacts. Even small impacts can produce a cloud of plasma which is an electrical risk to the panels.
Satellites are believed to have been destroyed by micrometeorites and (small) orbital debris (MMOD). The earliest suspected loss was of Kosmos 1275, which disappeared on 24 July 1981 (a month after launch). Kosmos 1275 contained no volatile propellant; therefore, there appeared to be nothing internal to the satellite which could have caused the destructive explosion that took place. However, the case has not been proven, and another hypothesis put forward is that the battery exploded. Tracking showed it broke up into 300 new objects.
Many impacts have been confirmed since. For example, on 24 July 1996, the French microsatellite Cerise was hit by fragments of an Ariane-1 H-10 upper-stage booster which exploded in November 1986. On 29 March 2006, the Russian Ekspress AM11 communications satellite was struck by an unknown object and rendered inoperable. On October 13, 2009, Terra suffered a single battery cell failure anomaly and a battery heater control anomaly which were subsequently considered likely the result of an MMOD strike. On March 12, 2010, Aura lost power from one-half of one of its 11 solar panels and this was also attributed to an MMOD strike. On May 22, 2013 GOES-13 was hit by an MMOD which caused it to lose track of the stars that it used to maintain an operational attitude. It took nearly a month for the spacecraft to return to operation.
The first major satellite collision occurred on 10 February 2009. The 950 kg (2,090 lb) derelict satellite Kosmos 2251 and the operational 560 kg (1,230 lb) Iridium 33 collided, 500 mi (800 km) over northern Siberia. The relative speed of impact was about 11.7 km/s (7.3 mi/s), or about 42,120 km/h (26,170 mph). Both satellites were destroyed, creating thousands of pieces of new smaller debris, with legal and political liability issues unresolved even years later. On 22 January 2013 BLITS (a Russian laser-ranging satellite) was struck by debris suspected to be from the 2007 Chinese anti-satellite missile test, changing both its orbit and rotation rate.
Satellites sometimes perform Collision Avoidance Maneuvers and satellite operators may monitor space debris as part of maneuver planning. For example, in January 2017, the European Space Agency made the decision to alter orbit of one of its three Swarm mission spacecraft, based on data from the US Joint Space Operations Center, to lower the risk of collision from Cosmos-375, a derelict Russian satellite.
Crewed flights are naturally particularly sensitive to the hazards that could be presented by space debris conjunctions in the orbital path of the spacecraft. Examples of occasional avoidance maneuvers, or longer-term space debris wear, have occurred in Space Shuttle missions, the MIR space station, and the International Space Station.
Space Shuttle missions
From the early Space Shuttle missions, NASA used NORAD space monitoring capabilities to assess the Shuttle's orbital path for debris. In the 1980s, this used a large proportion of NORAD capacity. The first collision-avoidance maneuver occurred during STS-48 in September 1991, a seven-second thruster burn to avoid debris from the derelict satellite Kosmos 955. Similar maneuvers were initiated on missions 53, 72 and 82.
One of the earliest events to publicize the debris problem occurred on Space Shuttle Challenger's second flight, STS-7. A fleck of paint struck its front window, creating a pit over 1 mm (0.04 in) wide. On STS-59 in 1994, Endeavour's front window was pitted about half its depth. Minor debris impacts increased from 1998.
Window chipping and minor damage to thermal protection system tiles (TPS) were already common by the 1990s. The Shuttle was later flown tail-first to take a greater proportion of the debris load on the engines and rear cargo bay, which are not used in orbit or during descent, and thus are less critical for post-launch operation. When flying attached to the ISS, the two connected spacecraft were flipped around so the better-armored station shielded the orbiter.
A NASA 2009 study concluded that debris accounted for approximately half of the overall risk to the Shuttle. Executive-level decision to proceed was required if catastrophic impact was likelier than 1 in 200. On a normal (low-orbit) mission to the ISS the risk was approximately 1 in 300, but the Hubble telescope repair mission was flown at the higher orbital altitude of 560 km (350 mi) where the risk was initially calculated at a 1-in-185 (due in part to the 2009 satellite collision). A re-analysis with better debris numbers reduced the estimated risk to 1 in 221, and the mission went ahead.
Debris incidents continued on later Shuttle missions. During STS-115 in 2006 a fragment of circuit board bored a small hole through the radiator panels in Atlantis's cargo bay. On STS-118 in 2007 debris blew a bullet-like hole through Endeavour's radiator panel.
Impact wear was notable on Mir, the Soviet space station, since it remained in space for long periods with its original solar module panels.
International Space Station
The ISS also uses Whipple shielding to protect its interior from minor debris. However, exterior portions (notably its solar panels) cannot be protected easily. In 1989, the ISS panels were predicted to degrade approximately 0.23% in four years due to the "sandblasting" effect of impacts with small orbital debris. An avoidance maneuver is typically performed for the ISS if "there is a greater than one-in-10,000 chance of a debris strike". As of January 2014, there have been sixteen maneuvers in the fifteen years the ISS had been in orbit.
As another method to reduce the risk to humans on board, ISS operational management asked the crew to shelter in the Soyuz on three occasions due to late debris-proximity warnings. In addition to the sixteen thruster firings and three Soyuz-capsule shelter orders, one attempted maneuver was not completed due to not having the several days' warning necessary to upload the maneuver timeline to the station's computer. A March 2009 event involved debris believed to be a 10 cm (3.9 in) piece of the Kosmos 1275 satellite. In 2013, the ISS operations management did not make a maneuver to avoid any debris, after making a record four debris maneuvers the previous year.
The Kessler syndrome, proposed by the NASA scientist Donald J. Kessler in 1978, is a scenario in which the density of objects in low Earth orbit (LEO) is high enough that collisions between objects could cause a cascade where each collision generates space debris that increases the likelihood of further collisions. One implication is that the distribution of debris in orbit could render space activities and the use of satellites in specific orbital ranges impractical for many generations.
Although most manned space activity takes place at altitudes below 800 to 1,500 km (500 to 930 mi), a Kessler syndrome cascade in that region would rain down into lower altitudes and the decay time scale is such that "the resulting [low Earth orbit] debris environment is likely to be too hostile for future space use".
In a Kessler syndrome, satellite lifetimes would be measured in years or months. New satellites could be launched through the debris field into higher orbits or placed in lower orbits (where decay removes the debris), but the utility of the region between 800 and 1,500 km (500 and 930 mi) is the reason for its amount of debris.
Although most debris burns up in the atmosphere, larger debris objects can reach the ground intact. According to NASA, an average of one cataloged piece of debris has fallen back to Earth each day for the past 50 years. Despite their size, there has been no significant property damage from the debris.
Notable examples of space junk falling to Earth and impacting human life include:
- 1969: five sailors on a Japanese ship were injured by space debris.
- 1997: an Oklahoma woman, Lottie Williams, was injured when she was hit in the shoulder by a 10 cm × 13 cm (3.9 in × 5.1 in) piece of blackened, woven metallic material confirmed as part of the propellant tank of a Delta II rocket which launched a U.S. Air Force satellite the year before.
- 2001: a Star 48 Payload Assist Module (PAM-D) rocket upper stage re-entered the atmosphere after a "catastrophic orbital decay", crashing in the Saudi Arabian desert. It was identified as the upper-stage rocket for NAVSTAR 32, a GPS satellite launched in 1993.
- 2003: Columbia disaster, large parts of the spacecraft reached the ground and entire equipment systems remained intact. More than 83,000 pieces, along with the remains of the six astronauts, were recovered in an area from three to 10 miles around Hemphill in Sabine County, Texas. More pieces were found in a line from west Texas to east Louisiana, with the westernmost piece found in Littlefield, TX and the easternmost found southwest of Mora, Louisiana. Debris was found in Texas, Arkansas and Louisiana. In a rare case of property damage, a foot-long metal bracket smashed through the roof of a dentist office. NASA warned the public to avoid contact with the debris because of the possible presence of hazardous chemicals. 15 years after the failure, people were still sending in pieces with the most recent, as of February 2018, found in the spring of 2017.
- 2007: airborne debris from a Russian spy satellite was seen by the pilot of a LAN Airlines Airbus A340 carrying 270 passengers whilst flying over the Pacific Ocean between Santiago and Auckland. The debris was reported within 9.3 kilometres (5 nmi) of the aircraft.
Tracking and measurement
Tracking from the ground
Radar and optical detectors such as lidar are the main tools for tracking space debris. Although objects under 10 cm (4 in) have reduced orbital stability, debris as small as 1 cm can be tracked; however, determining orbits precisely enough to allow re-acquisition is difficult. Most debris remains unobserved. The NASA Orbital Debris Observatory tracked space debris with a 3 m (10 ft) liquid mirror transit telescope. FM radio waves can detect debris after reflecting off it onto a receiver. Optical tracking may be a useful early-warning system on spacecraft.
The U.S. Strategic Command keeps a catalog of known orbital objects, using ground-based radar and telescopes, and a space-based telescope (originally to distinguish from hostile missiles). The 2009 edition listed about 19,000 objects. Other data come from the ESA Space Debris Telescope, TIRA, the Goldstone, Haystack, and EISCAT radars and the Cobra Dane phased array radar, to be used in debris-environment models like the ESA Meteoroid and Space Debris Terrestrial Environment Reference (MASTER).
Measurement in space
Returned space hardware is a valuable source of information on the directional distribution and composition of the (sub-millimetre) debris flux. The LDEF satellite deployed by mission STS-41-C Challenger and retrieved by STS-32 Columbia spent 68 months in orbit to gather debris data. The EURECA satellite, deployed by STS-46 Atlantis in 1992 and retrieved by STS-57 Endeavour in 1993, was also used for debris study.
The solar arrays of Hubble were returned by missions STS-61 Endeavour and STS-109 Columbia, and the impact craters studied by the ESA to validate its models. Materials returned from Mir were also studied, notably the Mir Environmental Effects Payload (which also tested materials intended for the ISS).
A debris cloud resulting from a single event is studied with scatter plots known as Gabbard diagrams, in which the perigee and apogee of each fragment are plotted against its orbital period. Where the data are available, Gabbard diagrams of the early debris cloud, prior to the effects of perturbations, are reconstructed; they often include data on newly observed, as yet uncatalogued fragments. Gabbard diagrams can provide important insights into the features of the fragmentation, and the direction and point of impact.
Dealing with debris
An average of about one tracked object per day has been dropping out of orbit for the past 50 years, averaging almost three objects per day at solar maximum (due to the heating and expansion of the Earth's atmosphere), but one about every three days at solar minimum, usually five and a half years later. In addition to natural atmospheric effects, corporations, academics and government agencies have proposed plans and technology to deal with space debris, but as of November 2014, most of these are theoretical, and there is no extant business plan for debris reduction.
A number of scholars have also observed that institutional factors—political, legal, economic and cultural "rules of the game"—are the greatest impediment to the cleanup of near-Earth space. There is no commercial incentive, since costs aren't assigned to polluters, but a number of suggestions have been made. However, effects to date have been limited. In the US, governmental bodies have been accused of backsliding on previous commitments to limit debris growth, "let alone tackling the more complex issues of removing orbital debris." The different methods for removing space debris have been evaluated by the Space Generation Advisory Council, whose members include the French astrophysicist Fatoumata Kébé.
As of the 2010s, a variety of technical approaches are used to mitigate the growth of space debris. These include passivation of the spacecraft at the end of its useful life; upper stages that can reignite to decelerate themselves and intentionally deorbit, often on the first or second orbit following payload release; and satellites that, if they remain healthy for years, deorbit themselves from the lower orbits around Earth. Other satellites (such as many CubeSats) in low orbits below approximately 400 km orbital altitude depend on the energy-absorbing effects of the upper atmosphere to deorbit reliably within weeks or months.
Increasingly, spent upper stages in higher orbits (orbits for which a low-delta-v deorbit is not possible or not planned) and satellites whose architectures support it are passivated at end of life. Passivation removes any internal energy contained in the vehicle at the end of its mission or useful life. While this does not remove the debris of the now derelict rocket stage or satellite itself, it does substantially reduce the likelihood of the spacecraft destructing and creating many smaller pieces of space debris, a phenomenon that was common in many of the early generations of US and Soviet spacecraft.
Upper stage passivation (e.g. of Delta boosters) by releasing residual propellants reduces debris from orbital explosions; however even as late as 2011, not all upper stages implement this practice. SpaceX used the term "propulsive passivation" for the final maneuver of their six-hour demonstration mission (STP-2) of the Falcon 9 second stage for the US Air Force in 2019, but did not define what all that term encompassed.
When originally proposed in 2015, the OneWeb constellation, initially planned to have ~700 satellites anticipated on orbit after 2018, would only state that they would re-enter the atmosphere within 25 years of retirement, which would comply with the Orbital Debris Mitigation Standard Practices (ODMSP) issued by the US government in 2001. By October 2017, both OneWeb—and also SpaceX, with their large Starlink constellation—had filed documents with the US FCC with more aggressive space debris mitigation plans. Both companies committed to a deorbit plan for post-mission satellites which will explicitly move the satellites into orbits where they will reenter the Earth's atmosphere within approximately one year following end-of-life.
With a "one-up, one-down" launch-license policy for Earth orbits, launchers would rendezvous with, capture and de-orbit a derelict satellite from approximately the same orbital plane. Another possibility is the robotic refueling of satellites. Experiments have been flown by NASA, and SpaceX is developing large-scale on-orbit propellant transfer technology.
Another approach to debris mitigation is to explicitly design the mission architecture to always leave the rocket second-stage in an elliptical geocentric orbit with a low-perigee, thus ensuring rapid orbital decay and avoiding long-term orbital debris from spent rocket bodies. Such missions require the use of a small kick stage to circularize the orbit, but the kick stage itself may be designed with the excess-propellant capability to be able to self-deorbit.
Although the ITU requires geostationary satellites to move to a graveyard orbit at the end of their lives, the selected orbital areas do not sufficiently protect GEO lanes from debris. Rocket stages (or satellites) with enough propellant may make a direct, controlled de-orbit, or if this would require too much propellant, a satellite may be brought to an orbit where atmospheric drag would cause it to eventually de-orbit. This was done with the French Spot-1 satellite, reducing its atmospheric re-entry time from a projected 200 years to about 15 by lowering its altitude from 830 km (516 mi) to about 550 km (342 mi).
Passive methods of increasing the orbital decay rate of spacecraft debris have been proposed. Instead of rockets, an electrodynamic tether could be attached to a spacecraft at launch; at the end of its lifetime, the tether would be rolled out to slow the spacecraft. Other proposals include a booster stage with a sail-like attachment and a large, thin, inflatable balloon envelope.
A consensus of speakers at a meeting in Brussels on 30 October 2012 organized by the Secure World Foundation (a U.S. think tank) and the French International Relations Institute reported that removal of the largest debris would be required to prevent the risk to spacecraft becoming unacceptable in the foreseeable future (without any addition to the inventory of dead spacecraft in LEO). Removal costs and legal questions about ownership and the authority to remove defunct satellites have stymied national or international action. Current space law retains ownership of all satellites with their original operators, even debris or spacecraft which are defunct or threaten active missions.
As of 2006 the cost of any of these solutions is about the same as launching a spacecraft and, according to NASA's Nicholas Johnson, not cost-effective. Since then Space Sweeper with Sling-Sat (4S), a grappling satellite which captures and ejects debris, has been studied.
Remotely controlled vehicles
A well-studied solution uses a remotely controlled vehicle to rendezvous with, capture and return debris to a central station. One such system is Space Infrastructure Servicing, a commercially developed refueling depot and service spacecraft for communications satellites in geosynchronous orbit originally scheduled for a 2015 launch. The SIS would be able to "push dead satellites into graveyard orbits." The Advanced Common Evolved Stage family of upper stages is being designed with a high leftover-propellant margin (for derelict capture and de-orbit) and in-space refueling capability for the high delta-v required to de-orbit heavy objects from geosynchronous orbit. A tug-like satellite to drag debris to a safe altitude for it to burn up in the atmosphere has been researched. When debris is identified, the satellite creates a difference in potential between the debris and itself, then uses its thrusters to move itself and the debris to a safer orbit.
A variation of this approach is for the remotely controlled vehicle to rendezvous with debris, capture it temporarily to attach a smaller de-orbit satellite and drag the debris with a tether to the desired location. The "mothership" would then tow the debris-smallsat combination for atmospheric entry or move it to a graveyard orbit. One such system is the proposed Busek ORbital DEbris Remover (ORDER), which would carry over 40 SUL (satellite on umbilical line) de-orbit satellites and propellant sufficient for their removal.
On 7 January 2010 Star, Inc. reported that it received a contract from the Space and Naval Warfare Systems Command for a feasibility study of the ElectroDynamic Debris Eliminator (EDDE) propellantless spacecraft for space-debris removal. In February 2012 the Swiss Space Center at École Polytechnique Fédérale de Lausanne announced the Clean Space One project, a nanosatellite demonstration project for matching orbit with a defunct Swiss nanosatellite, capturing it and de-orbiting together. The mission design has gone through several evolutions, arriving at a Pac-Man-inspired capture model.
In December 2019, the European Space Agency awarded the first contract to clean up space debris. The €120 million mission dubbed ClearSpace-1 is slated to launch in 2025. It aims to remove a 100 kg Vega Secondary Payload Adapter (Vespa) left by Vega flight VV02 in an 800 km orbit in 2013. A "chaser" will grab the junk with four robotic arms and drag it down to Earth's atmosphere where both will burn up.
The laser broom uses a ground-based laser to ablate the front of the debris, producing a rocket-like thrust which slows the object. With continued application, the debris would fall enough to be influenced by atmospheric drag. During the late 1990s, the U.S. Air Force's Project Orion was a laser-broom design. Although a test-bed device was scheduled to launch on a Space Shuttle in 2003, international agreements banning powerful laser testing in orbit limited its use to measurements. The Space Shuttle Columbia disaster postponed the project and according to Nicholas Johnson, chief scientist and program manager for NASA's Orbital Debris Program Office, "There are lots of little gotchas in the Orion final report. There's a reason why it's been sitting on the shelf for more than a decade."
The momentum of the laser-beam photons could directly impart a thrust on the debris sufficient to move small debris into new orbits out of the way of working satellites. NASA research in 2011 indicates that firing a laser beam at a piece of space junk could impart an impulse of 1 mm (0.039 in) per second, and keeping the laser on the debris for a few hours per day could alter its course by 200 m (660 ft) per day. One drawback is the potential for material degradation; the energy may break up the debris, adding to the problem. A similar proposal places the laser on a satellite in Sun-synchronous orbit, using a pulsed beam to push satellites into lower orbits to accelerate their reentry. A proposal to replace the laser with an Ion Beam Shepherd has been made, and other proposals use a foamy ball of aerogel or a spray of water, inflatable balloons, electrodynamic tethers, boom electroadhesion, and dedicated anti-satellite weapons.
On 28 February 2014, the Japan Aerospace Exploration Agency (JAXA) launched a test "space net" satellite. The launch was an operational test only. In December 2016 the agency sent a space junk collector to the ISS via Kounotori 6, with which JAXA scientists experimented with pulling junk out of orbit using a tether. The system failed to extend a 700-meter tether from the space station resupply vehicle as it was returning to Earth. On 6 February the mission was declared a failure, and lead researcher Koichi Inoue told reporters that they "believe the tether did not get released".
Since 2012, the European Space Agency has been working on the design of a mission to remove large space debris from orbit. The mission, e.Deorbit, is scheduled for launch during 2023 with an objective to remove debris heavier than 4,000 kilograms (8,800 lb) from LEO. Several capture techniques are being studied, including a net, a harpoon and a combination robot arm and clamping mechanism.
The RemoveDEBRIS mission plan is to test the efficacy of several ADR technologies on mock targets in low Earth orbit. In order to complete its planned experiments the platform is equipped with a net, a harpoon, a laser ranging instrument, a dragsail, and two CubeSats (miniature research satellites). The mission was launched on 2 April 2018.
National and international regulation
There is no international treaty minimizing space debris. However, the United Nations Committee on the Peaceful Uses of Outer Space (COPUOS) published voluntary guidelines in 2007, using a variety of earlier national regulatory attempts at developing standards for debris mitigation. As of 2008, the committee was discussing international "rules of the road" to prevent collisions between satellites. By 2013, a number of national legal regimes existed, typically instantiated in the launch licenses that are required for a launch in all spacefaring nations.
The U.S. issued a set of standard practices for civilian (NASA) and military (DoD and USAF) orbital-debris mitigation in 2001. The standard envisioned disposal from final mission orbits in one of three ways: 1) atmospheric reentry where even with "conservative projections for solar activity, atmospheric drag will limit the lifetime to no longer than 25 years after completion of mission;" 2) maneuver to a "storage orbit": move the spacecraft to one of four very broad parking-orbit ranges (2,000–19,700 km (1,200–12,200 mi), 20,700–35,300 km (12,900–21,900 mi), above 36,100 km (22,400 mi), or out of Earth orbit completely into any heliocentric orbit); 3) "Direct retrieval: Retrieve the structure and remove it from orbit as soon as practicable after completion of mission." The standard articulated in option 1, which applies to most satellites and derelict upper stages launched, has come to be known as the "25-year rule." The US updated the ODMSP in December 2019, but made no change to the 25-year rule even though "[m]any in the space community believe that the timeframe should be less than 25 years." There is, however, no consensus on what any new timeframe might be.
In 2002, the European Space Agency (ESA) worked with an international group to promulgate a similar set of standards, also with a "25-year rule" applying to most Earth-orbit satellites and upper stages. Space agencies in Europe began to develop technical guidelines in the mid-1990s, and ASI, UKSA, CNES, DLR and ESA signed a "European Code of Conduct" in 2006, a predecessor to the ISO international standard work that would begin the following year. In 2008, ESA further developed its own "Requirements on Space Debris Mitigation for Agency Projects", which came into force on 1 April 2008. Germany and France have posted bonds to safeguard property from debris damage. The "direct retrieval" option (option 3 in the US standard practices above) has rarely been exercised by any spacefaring nation (the USAF X-37 being an exception) or commercial actor since the earliest days of spaceflight, owing to the cost and complexity of direct retrieval, but ESA has scheduled a 2025 demonstration mission (ClearSpace-1) to retrieve a single small 100 kg (220 lb) derelict object at a projected cost of €120 million, not including launch costs.
By 2006, the Indian Space Research Organization (ISRO) had developed a number of technical means of debris mitigation (upper stage passivation, propellant reserves for movement to graveyard orbits, etc.) for ISRO launch vehicles and satellites, and was actively contributing to inter-agency debris coordination and the efforts of the UN COPUOS committee.
In 2007, the ISO began preparing an international standard for space-debris mitigation. By 2010, ISO had published "a comprehensive set of space system engineering standards aimed at mitigating space debris", with primary requirements defined in the top-level standard, ISO 24113. By 2017, the standards were nearly complete. However, these standards are not binding on any party by ISO or any international jurisdiction. They are simply available for use in any of a variety of voluntary ways. They "can be adopted voluntarily by a spacecraft manufacturer or operator, or brought into effect through a commercial contract between a customer and supplier, or used as the basis for establishing a set of national regulations on space debris mitigation." The voluntary ISO standard also adopted the "25-year rule" for the "LEO protected region" below 2000 km altitude that has been previously (and still is, as of 2019) used by the US, ESA, and UN mitigation standards, and identifies it as "an upper limit for the amount of time that a space system shall remain in orbit after its mission is completed. Ideally, the time to deorbit should be as short as possible (i.e. much shorter than 25 years)".
NORAD, Gabbard and Kessler
To avoid artificial space debris, many—but not all—research satellites are launched on elliptical orbits with perigees inside Earth's atmosphere so they will destroy themselves. Willy Ley predicted in 1960 that "In time, a number of such accidentally too-lucky shots will accumulate in space and will have to be removed when the era of manned space flight arrives". After the launch of Sputnik 1 in 1957, the North American Aerospace Defense Command (NORAD) began compiling a database (the Space Object Catalog) of all known rocket launches and objects reaching orbit: satellites, protective shields and upper- and lower-stage booster rockets. NASA later published modified versions of the database in two-line element set, and during the early 1980s the CelesTrak bulletin board system re-published them.
The trackers who fed the database were aware of other objects in orbit, many of which were the result of in-orbit explosions. Some were deliberately caused during the 1960s anti-satellite weapon (ASAT) testing, and others were the result of rocket stages blowing up in orbit as leftover propellant expanded and ruptured their tanks. To improve tracking, NORAD employee John Gabbard kept a separate database. Studying the explosions, Gabbard developed a technique for predicting the orbital paths of their products, and Gabbard diagrams (or plots) are now widely used. These studies were used to improve the modeling of orbital evolution and decay.
When the NORAD database became publicly available during the 1970s, NASA scientist Donald J. Kessler applied the technique developed for the asteroid-belt study to the database of known objects. In 1978 Kessler and Burton Cour-Palais co-authored "Collision Frequency of Artificial Satellites: The Creation of a Debris Belt", demonstrating that the process controlling asteroid evolution would cause a similar collision process in LEO in decades rather than billions of years. They concluded that by about 2000, space debris would outpace micrometeoroids as the primary ablative risk to orbiting spacecraft.
At the time, it was widely thought that drag from the upper atmosphere would de-orbit debris faster than it was created. However, Gabbard was aware that the number and type of objects in space were under-represented in the NORAD data and was familiar with its behavior. In an interview shortly after the publication of Kessler's paper, Gabbard coined the term "Kessler syndrome" to refer to the accumulation of debris; it became widely used after its appearance in a 1982 Popular Science article, which won the Aviation-Space Writers Association 1982 National Journalism Award.
The lack of hard data about space debris prompted a series of studies to better characterize the LEO environment. In October 1979, NASA provided Kessler with funding for further studies. Several approaches were used by these studies.
Optical telescopes and short-wavelength radar were used to measure the number and size of space objects, and these measurements demonstrated that the published population count was at least 50% too low. Before this, it was believed that the NORAD database accounted for the majority of large objects in orbit. Some objects (typically, U.S. military spacecraft) were found to be omitted from the NORAD list, and others were not included because they were considered unimportant. The list could not easily account for objects under 20 cm (7.9 in) in size—in particular, debris from exploding rocket stages and several 1960s anti-satellite tests.
Returned spacecraft were microscopically examined for small impacts, and recovered sections of Skylab and the Apollo Command/Service Module were found to be pitted. Each study indicated that the debris flux was higher than expected and that debris was the primary source of MMOD collisions in space; LEO was already demonstrating the Kessler syndrome.
In 1978 Kessler found that 42 percent of cataloged debris was the result of 19 events, primarily explosions of spent rocket stages (especially U.S. Delta rockets). He discovered this by first identifying launches described as having a large number of objects associated with a payload, then researching the literature to determine the rockets used in those launches. In 1979, this finding led to the establishment of the NASA Orbital Debris Program after a briefing to NASA senior management; it overturned the previously held belief that most unknown debris came from old ASAT tests, showing instead that it came from U.S. Delta upper-stage explosions, which could be easily managed by depleting the unused fuel following payload injection. Beginning in 1986, when it was discovered that other international agencies were possibly experiencing the same type of problem, NASA expanded its program to include international agencies, the first being the European Space Agency. A number of other Delta components in orbit (Delta was a workhorse of the U.S. space program) had not yet exploded.
A new Kessler syndrome
During the 1980s, the U.S. Air Force conducted an experimental program to determine what would happen if debris collided with satellites or other debris. The study demonstrated that the process differed from micrometeoroid collisions, with large chunks of debris created which would become collision threats.
In 1991, Kessler published "Collisional cascading: The limits of population growth in low Earth orbit" with the best data then available. Citing the USAF conclusions about the creation of debris, he wrote that although almost all debris objects (such as paint flecks) were lightweight, most of the debris mass was in objects of about 1 kg (2.2 lb) or heavier. This mass could destroy a spacecraft on impact, creating more debris in the critical-mass area. According to the National Academy of Sciences:
A 1-kg object impacting at 10 km/s, for example, is probably capable of catastrophically breaking up a 1,000-kg spacecraft if it strikes a high-density element in the spacecraft. In such a breakup, numerous fragments larger than 1 kg would be created.
Kessler's analysis divided the problem into three parts. With a low enough density, the addition of debris by impacts is slower than their decay rate and the problem is not significant. Beyond that is a critical density, where additional debris leads to additional collisions. At densities beyond this critical value, production exceeds decay, leading to a cascading chain reaction that reduces the orbiting population to small objects (several centimetres in size) and increases the hazard of space activity. This chain reaction is known as the Kessler syndrome.
In an early 2009 historical overview, Kessler summed up the situation:
Aggressive space activities without adequate safeguards could significantly shorten the time between collisions and produce an intolerable hazard to future spacecraft. Some of the most environmentally dangerous activities in space include large constellations such as those initially proposed by the Strategic Defense Initiative in the mid-1980s, large structures such as those considered in the late-1970s for building solar power stations in Earth orbit, and anti-satellite warfare using systems tested by the USSR, the U.S., and China over the past 30 years. Such aggressive activities could set up a situation where a single satellite failure could lead to cascading failures of many satellites in a period much shorter than years.
During the 1980s, NASA and other U.S. groups attempted to limit the growth of debris. One effective solution was implemented by McDonnell Douglas on the Delta booster, by having the booster move away from its payload and vent any propellant remaining in its tanks. This eliminated one source for pressure buildup in the tanks which had caused them to explode and create additional orbital debris in the past. Other countries were slower to adopt this measure and, due especially to a number of launches by the Soviet Union, the problem grew throughout the decade.
A new battery of studies followed as NASA, NORAD and others attempted to better understand the orbital environment, with each adjusting the number of pieces of debris in the critical-mass zone upward. Although in 1981 (when Schefter's article was published) the number of objects was estimated at 5,000, new detectors in the Ground-based Electro-Optical Deep Space Surveillance system found new objects. By the late 1990s, it was thought that most of the 28,000 launched objects had already decayed and about 8,500 remained in orbit. By 2005 this was adjusted upward to 13,000 objects, and a 2006 study increased the number to 19,000 as a result of an ASAT test and a satellite collision. In 2011, NASA said that 22,000 objects were being tracked.
The growth in the number of objects as a result of the late-1990s studies sparked debate in the space community on the nature of the problem and the earlier dire warnings. According to Kessler's 1991 derivation and 2001 updates, the LEO environment in the 1,000 km (620 mi) altitude range should be cascading. However, only one major incident has occurred: the 2009 satellite collision between Iridium 33 and Cosmos 2251. The lack of obvious short-term cascading has led to speculation that the original estimates overstated the problem. According to Kessler a cascade would not be obvious until it was well advanced, which might take years.
A 2006 NASA model suggested that if no new launches took place the environment would retain the then-known population until about 2055, when it would increase on its own. Richard Crowther of Britain's Defence Evaluation and Research Agency said in 2002 that he believed the cascade would begin about 2015. The National Academy of Sciences, summarizing the professional view, noted widespread agreement that two bands of LEO space—900 to 1,000 km (560 to 620 mi) and 1,500 km (930 mi)—were already past critical density.
In the 2009 European Air and Space Conference, University of Southampton researcher Hugh Lewis predicted that the threat from space debris would rise 50 percent in the next decade and quadruple in the next 50 years. As of 2009, more than 13,000 close calls were tracked weekly.
A 2011 report by the U.S. National Research Council warned NASA that the amount of orbiting space debris was at a critical level. According to some computer models, the amount of space debris "has reached a tipping point, with enough currently in orbit to continually collide and create even more debris, raising the risk of spacecraft failures". The report called for international regulations limiting debris and research of disposal methods.
By the late 2010s, plans by multiple providers to deploy large megaconstellations of broadband internet satellites had been licensed by regulatory authorities, with operational satellites entering production by both OneWeb and SpaceX. The first deployments occurred in 2019, with six satellites from OneWeb, followed in May by 60 satellites of 227 kg (500 lb) each from SpaceX, the first of its Starlink constellation. While the increased satellite density raises concerns, both licensing authorities and the manufacturers are aware of the debris problem: vendors must have debris-reduction plans, and are taking measures to actively de-orbit unneeded satellites and/or ensure their orbits will decay naturally.
In popular culture
The plot of episode 4 ("Conflict") of Gerry Anderson's 1970 TV series UFO includes routine missions for the disposal of spent satellites by bombing.
Salvage 1 (1979 TV series) deals humorously with a scrap dealer who establishes a space junk salvage company.
Planetes is a manga (1999–2004) and anime series (2003–2004) that focuses on a team responsible for the collection and disposal of space debris. The DVDs for the TV series include interviews with NASA's Orbital Debris Program Office.
In 2009, Rhett & Link wrote a song called "Space Junk" and made an accompanying music video for the TV series Brink. The lyrics describe two men tasked with cleaning up debris such as satellites and expended rockets.
Antariksham 9000 KMPH is a 2018 Indian Telugu-language science fiction adventure film about a lost Indian satellite that may trigger a Kessler syndrome.
- Category:Derelict satellites
- Interplanetary contamination
- Liability Convention
- List of large reentering space debris
- List of space debris producing events
- Long Duration Exposure Facility
- Near-Earth object
- OneWeb satellite constellation
- Orbital Debris Co-ordination Working Group
- Project West Ford
- Satellite warfare
- Solar Maximum Mission
- Spacecraft cemetery
- "GUIDE TO SPACE DEBRIS from the spaceacademy.net.au". Archived from the original on 26 August 2018. Retrieved 13 August 2018.
- Coase, Ronald (October 1960). "The Problem of Social Cost" (PDF). Journal of Law and Economics (PDF). The University of Chicago Press. 3: 1–44. doi:10.1086/466560. JSTOR 724810.
- Heyne, Paul; Boettke, Peter J.; Prychitko, David L. (2014). The Economic Way of Thinking (13th ed.). Pearson. pp. 227–28. ISBN 978-0-13-299129-2.
- Muñoz-Patchen, Chelsea (2019). "Regulating the Space Commons: Treating Space Debris as Abandoned Property in Violation of the Outer Space Treaty". Chicago Journal of International Law. University of Chicago Law School. Retrieved 13 December 2019.
- "NASA Orbital Debris Program". Archived from the original on 3 November 2016. Retrieved 10 October 2016.
- "SATELLITE BOX SCORE" (PDF). Orbital Debris Quarterly News. Vol. 20 no. 3. NASA. July 2016. p. 8. Archived (PDF) from the original on 11 October 2016. Retrieved 10 October 2016.
- "UCS Satellite Database". Nuclear Weapons & Global Security. Union of Concerned Scientists. 11 August 2016. Archived from the original on 3 June 2010. Retrieved 10 October 2016.
- "Space debris by the numbers" Archived 6 March 2019 at the Wayback Machine ESA, January 2019. Retrieved 5 March 2019
- "The Threat of Orbital Debris and Protecting NASA Space Assets from Satellite Collisions" (PDF). Space Reference. 2009. Archived (PDF) from the original on 16 January 2013. Retrieved 18 December 2012.
- The Threat of Orbital Debris and Protecting NASA Space Assets from Satellite Collisions (PDF), Space Reference, 2009, archived (PDF) from the original on 16 January 2013, retrieved 18 December 2012
- Donald J. Kessler (8 March 2009). "The Kessler Syndrome". Archived from the original on 27 May 2010. Retrieved 22 September 2009.
- Lisa Grossman, "NASA Considers Shooting Space Junk with Lasers" Archived 22 February 2014 at the Wayback Machine, wired, 15 March 2011.
- "Technical report on space debris" (PDF). nasa.gov. United Nations. 2009. ISBN 92-1-100813-1. Archived from the original (PDF) on 24 July 2009.
- "Orbital Debris FAQ: How much orbital debris is currently in Earth orbit?" Archived 25 August 2009 at the Wayback Machine NASA, March 2012. Retrieved 31 January 2016
- Joseph Carroll, "Space Transport Development Using Orbital Debris" Archived 19 June 2010 at the Wayback Machine, NASA Institute for Advanced Concepts, 2 December 2002, p. 3.
- Robin McKie and Michael Day, "Warning of catastrophe from mass of 'space junk'" Archived 16 March 2017 at the Wayback Machine The Guardian, 23 February 2008.
- Matt Ford, "Orbiting space junk heightens risk of satellite catastrophes." Archived 5 April 2012 at the Wayback Machine Ars Technica, 27 February 2009.
- "What are hypervelocity impacts?" Archived 9 August 2011 at the Wayback Machine ESA, 19 February 2009.
- "Orbital Debris Quarterly News, July 2011" (PDF). NASA Orbital Debris Program Office. Archived from the original (PDF) on 20 October 2011. Retrieved 1 January 2012.
- Kessler syndrome renders orbits unusable, not impassable. Space debris in LEO is chiefly a danger to craft that remain in LEO for extended periods. Spacecraft that cross LEO en route to a higher orbit spend very little time at the altitude of LEO, and therefore would face a correspondingly low risk.
- Kessler 1991, p. 65.
- Kessler 1991, p. 268.
- Schildknecht, T.; Musci, R.; Flury, W.; Kuusela, J.; De Leon, J.; Dominguez Palmero, L. De Fatima (2005). "Optical observation of space debris in high-altitude orbits". Proceedings of the 4th European Conference on Space Debris (ESA SP-587). 18–20 April 2005. 587: 113. Bibcode:2005ESASP.587..113S.
- "Colocation Strategy and Collision Avoidance for the Geostationary Satellites at 19 Degrees West." CNES Symposium on Space Dynamics, 6–10 November 1989.
- van der Ha, J. C.; Hechler, M. (1981). "The Collision Probability of Geostationary Satellites". 32nd International Astronautical Congress. 1981: 23. Bibcode:1981rome.iafcR....V.
- Anselmo, L.; Pardini, C. (2000). "Collision Risk Mitigation in Geostationary Orbit". Space Debris. 2 (2): 67–82. doi:10.1023/A:1021255523174.
- Orbital Debris, p. 86.
- Orbital Debris, p. 152.
- "The Olympus failure" ESA press release, 26 August 1993. Archived 11 September 2007 at the Wayback Machine
- "Notification for Express-AM11 satellite users in connection with the spacecraft failure" Russian Satellite Communications Company, 19 April 2006.
- "Vanguard 1". Archived from the original on 15 August 2019. Retrieved 4 October 2019.
- "Vanguard I celebrates 50 years in space". Eurekalert.org. Archived from the original on 5 June 2013. Retrieved 4 October 2013.
- Julian Smith, "Space Junk" USA Weekend, 26 August 2007.
- "Vanguard 50 years". Archived from the original on 5 June 2013. Retrieved 4 October 2013.
- "UCS Satellite Database" Archived 3 June 2010 at the Wayback Machine Union of Concerned Scientists, 16 July 2009.
- C. Wiedemann et al, "Size distribution of NaK droplets for MASTER-2009", Proceedings of the 5th European Conference on Space Debris, 30 March-2 April 2009, (ESA SP-672, July 2009).
- A. Rossi et al, "Effects of the RORSAT NaK Drops on the Long Term Evolution of the Space Debris Population" , University of Pisa, 1997.
- Gruss, Mike (6 May 2015). "DMSP-F13 Debris To Stay On Orbit for Decades". Space News. Retrieved 7 May 2015.
- See image here.
- Loftus, Joseph P. (1989). Orbital Debris from Upper-stage Breakup. AIAA. p. 227. ISBN 978-1-60086-376-9.
- Some return to Earth intact, see this list Archived 28 October 2009 at the Wayback Machine for examples.
- Phillip Anz-Meador and Mark Matney, "An assessment of the NASA explosion fragmentation model to 1 mm characteristic sizes Archived 17 October 2015 at the Wayback Machine" Advances in Space Research, Volume 34 Issue 5 (2004), pp. 987–992.
- "Debris from explosion of Chinese rocket detected by University of Chicago satellite instrument", University of Chicago press release, 10 August 2000.
- "Rocket Explosion" Archived 30 January 2008 at the Wayback Machine, Spaceweather.com, 22 February 2007. Retrieved 21 February 2007.
- Ker Than, "Rocket Explodes Over Australia, Showers Space with Debris" Archived 24 July 2008 at the Wayback Machine Space.com, 21 February 2007. Retrieved 21 February 2007.
- "Recent Debris Events" Archived 20 March 2007 at the Wayback Machine celestrak.com, 16 March 2007. Retrieved 14 July 2001.
- Jeff Hecht, "Spate of rocket breakups creates new space junk" Archived 14 August 2014 at the Wayback Machine, NewScientist, 17 January 2007. Retrieved 16 March 2007.
- "Proton Launch Failure 2012 Aug 6". Zarya. 21 October 2012. Archived from the original on 10 October 2012. Retrieved 21 October 2012.
- Mike, Wall (28 July 2016). "Amazing Fireball Over Western US Caused by Chinese Space Junk". space.com. Archived from the original on 29 July 2016. Retrieved 28 July 2016.
- "Major fragmentation of Atlas 5 Centaur upper stage 2014‐055B (SSN #40209)" (PDF).
- "Rocket break up provides rare chance to test debris formation". Archived from the original on 16 May 2019. Retrieved 22 May 2019.
- "Confirmed breakup of ATLAS 5 CENTAUR R/B (2018-079B, #43652) on April 6, 2019". Archived from the original on 2 May 2019. Retrieved 22 May 2019.
- Note that the list Schefter was presented with identified only USSR ASAT tests.
- Clayton Chun, "Shooting Down a Star: America's Thor Program 437, Nuclear ASAT, and Copycat Killers", Maxwell AFB Base, AL: Air University Press, 1999. ISBN 1-58566-071-X.
- David Wright, "Debris in Brief: Space Debris from Anti-Satellite Weapons" Archived 9 September 2009 at the Wayback Machine Union of Concerned Scientists, December 2007.
- Leonard David, "China's Anti-Satellite Test: Worrisome Debris Cloud Circles Earth" Archived 6 January 2011 at the Wayback Machine space.com, 2 February 2007.
- "Fengyun 1C – Orbit Data" Archived 18 March 2012 at the Wayback Machine Heavens Above.
- Brian Burger, "NASA's Terra Satellite Moved to Avoid Chinese ASAT Debris" Archived 13 May 2008 at the Wayback Machine, space.com. Retrieved 6 July 2007.
- "Pentagon: Missile Scored Direct Hit on Satellite." Archived 6 January 2018 at the Wayback Machine, npr.org, 21 February 2008.
- Jim Wolf, "US satellite shootdown debris said gone from space" Archived 14 July 2009 at the Wayback Machine, Reuters, 27 February 2009.
- Chavez, Nicole; Pokharel, Sugam (28 March 2019). "India conducts successful anti-satellite missile operation, Prime Minister says". CNN. Archived from the original on 28 March 2019. Retrieved 28 March 2019.
- "Problem Weltraumschrott: Die kosmische Müllkippe - SPIEGEL ONLINE - Wissenschaft". SPIEGEL ONLINE. Archived from the original on 23 April 2017. Retrieved 22 April 2017.
- Akahoshi, Y.; et al. (2008). "Influence of space debris impact on solar array under power generation". International Journal of Impact Engineering. 35 (12): 1678–1682. doi:10.1016/j.ijimpeng.2008.07.048.
- Kelley, Angelita. "Terra mission operations: Launch to the present (and beyond)" (PDF). Archived (PDF) from the original on 2 December 2017. Retrieved 5 April 2018.
- Fisher, Dominic (13 June 2017). "Mission Status at Aura Science Team MOWG Meeting" (PDF). Retrieved 13 December 2017.
- Becky Iannotta and Tariq Malik, "U.S. Satellite Destroyed in Space Collision" Archived 10 May 2012 at WebCite, space.com, 11 February 2009
- Paul Marks, "Satellite collision 'more powerful than China's ASAT test" Archived 15 February 2009 at the Wayback Machine, New Scientist, 13 February 2009.
- Iridium 33 and Cosmos 2251, Three Years Later, Michael Listner, Space Safety Magazine, 10 February 2012, accessed 14 December 2019.
- "2 big satellites collide 500 miles over Siberia." yahoo.com, 11 February 2009. Retrieved 11 February 2009.
- Becky Iannotta, "U.S. Satellite Destroyed in Space Collision" Archived 10 May 2012 at WebCite, space.com, 11 February 2009. Retrieved 11 February 2009.
- Leonard David. "Russian Satellite Hit by Debris from Chinese Anti-Satellite Test". space.com. Archived from the original on 11 March 2013. Retrieved 10 March 2013.
- "Swarm Satellite Trio Launched To Study Earth's Magnetic Field - SpaceNews.com". SpaceNews.com. 22 November 2013. Retrieved 25 January 2017.
- "Space junk could take out a European satellite this week". CNET. Archived from the original on 25 January 2017. Retrieved 25 January 2017.
- Schefter, p. 50.
- Rob Matson, "Satellite Encounters" Archived 6 October 2010 at the Wayback Machine Visual Satellite Observer's Home Page.
- "STS-48 Space Shuttle Mission Report" Archived 5 January 2016 at the Wayback Machine, NASA, NASA-CR-193060, October 1991.
- Christiansen, E. L.; Hyden, J. L.; Bernhard, R. P. (2004). "Space Shuttle debris and meteoroid impacts". Advances in Space Research. 34 (5): 1097–1103. Bibcode:2004AdSpR..34.1097C. doi:10.1016/j.asr.2003.12.008.
- Kelly, John. "Debris is Shuttle's Biggest Threat" Archived 23 May 2009 at the Wayback Machine, space.com, 5 March 2005.
- "Debris Danger". Aviation Week & Space Technology, Volume 169 Number 10 (15 September 2008), p. 18.
- William Harwood, "Improved odds ease NASA's concerns about space debris" Archived 19 June 2009 at the Wayback Machine, CBS News, 16 April 2009.
- D. Lear et al, "Investigation of Shuttle Radiator Micro-Meteoroid & Orbital Debris Damage" Archived 9 March 2012 at the Wayback Machine, Proceedings of the 50th Structures, Structural Dynamics, and Materials Conference, 4–7 May 2009, AIAA 2009–2361.
- D. Lear, et al, "STS-118 Radiator Impact Damage" Archived 13 August 2011 at the Wayback Machine, NASA
- Smirnov, V.M.; et al. (2000). "Study of Micrometeoroid and Orbital Debris Effects on the Solar Panelson 'MIR'". Space Debris. 2 (1): 1–7. doi:10.1023/A:1015607813420.
- "Orbital Debris FAQ: How did the Mir space station fare during its 15-year stay in Earth orbit?" Archived 25 August 2009 at the Wayback Machine, NASA, July 2009.
- K Thoma et al, "New Protection Concepts for Meteoroid / Debris Shields" Archived 9 April 2008 at the Wayback Machine, Proceedings of the 4th European Conference on Space Debris (ESA SP-587), 18–20 April 2005, p. 445.
- Henry Nahra, "Effect of Micrometeoroid and Space Debris Impacts on the Space Station Freedom Solar Array Surfaces" Archived 6 June 2011 at the Wayback Machine. Presented at the 1989 Spring Meeting of the Materials Research Society, 24–29 April 1989, NASA TR-102287.
- de Selding, Peter B. (16 January 2014). "Space Station Required No Evasive Maneuvers in 2013 Despite Growing Debris Threat". Space News. Retrieved 17 January 2014.
- "Junk alert for space station crew" Archived 18 March 2009 at the Wayback Machine, BBC News, 12 March 2009.
- "International Space Station in debris scare" Archived 31 October 2018 at the Wayback Machine, BBC News, 28 June 2011.
- Haines, Lester. "ISS spared space junk avoidance manoeuvre" Archived 10 August 2017 at the Wayback Machine, The Register, 17 March 2009.
- "Scientist: Space weapons pose debris threat – CNN". Articles.CNN.com. 3 May 2002. Archived from the original on 30 September 2012. Retrieved 17 March 2011.
- "The Danger of Space Junk – 98.07". TheAtlantic.com. Retrieved 17 March 2011.
- Donald J. Kessler and Burton G. Cour-Palais (1978). "Collision Frequency of Artificial Satellites: The Creation of a Debris Belt". Journal of Geophysical Research. 83 (A6): 2637–2646. Bibcode:1978JGR....83.2637K. doi:10.1029/JA083iA06p02637.
- Kessler 1991, p. 63.
- Bechara J. Saab, "Planet Earth, Space Debris" Archived 25 April 2012 at the Wayback Machine, Hypothesis Volume 7 Issue 1 (September 2009).
- Jan Stupl et al, "Debris-debris collision avoidance using medium power ground-based lasers", 2010 Beijing Orbital Debris Mitigation Workshop, 18–19 October 2010, see graph p. 4 Archived 9 March 2012 at the Wayback Machine
- Brown, M. (2012). Orbital Debris Frequently Asked Questions. Retrieved from https://orbitaldebris.jsc.nasa.gov/faq.html Archived 28 March 2019 at the Wayback Machine.
- U.S. Congress, Office of Technology Assessment, "Orbiting Debris: A Space Environmental Problem" Archived 4 March 2016 at the Wayback Machine, Background Paper, OTA-BP-ISC-72, U.S. Government Printing Office, September 1990, p. 3
- "Today in Science History" Archived 13 January 2006 at the Wayback Machine todayinsci.com. Retrieved 8 March 2006.
- Tony Long, "Jan. 22, 1997: Heads Up, Lottie! It's Space Junk!" Archived 2 January 2018 at the Wayback Machine, wired, 22 January 2009. Retrieved 27 March 2016
- "PAM-D Debris Falls in Saudi Arabia" Archived 16 July 2009 at the Wayback Machine, The Orbital Debris Quarterly News, Volume 6 Issue 2 (April 2001).
- "Debris Photos" Archived 25 December 2017 at the Wayback Machine NASA.
- Wallach, Dan (1 February 2016). "Columbia shuttle tragedy marks Sabine County town". Archived from the original on 9 May 2018. Retrieved 8 May 2018.
- "Columbia Accident Investigation Report, Volume II Appendix D.10" (PDF). Archived from the original on 17 October 2011. Retrieved 10 May 2018.CS1 maint: BOT: original-url status unknown (link)
- "Shuttle Debris Falls on East Texas, Louisiana". 1 February 2003. Archived from the original on 9 May 2018. Retrieved 8 May 2018.
- "Debris Warning" Archived 17 October 2015 at the Wayback Machine NASA.
- "Debris from fallen space shuttle Columbia has new mission 15 years after tragedy". 1 February 2018. Archived from the original on 6 February 2018. Retrieved 8 May 2018.
- Jano Gibson, "Jet's flaming space junk scare" Archived 6 December 2011 at the Wayback Machine, The Sydney Morning Herald, 28 March 2007.
- D. Mehrholz et al;"Detecting, Tracking and Imaging Space Debris" Archived 10 July 2009 at the Wayback Machine, ESA bulletin 109, February 2002.
- Ben Greene, "Laser Tracking of Space Debris" Archived 18 March 2009 at the Wayback Machine, Electro Optic Systems Pty
- "Orbital debris: Optical Measurements" Archived 15 February 2012 at the Wayback Machine, NASA Orbital Debris Program Office
- Pantaleo, Rick. "Australian Scientists Track Space Junk by Listening to FM Radio". web. Archived from the original on 4 December 2013. Retrieved 3 December 2013.
- Englert, Christoph R.; Bays, J. Timothy; Marr, Kenneth D.; Brown, Charles M.; Nicholas, Andrew C.; Finne, Theodore T. (2014). "Optical orbital debris spotter". Acta Astronautica. 104 (1): 99–105. Bibcode:2014AcAau.104...99E. doi:10.1016/j.actaastro.2014.07.031.
- Grant Stokes et al, "The Space-Based Visible Program", MIT Lincoln Laboratory. Retrieved 8 March 2006.
- "MIT Haystack Observatory" Archived 29 November 2004 at the Wayback Machine haystack.mit.edu. Retrieved 8 March 2006.
- "AN/FPS-108 COBRA DANE." Archived 5 February 2016 at the Wayback Machine fas.org. Retrieved 8 March 2006.
- Darius Nikanpour, "Space Debris Mitigation Technologies" Archived 19 October 2012 at the Wayback Machine, Proceedings of the Space Debris Congress, 7–9 May 2009.
- "STS-76 Mir Environmental Effects Payload (MEEP)". NASA. March 1996. Archived from the original on 18 April 2011. Retrieved 8 March 2011.
- MEEP Archived 5 June 2011 at the Wayback Machine, NASA, 4 April 2002. Retrieved 8 July 2011
- "STS-76 Mir Environmental Effects Payload (MEEP)" Archived 29 June 2011 at the Wayback Machine, NASA, March 1996. Retrieved 8 March 2011.
- David Portree and Joseph Loftus. "Orbital Debris: A Chronology" Archived 1 September 2000 at the Wayback Machine, NASA, 1999, p. 13.
- David Whitlock, "History of On-Orbit Satellite Fragmentations" Archived 3 January 2006 at the Wayback Machine, NASA JSC, 2004
- Johnson, Nicholas (5 December 2011). "Space debris issues". audio file, @0:05:50-0:07:40. The Space Show. Archived from the original on 27 January 2012. Retrieved 8 December 2011.
- Foust, Jeff (25 November 2014). "Companies Have Technologies, but Not Business Plans, for Orbital Debris Cleanup". Space News. Retrieved 6 December 2014.
- Foust, Jeff (24 November 2014). "Industry Worries Government 'Backsliding' on Orbital Debris". Space News. Archived from the original on 8 December 2014. Retrieved 8 December 2014.
Despite growing concern about the threat posed by orbital debris, and language in U.S. national space policy directing government agencies to study debris cleanup technologies, many in the space community worry that the government is not doing enough to implement that policy.
- Northfield, Rebecca (20 June 2018). "Women of Nasa: past, present and future". eandt.theiet.org. Archived from the original on 21 January 2019. Retrieved 20 January 2019.
- "USA Space Debris Environment, Operations, and Policy Updates" (PDF). NASA. UNOOSA. Retrieved 1 October 2011.
- Johnson, Nicholas (5 December 2011). "Space debris issues". audio file, @1:03:05-1:06:20. The Space Show. Archived from the original on 27 January 2012. Retrieved 8 December 2011.
- Eric Ralph (19 April 2019). "SpaceX's Falcon Heavy flies a complex mission for the Air Force in launch video". Teslarati. Retrieved 14 December 2019.
- "OneWeb Taps Airbus To Build 900 Internet Smallsats". SpaceNews. 15 June 2015. Retrieved 19 June 2015.
- Foust, Jeff (9 December 2019). "U.S. government updates orbital debris mitigation guidelines". SpaceNews. Retrieved 14 December 2019.
the first update of the guidelines since their publication in 2001, and reflect a better understanding of satellite operations and other technical issues that contribute to the growing population of orbital debris. ...[The new 2019 guidelines] did not address one of the biggest issues regarding debris mitigation: whether to reduce the 25-year timeframe for deorbiting satellites after the end of their mission. Many in the space community believe that timeframe should be less than 25 years
- Brodkin, Jon (4 October 2017). "SpaceX and OneWeb broadband satellites raise fears about space debris". Ars Technica. Archived from the original on 6 October 2017. Retrieved 7 October 2017.
- Frank Zegler and Bernard Kutter, "Evolving to a Depot-Based Space Transportation Architecture", AIAA SPACE 2010 Conference & Exposition, 30 August-2 September 2010, AIAA 2010–8638. Archived 10 May 2013 at the Wayback Machine
- "Robotic refueling Mission". Archived from the original on 10 August 2011. Retrieved 30 July 2012.
- Bergin, Chris (27 September 2016). "SpaceX reveals ITS Mars game changer via colonization plan". NASASpaceFlight.com. Archived from the original on 28 September 2016. Retrieved 21 October 2016.
- "https://www.nasaspaceflight.com/2018/03/rocket-lab-capitalize-test-flight-success-first-operational-mission/". Archived from the original on 7 March 2018. Retrieved 8 March 2018. External link in
- Luc Moliner, "Spot-1 Earth Observation Satellite Deorbitation" Archived 16 January 2011 at the Wayback Machine, AIAA, 2002.
- "Spacecraft: Spot 3" Archived 30 September 2011 at the Wayback Machine, agi, 2003
- Bill Christensen, "The Terminator Tether Aims to Clean Up Low Earth Orbit" Archived 26 November 2009 at the Wayback Machine, space.com. Retrieved 8 March 2006.
- Jonathan Amos, "How satellites could 'sail' home" Archived 1 July 2009 at the Wayback Machine, BBC News, 3 May 2009.
- "Safe And Efficient De-Orbit Of Space Junk Without Making The Problem Worse". Space Daily. 3 August 2010. Archived from the original on 14 October 2013. Retrieved 16 September 2013.
- "Experts: Active Removal Key To Countering Space Junk Threat" Peter B. de Selding, Space.com 31 October 2012.
- Stefan Lovgren, "Space Junk Cleanup Needed, NASA Experts Warn." Archived 7 September 2009 at the Wayback Machine National Geographic News, 19 January 2006.
- Jan, McHarg (10 August 2012). "Project aims to remove space debris". Phys.org. Archived from the original on 5 October 2013. Retrieved 3 April 2013.
- Erika Carlson et al, "Final design of a space debris removal system", NASA/CR-189976, 1990.
- "Intelsat Picks MacDonald, Dettwiler and Associates Ltd. for Satellite Servicing" Archived 12 May 2011 at the Wayback Machine, CNW Newswire, 15 March 2011. Retrieved 15 July 2011.
- Peter de Selding, "MDA Designing In-orbit Servicing Spacecraft", Space News, 3 March 2010. Retrieved 15 July 2011.
- Schaub, H.; Sternovsky, Z. (2013). "Active Space Debris Charging for Contactless Electrostatic Disposal". Advances in Space Research. 53 (1): 110–118. Bibcode:2014AdSpR..53..110S. doi:10.1016/j.asr.2013.10.003.
- "News" Archived 27 March 2010 at the Wayback Machine, Star Inc. Retrieved 18 July 2011.
- "Cleaning up Earth's orbit: A Swiss satellite tackles space junk". EPFL. 15 February 2012. Archived from the original on 28 May 2013. Retrieved 3 April 2013.
- "Space Debris Removal | Cleanspace One". Space Debris Removal | Cleanspace One. Archived from the original on 2 December 2017. Retrieved 1 December 2017.
- "VV02 – Vega uses Vespa". www.esa.int. Archived from the original on 17 October 2019. Retrieved 13 December 2019.
- "European Space Agency to launch space debris collector in 2025". The Guardian. 9 December 2019. Archived from the original on 9 December 2019. Retrieved 13 December 2019.
- Jonathan Campbell, "Using Lasers in Space: Laser Orbital Debris Removal and Asteroid Deflection" Archived 7 December 2010 at the Wayback Machine, Occasional Paper No. 20, Air University, Maxwell Air Force Base, December 2000.
- Mann, Adam (26 October 2011). "Space Junk Crisis: Time to Bring in the Lasers". Wired Science. Archived from the original on 29 October 2011. Retrieved 1 November 2011.
- Ivan Bekey, "Project Orion: Orbital Debris Removal Using Ground-Based Sensors and Lasers.", Second European Conference on Space Debris, 1997, ESA-SP 393, p. 699.
- Justin Mullins "A clean sweep: NASA plans to carry out a spot of housework.", New Scientist, 16 August 2000.
- Tony Reichhardt, "Satellite Smashers" Archived 29 July 2012 at Archive.today, Air & Space Magazine, 1 March 2008.
- James Mason et al, "Orbital Debris-Debris Collision Avoidance" Archived 9 November 2018 at the Wayback Machine, arXiv:1103.1690v2, 9 March 2011.
- C. Bombardelli and J. Peláez, "Ion Beam Shepherd for Contactless Space Debris Removal". Journal of Guidance, Control, and Dynamics, Vol. 34, No. 3, May–June 2011, pp 916–920. http://sdg.aero.upm.es/PUBLICATIONS/PDF/2011/AIAA-51832-628.pdf Archived 9 March 2012 at the Wayback Machine
- Daniel Michaels, "A Cosmic Question: How to Get Rid Of All That Orbiting Space Junk?" Archived 23 October 2017 at the Wayback Machine, Wall Street Journal, 11 March 2009.
- "Company floats giant balloon concept as solution to space mess" Archived 27 September 2011 at the Wayback Machine, Global Aerospace Corp press release, 4 August 2010.
- "Space Debris Removal" Archived 16 August 2010 at the Wayback Machine, Star-tech-inc.com. Retrieved 18 July 2011.
- Foust, Jeff (5 October 2011). "A Sticky Solution for Grabbing Objects in Space". MIT Technology Review. Retrieved 7 October 2011.
- Jason Palmer, "Space junk could be tackled by housekeeping spacecraft " Archived 30 May 2018 at the Wayback Machine, BBC News, 8 August 2011
- Roppolo, Michael. "Japan Launches Net Into Space to Help With Orbital Debris". CBS News. 28 February 2014
- "Japan launching 'space junk' collector (Update)". Archived from the original on 2 February 2017. Retrieved 24 January 2017.
- "Japan launches 'space junk' collector – Times of India". The Times of India. Archived from the original on 8 February 2017. Retrieved 24 January 2017.
- "Space cargo ship experiment to clean up debris hits snag". The Japan Times Online. 31 January 2017. Archived from the original on 31 January 2017. Retrieved 2 February 2017.
- "A Japanese Space Junk Removal Experiment Has Failed in Orbit". Space.com. Archived from the original on 1 February 2017. Retrieved 2 February 2017.
- "Japan's troubled 'space junk' mission fails". Archived from the original on 12 February 2017. Retrieved 12 February 2017.
- "E.DEORBIT Mission". ESA. 12 April 2017. Retrieved 6 October 2018.
- Biesbroek, 2012 "Introduction to e.Deorbit" Archived 17 September 2014 at the Wayback Machine. e.deorbit symposium. 6 May 2014
- Clark, Stephen (1 April 2018). "Eliminating space junk could take step toward reality with station cargo launch". Spaceflight Now. Archived from the original on 8 April 2018. Retrieved 6 April 2018.
- "UN Space Debris Mitigation Guidelines" Archived 6 October 2011 at the Wayback Machine, UN Office for Outer Space Affairs, 2010.
- Theresa Hitchens, "COPUOS Wades into the Next Great Space Debate" Archived 26 December 2008 at the Wayback Machine, The Bulletin of the Atomic Scientists, 26 June 2008.
- "U.S. Government Orbital Debris Mitigation Standard Practices" (PDF). United States Federal Government. Archived (PDF) from the original on 16 February 2013. Retrieved 28 November 2013.
- "Orbital Debris – Important Reference Documents." Archived 20 March 2009 at the Wayback Machine, NASA Orbital Debris Program Office.
- "Mitigating space debris generation". European Space Agency. 19 April 2013. Archived from the original on 26 April 2013. Retrieved 13 December 2019.
- "Compliance of Rocket Upper Stages in GTO with Space Debris Mitigation Guidelines". Space Safety Magazine. 18 July 2013. Retrieved 16 February 2016.
- "U.S. Government Orbital Debris Mitigation Standard Practices" (PDF). United States Federal Government. Archived (PDF) from the original on 5 April 2004. Retrieved 13 December 2019.
- Stokes; et al. Flohrer, T.; Schmitz, F. (eds.). STATUS OF THE ISO SPACE DEBRIS MITIGATION STANDARDS (2017) (PDF). 7th European Conference on Space Debris, Darmstadt, Germany, 18-21 April 2017. ESA Space Debris Office.
- Howell, E. (2013). Experts Urge Removal of Space Debris From Orbit. Universe Today. Retrieved from http://www.universetoday.com/101790/experts-urge-removal-of-space-debris-from-orbit/ Archived 5 March 2014 at the Wayback Machine
- Space debris mitigation measures in India, Acta Astronautica, February 2006, Vol. 58, Issue 3, pages 168-174, DOI.
- E A Taylor and J R Davey, "Implementation of debris mitigation using International Organization for Standardization (ISO) standards" Archived 9 March 2012 at the Wayback Machine, Proceedings of the Institution of Mechanical Engineers: G, Volume 221 Number 8 (1 June 2007), pp. 987 – 996.
- Ley, Willy (August 1960). "How to Slay Dragons". For Your Information. Galaxy Science Fiction. pp. 57–72.
- Felix Hoots, Paul Schumacher Jr.; Glover, Robert (2004). "History of Analytical Orbit Modeling in the U.S. Space Surveillance System". Journal of Guidance Control and Dynamics. 27 (2): 174–185. Bibcode:2004JGCD...27..174H. doi:10.2514/1.9161.
- T.S. Kelso, CelesTrak BBS: Historical Archives Archived 17 July 2012 at Archive.today, 2-line elements dating to 1980
- Schefter, p. 48.
- Kessler 1978
- Kessler 1981
- "pdf of article" (PDF). Archived (PDF) from the original on 14 August 2018. Retrieved 14 August 2018.
- Technical, p. 4
- See charts, Hoffman p. 7.
- See chart, Hoffman p. 4.
- In the time between writing of Klinkrad (2006) Chapter 1 (earlier) and the Prolog (later) of Space Debris, Klinkrad changed the number from 8,500 to 13,000 – compare p. 6 and ix.
- Michael Hoffman, "It's getting crowded up there." Space News, 3 April 2009.
- "Space Junk Threat Will Grow for Astronauts and Satellites" Archived 9 April 2011 at the Wayback Machine, Fox News, 6 April 2011.
- Kessler 2001
- J.-C Liou and N. L. Johnson, "Risks in Space from Orbiting Debris" Archived 1 June 2008 at the Wayback Machine, Science, Volume 311 Number 5759 (20 January 2006), pp. 340 – 341
- Antony Milne, Sky Static: The Space Debris Crisis, Greenwood Publishing Group, 2002, ISBN 0-275-97749-8, p. 86.
- Technical, p. 7.
- Paul Marks, "Space debris threat to future launches" Archived 26 April 2015 at the Wayback Machine, 27 October 2009.
- Space junk at tipping point, says report Archived 21 December 2017 at the Wayback Machine, BBC News, 2 September 2011
- "Starlink Press Kit" (PDF). SpaceX. 15 May 2019. Retrieved 23 May 2019.
- Foust, Jeff (1 July 2019). "Starlink failures highlight space sustainability concerns". SpaceNews. Retrieved 3 July 2019.
- Rhett & Link (6 July 2009). "Space Junk Song". YouTube (video). Archived from the original on 18 August 2018. Retrieved 18 August 2018.
- Donald Kessler (Kessler 1991), "Collisional Cascading: The Limits of Population Growth in Low Earth Orbit", Advances in Space Research, Volume 11 Number 12 (December 1991), pp. 63 – 66.
- Donald Kessler (Kessler 1971), "Estimate of Particle Densities and Collision Danger for Spacecraft Moving Through the Asteroid Belt", Physical Studies of Minor Planets, NASA SP-267, 1971, pp. 595 – 605. Bibcode 1971NASSP.267..595K.
- Donald Kessler (Kessler 2009), webpages.charter.net, 8 March 2009.
- Donald Kessler (Kessler 1981), "Sources of Orbital Debris and the Projected Environment for Future Spacecraft", Journal of Spacecraft, Volume 16 Number 4 (July–August 1981), pp. 357 – 360.
- Donald Kessler and Burton Cour-Palais (Kessler 1978), "Collision Frequency of Artificial Satellites: The Creation of a Debris Belt" Journal of Geophysical Research, Volume 83, Number A6 (1 June 1978), pp. 2637–2646.
- Donald Kessler and Phillip Anz-Meador, "Critical Number of Spacecraft in Low Earth Orbit: Using Fragmentation Data to Evaluate the Stability of the Orbital Debris Environment", Presented at the Third European Conference on Space Debris, March 2001.
- (Technical), "Orbital Debris: A Technical Assessment" National Academy of Sciences, 1995. ISBN 0-309-05125-8.
- Jim Schefter, "The Growing Peril of Space Debris" Popular Science, July 1982, pp. 48 – 51.
- "What is Orbital Debris?", Center for Orbital and Reentry Debris Studies, Aerospace Corporation
- Committee for the Assessment of NASA's Orbital Debris Programs (2011). Limiting Future Collision Risk to Spacecraft: An Assessment of NASA's Meteoroid and Orbital Debris Programs. Washington, D.C.: National Research Council. ISBN 978-0-309-21974-7.
- Klotz, Irene (1 September 2011). "Space junk reaching 'tipping point,' report warns". Reuters. Retrieved 2 September 2011. News item summarizing the above report
- Steven A. Hildreth and Allison Arnold. Threats to U.S. National Security Interests in Space: Orbital Debris Mitigation and Removal. Washington, D.C.: Congressional Research Service, 8 January 2014.
- David Leonard, "The Clutter Above", Bulletin of the Atomic Scientists, July/August 2005.
- Patrick McDaniel, "A Methodology for Estimating the Uncertainty in the Predicted Annual Risk to Orbiting Spacecraft from Current or Predicted Space Debris Population". National Defense University, 1997.
- "Interagency Report on Orbital Debris, 1995", National Science and Technology Council, November 1995.
- Nickolay Smirnov, Space Debris: Hazard Evaluation and Mitigation. Boca Raton, FL: CRC Press, 2002, ISBN 0-415-27907-0.
- Richard Talcott, "How We Junked Up Outer Space", Astronomy, Volume 36, Issue 6 (June 2008), pp. 40–43.
- "Technical report on space debris, 1999", United Nations, 2006. ISBN 92-1-100813-1.
- Robin Biesbroek (2015). Active Debris Removal in Space: How to Clean the Earth's Environment from Space Debris. CreateSpace. ISBN 978-1-5085-2918-7.
- Wikimedia Commons has media related to Space debris.
- Wikinews has related news: Out of space in outer space: Special report on NASA's 'space junk' plans
- Satview – Tracking Space Junk in real time
- NASA Orbital Debris Program Office
- ESA Space Debris Office
- "Space: the final junkyard", documentary film
- Would a Saturn-like ring system around planet Earth remain stable? Abdul Ahad
- EISCAT Space Debris during the international polar year
- Intro to mathematical modeling of space debris flux
- SOCRATES: A free daily service predicting close encounters on orbit between satellites and debris orbiting Earth
- A summary of current space debris by type and orbit
- Space Junk Astronomy Cast episode No. 82, includes full transcript
- Paul Maley's Satellite Page – Space debris (with photos)
- Space Debris Illustrated: The Problem in Pictures
- PACA: Space Debris
- IEEE – The Growing Threat of Space Debris
- The Threat of Orbital Debris and Protecting NASA Space Assets from Satellite Collisions
- Space Age Wasteland: Debris in Orbit Is Here to Stay; Scientific American; 2012
- United States Space Surveillance Network (http://www.idmarch.org/document/United%20States%20Space%20Surveillance%20Network)
- PATENDER: GMV's trailblazing low-gravity space-debris capture system
- Space Junk Infographic
- Project West Ford, an intentional placement of a large number of small copper metallic objects in medium Earth orbit (long lifetime) in the 1960s by the US government, resulting in a large amount of space debris and adverse effects on international relations.
An odds ratio (OR) is a statistic that quantifies the strength of the association between two events, A and B. The odds ratio is defined as the ratio of the odds of A in the presence of B and the odds of A in the absence of B, or equivalently (due to symmetry), the ratio of the odds of B in the presence of A and the odds of B in the absence of A. Two events are independent if and only if the OR equals 1, i.e., the odds of one event are the same in either the presence or absence of the other event. If the OR is greater than 1, then A and B are associated (correlated) in the sense that, compared to the absence of B, the presence of B raises the odds of A, and symmetrically the presence of A raises the odds of B. Conversely, if the OR is less than 1, then A and B are negatively correlated, and the presence of one event reduces the odds of the other event.
Note that the odds ratio is symmetric in the two events, and there is no causal direction implied (correlation does not imply causation): an OR greater than 1 does not establish that B causes A, or that A causes B.
Two similar statistics that are often used to quantify associations are the relative risk (RR) and the absolute risk reduction (ARR). Often, the parameter of greatest interest is actually the RR, which is the ratio of the probabilities analogous to the odds used in the OR. However, available data frequently do not allow for the computation of the RR or the ARR, but do allow for the computation of the OR, as in case-control studies, as explained below. On the other hand, if one of the properties (A or B) is sufficiently rare (in epidemiology this is called the rare disease assumption), then the OR is approximately equal to the corresponding RR.
The OR plays an important role in the logistic model.
Suppose a radiation leak in a village of 1,000 people increased the incidence of a rare disease. The total number of people exposed to the radiation was 400, of whom 20 developed the disease and 380 stayed healthy. The total number of people not exposed was 600, of whom 6 developed the disease and 594 stayed healthy. We can organize this in a contingency table:

                 Diseased   Healthy   Total
    Exposed            20       380     400
    Not exposed         6       594     600

The risk of developing the disease given exposure is 20/400 = 0.05, and of developing the disease given non-exposure is 6/600 = 0.01. One obvious way to compare the risks is to use the ratio of the two, the relative risk: RR = 0.05 / 0.01 = 5.0.

The odds ratio is different. The odds of getting the disease if exposed is 20/380 ≈ 0.053, and the odds if not exposed is 6/594 ≈ 0.010. The odds ratio is the ratio of the two: OR = (20/380) / (6/594) ≈ 5.2.

As illustrated by this example, in a rare-disease case like this, the relative risk and the odds ratio are almost the same. By definition, rare disease implies that 20 ≪ 380 and 6 ≪ 594. Thus, the denominators in the relative risk and odds ratio are almost the same (400 ≈ 380 and 600 ≈ 594).
Relative risk is easier to understand than the odds ratio, but one reason to use the odds ratio is that usually, data on the entire population is not available and random sampling must be used. In the example above, if it were very costly to interview villagers and find out whether they were exposed to the radiation, then the prevalence of radiation exposure would not be known, and neither would the two risks, 20/400 and 6/600. One could take a random sample of fifty villagers, but quite possibly such a random sample would not include anybody with the disease, since only 2.6% of the population are diseased. Instead, one might use a case-control study in which all 26 diseased villagers are interviewed as well as a random sample of 26 who do not have the disease. The results might turn out as follows ("might", because this is a random sample):

                 Diseased   Healthy
    Exposed            20        10
    Not exposed         6        16
The odds in this sample of getting the disease given that someone is exposed is 20/10, and the odds given that someone is not exposed is 6/16. The odds ratio is thus (20/10) / (6/16) = 16/3 ≈ 5.3, quite close to the odds ratio in the whole population. The relative risk, however, cannot be calculated, because it is the ratio of the risks of getting the disease, and we would need the total numbers of exposed and unexposed villagers (400 and 600) to figure those out. Because the study selected for people with the disease, half the people in the sample have the disease and it is known that that is more than the population-wide prevalence.
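To make the arithmetic concrete, here is a minimal Python sketch (the variable names are ours; the counts are those of the example tables above) that computes the population relative risk and odds ratio and the odds ratio from the case-control sample:

```python
# Whole-village counts (exposed vs. not exposed to the radiation leak).
exp_diseased, exp_healthy = 20, 380
unexp_diseased, unexp_healthy = 6, 594

risk_exposed = exp_diseased / (exp_diseased + exp_healthy)          # 20/400 = 0.05
risk_unexposed = unexp_diseased / (unexp_diseased + unexp_healthy)  # 6/600 = 0.01
relative_risk = risk_exposed / risk_unexposed                       # 5.0

odds_ratio = (exp_diseased / exp_healthy) / (unexp_diseased / unexp_healthy)  # ~5.2

# Case-control sample: all 26 diseased villagers plus 26 randomly chosen healthy ones.
sample_odds_ratio = (20 / 10) / (6 / 16)                            # 16/3 ~ 5.3

print(relative_risk, odds_ratio, sample_odds_ratio)
```

The printed values show the pattern described above: under the rare-disease assumption the odds ratio tracks the relative risk, and the case-control sample recovers roughly the same odds ratio even though the relative risk cannot be computed from it.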
It is standard in the medical literature to calculate the odds ratio and then use the rare-disease assumption (which is usually reasonable) to claim that the relative risk is approximately equal to it. This not only allows for the use of case-control studies, but makes controlling for confounding variables such as weight or age using regression analysis easier and has the desirable properties discussed in other sections of this article of invariance and insensitivity to the type of sampling.
The odds ratio is the ratio of the odds of an event occurring in one group to the odds of it occurring in another group. The term is also used to refer to sample-based estimates of this ratio. These groups might be men and women, an experimental group and a control group, or any other dichotomous classification. If the probabilities of the event in each of the groups are p1 (first group) and p2 (second group), then the odds ratio is:

OR = (p1/q1) / (p2/q2) = (p1 q2) / (p2 q1),
where qx = 1 − px. An odds ratio of 1 indicates that the condition or event under study is equally likely to occur in both groups. An odds ratio greater than 1 indicates that the condition or event is more likely to occur in the first group. And an odds ratio less than 1 indicates that the condition or event is less likely to occur in the first group. The odds ratio must be nonnegative if it is defined. It is undefined if p2q1 equals zero, i.e., if p2 equals zero or q1 equals zero.
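As a small illustration, this definition translates directly into code. The following is a sketch (the function name is ours), with a guard for the undefined case mentioned above:

```python
def odds_ratio(p1, p2):
    """Odds ratio for an event with probability p1 in group 1 and p2 in group 2."""
    q1, q2 = 1 - p1, 1 - p2
    if p2 * q1 == 0:                         # undefined when p2 = 0 or q1 = 0
        raise ValueError("odds ratio undefined: p2 * q1 == 0")
    return (p1 / q1) / (p2 / q2)             # equivalently (p1 * q2) / (p2 * q1)

print(odds_ratio(0.9, 0.2))                  # 36.0 -> the event is far more likely in group 1
print(odds_ratio(0.5, 0.5))                  # 1.0  -> equally likely in both groups
```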
The odds ratio can also be defined in terms of the joint probability distribution of two binary random variables. The joint distribution of binary random variables X and Y can be written as
P(X = 1, Y = 1) = p11, P(X = 1, Y = 0) = p10, P(X = 0, Y = 1) = p01, P(X = 0, Y = 0) = p00,
where p11, p10, p01 and p00 are non-negative "cell probabilities" that sum to one. The odds for Y within the two subpopulations defined by X = 1 and X = 0 are defined in terms of the conditional probabilities given X, i.e., P(Y | X): the odds of Y = 1 are p11 / p10 when X = 1, and p01 / p00 when X = 0.
Thus the odds ratio is
OR = (p11 / p10) / (p01 / p00) = (p11 p00) / (p10 p01).
The simple expression on the right, above, is easy to remember as the product of the probabilities of the "concordant cells" (X = Y) divided by the product of the probabilities of the "discordant cells" (X ≠ Y). However note that in some applications the labeling of categories as zero and one is arbitrary, so there is nothing special about concordant versus discordant values in these applications.
If we had calculated the odds ratio based on the conditional probabilities given Y, that is, using the odds p11 / p01 for X within the subpopulation Y = 1 and p10 / p00 within Y = 0, we would have obtained the same result:
OR = (p11 / p01) / (p10 / p00) = (p11 p00) / (p01 p10).
Other measures of effect size for binary data such as the relative risk do not have this symmetry property.
If X and Y are independent, their joint probabilities can be expressed in terms of their marginal probabilities px = P(X = 1) and py = P(Y = 1) as follows:
p11 = px py, p10 = px (1 − py), p01 = (1 − px) py, p00 = (1 − px)(1 − py).
In this case, the odds ratio equals one, and conversely the odds ratio can only equal one if the joint probabilities can be factored in this way. Thus the odds ratio equals one if and only if X and Y are independent.
The odds ratio is a function of the cell probabilities, and conversely, the cell probabilities can be recovered given knowledge of the odds ratio and the marginal probabilities P(X = 1) = p11 + p10 and P(Y = 1) = p11 + p01. If the odds ratio R differs from 1, then
p11 = (1 + (p1• + p•1)(R − 1) − S) / (2(R − 1)),
where p1• = p11 + p10, p•1 = p11 + p01, and
S = √( (1 + (p1• + p•1)(R − 1))² + 4R(1 − R) p1• p•1 ).
In the case where R = 1, we have independence, so p11 = p1•p•1.
Once we have p11, the other three cell probabilities can easily be recovered from the marginal probabilities.
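The following Python sketch applies the root formula above to recover the cell probabilities from an odds ratio and the two marginals, and checks the round trip; the function name cells_from_or_and_margins is purely illustrative, and the formula used is the reconstruction given above.

```python
import math

def cells_from_or_and_margins(R, p1_dot, p_dot1):
    """Recover the 2x2 cell probabilities from the odds ratio R and the
    marginals p1. = P(X = 1) and p.1 = P(Y = 1), using the root formula above."""
    if R == 1:                       # independence: the cells factor into the marginals
        p11 = p1_dot * p_dot1
    else:
        s = math.sqrt((1 + (p1_dot + p_dot1) * (R - 1)) ** 2
                      + 4 * R * (1 - R) * p1_dot * p_dot1)
        p11 = (1 + (p1_dot + p_dot1) * (R - 1) - s) / (2 * (R - 1))
    p10 = p1_dot - p11               # the other cells follow from the marginals
    p01 = p_dot1 - p11
    p00 = 1 - p11 - p10 - p01
    return p11, p10, p01, p00

# Round trip: start from known cells, compute R and the marginals, recover the cells.
p11, p10, p01, p00 = 0.4, 0.1, 0.1, 0.4
R = (p11 * p00) / (p10 * p01)
recovered = tuple(round(v, 6) for v in cells_from_or_and_margins(R, p11 + p10, p11 + p01))
print(round(R, 6), recovered)        # 16.0 (0.4, 0.1, 0.1, 0.4)
```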
Suppose that in a sample of 100 men, 90 drank wine in the previous week (so 10 did not), while in a sample of 80 women only 20 drank wine in the same period (so 60 did not). This forms the contingency table:

           Drank wine   Did not drink wine   Total
Men            90               10            100
Women          20               60             80
The odds ratio (OR) can be directly calculated from this table as
OR = (90 × 60) / (10 × 20) = 27.
Alternatively, the odds of a man drinking wine are 90 to 10, or 9:1, while the odds of a woman drinking wine are only 20 to 60, or 1:3 ≈ 0.33. The odds ratio is thus 9 ÷ (1/3), or 27, showing that men are much more likely to drink wine than women. The detailed calculation is
OR = (0.9 / 0.1) / (0.25 / 0.75) = 9 / (1/3) = 27.
This example also shows how odds ratios are sometimes sensitive in stating relative positions: in this sample men are (90/100)/(20/80) = 3.6 times as likely to have drunk wine as women, but have 27 times the odds. The logarithm of the odds ratio, the difference of the logits of the probabilities, tempers this effect, and also makes the measure symmetric with respect to the ordering of groups. For example, using natural logarithms, an odds ratio of 27/1 maps to 3.296, and an odds ratio of 1/27 maps to −3.296.
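A short Python sketch of the wine example, reproducing the probability ratio, the odds ratio, and the symmetric log odds ratio (the variable names are illustrative):

```python
import math

# Wine example above: 90 of 100 men and 20 of 80 women drank wine last week.
men_yes, men_no = 90, 10
women_yes, women_no = 20, 60

odds_men = men_yes / men_no                      # 9.0
odds_women = women_yes / women_no                # about 0.33
odds_ratio = odds_men / odds_women               # 27
risk_ratio = (men_yes / 100) / (women_yes / 80)  # 3.6

print(round(odds_ratio, 2), round(risk_ratio, 2))   # 27.0 3.6
print(round(math.log(odds_ratio), 3))               # 3.296, the log odds ratio
print(round(math.log(1 / odds_ratio), 3))           # -3.296, symmetric in the group order
```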
Several approaches to statistical inference for odds ratios have been developed.
One approach to inference uses large sample approximations to the sampling distribution of the log odds ratio (the natural logarithm of the odds ratio). If we use the joint probability notation defined above, the population log odds ratio is
log OR = log( (p11 p00) / (p10 p01) ).
If we observe data in the form of a contingency table with cell counts n11, n10, n01 and n00 (indexed the same way as the cell probabilities), then the probabilities in the joint distribution can be estimated as
p̂ij = nij / n,
where n = n11 + n10 + n01 + n00 is the sum of all four cell counts. The sample log odds ratio is
L = log( (n11 n00) / (n10 n01) ) = log( (p̂11 p̂00) / (p̂10 p̂01) ).
The distribution of the sample log odds ratio is approximately normal, with mean equal to the population log odds ratio. The standard error for the log odds ratio is approximately
SE = √( 1/n11 + 1/n10 + 1/n01 + 1/n00 ).
This is an asymptotic approximation, and will not give a meaningful result if any of the cell counts are very small. If L is the sample log odds ratio, an approximate 95% confidence interval for the population log odds ratio is L ± 1.96SE. This can be mapped to exp(L − 1.96SE), exp(L + 1.96SE) to obtain a 95% confidence interval for the odds ratio. If we wish to test the hypothesis that the population odds ratio equals one, the two-sided p-value is 2P(Z < −|L|/SE), where P denotes a probability, and Z denotes a standard normal random variable.
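A sketch of this large-sample recipe in Python; the cell counts here are made up for illustration, and scipy is assumed only for the standard normal tail probability:

```python
import math
from scipy import stats  # used only for the standard normal tail probability

# Cell counts n11, n10, n01, n00 (illustrative values, indexed as above).
n11, n10, n01, n00 = 20, 10, 10, 20

L = math.log((n11 * n00) / (n10 * n01))            # sample log odds ratio
SE = math.sqrt(1/n11 + 1/n10 + 1/n01 + 1/n00)      # approximate standard error

ci_or = (math.exp(L - 1.96 * SE), math.exp(L + 1.96 * SE))   # 95% CI for the odds ratio
p_value = 2 * stats.norm.cdf(-abs(L) / SE)                    # two-sided test of OR = 1

print(round(math.exp(L), 2), [round(v, 2) for v in ci_or], round(p_value, 4))
# 4.0 [1.37, 11.7] 0.0114
```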
An alternative approach to inference for odds ratios looks at the distribution of the data conditionally on the marginal frequencies of X and Y. An advantage of this approach is that the sampling distribution of the odds ratio can be expressed exactly.
Logistic regression is one way to generalize the odds ratio beyond two binary variables. Suppose we have a binary response variable Y and a binary predictor variable X, and in addition we have other predictor variables Z1, ..., Zp that may or may not be binary. If we use multiple logistic regression to regress Y on X, Z1, ..., Zp, then the estimated coefficient β̂x for X is related to a conditional odds ratio. Specifically, at the population level, exp(βx) is the odds ratio between Y and X conditional on Z1, ..., Zp,
so exp(β̂x) is an estimate of this conditional odds ratio. The interpretation of exp(β̂x) is as an estimate of the odds ratio between Y and X when the values of Z1, ..., Zp are held fixed.
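A hedged sketch of this idea on simulated data; it assumes the numpy and statsmodels packages as one possible tooling choice, and the variable names (y, x, z1) and true coefficient values are invented for illustration rather than taken from the text:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
z1 = rng.normal(size=n)                       # an extra covariate
x = rng.binomial(1, 0.5, size=n)              # binary predictor of interest
logit_p = -1.0 + 0.7 * x + 0.5 * z1           # true log odds; exp(0.7) is about 2.01
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

design = sm.add_constant(np.column_stack([x, z1]))
fit = sm.Logit(y, design).fit(disp=0)

beta_x = fit.params[1]                        # fitted coefficient on x
print(round(float(np.exp(beta_x)), 2))        # estimated conditional odds ratio, near 2
```

Exponentiating the fitted coefficient on x should land near the conditional odds ratio built into the simulation, which is the interpretation described above.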
If the data form a "population sample", then the cell probabilities are interpreted as the frequencies of each of the four groups in the population as defined by their X and Y values. In many settings it is impractical to obtain a population sample, so a selected sample is used. For example, we may choose to sample units with X = 1 with a given probability f, regardless of their frequency in the population (which would necessitate sampling units with X = 0 with probability 1 − f). In this situation, our data would follow the joint probabilities
P(X = 1, Y = 1) = f p11 / (p11 + p10), P(X = 1, Y = 0) = f p10 / (p11 + p10),
P(X = 0, Y = 1) = (1 − f) p01 / (p01 + p00), P(X = 0, Y = 0) = (1 − f) p00 / (p01 + p00).
The odds ratio p11p00 / p01p10 for this distribution does not depend on the value of f. This shows that the odds ratio (and consequently the log odds ratio) is invariant to non-random sampling based on one of the variables being studied. Note however that the standard error of the log odds ratio does depend on the value of f.
This fact is exploited in two important situations:
In both these settings, the odds ratio can be calculated from the selected sample, without biasing the results relative to what would have been obtained for a population sample.
Due to the widespread use of logistic regression, the odds ratio is widely used in many fields of medical and social science research. The odds ratio is commonly used in survey research, in epidemiology, and to express the results of some clinical trials, such as in case-control studies. It is often abbreviated "OR" in reports. When data from multiple surveys is combined, it will often be expressed as "pooled OR".
As explained in the "Motivating Example" section, the relative risk is usually better than the odds ratio for understanding the relation between risk and some variable such as radiation or a new drug. That section also explains that if the rare disease assumption holds, the odds ratio is a good approximation to relative risk and that it has some advantages over relative risk. When the rare disease assumption does not hold, the unadjusted odds ratio can overestimate the relative risk, but novel methods can easily use the same data to estimate the relative risk, risk differences, base probabilities, or other quantities.
If the absolute risk in the unexposed group is available, conversion between the two is calculated by
RR = OR / (1 − RC + RC × OR),
where RC is the absolute risk of the unexposed group.
If the rare disease assumption does not apply, the odds ratio may be very different from the relative risk and can be misleading.
Consider the death rate of men and women passengers when the Titanic sank. Of 462 women, 154 died and 308 survived. Of 851 men, 709 died and 142 survived. Clearly a man on the Titanic was more likely to die than a woman, but how much more likely? Since over half the passengers died, the rare disease assumption is strongly violated.
To compute the odds ratio, note that for women the odds of dying were 1 to 2 (154/308). For men, the odds were 5 to 1 (709/142). The odds ratio is 9.99 (4.99/.5). Men had ten times the odds of dying as women.
For women, the probability of death was 33% (154/462). For men the probability was 83% (709/851). The relative risk of death is 2.5 (.83/.33). A man had 2.5 times a woman's probability of dying.
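A quick numerical check in Python: the Titanic counts above give the stated odds ratio and relative risk, and plugging the odds ratio into the conversion formula quoted earlier (with women as the "unexposed" group) recovers the relative risk.

```python
# Titanic counts from the text: 154 of 462 women died, 709 of 851 men died.
women_died, women_total = 154, 462
men_died, men_total = 709, 851

rc = women_died / women_total                              # risk in the "unexposed" group (women)
rr = (men_died / men_total) / rc                           # relative risk
odds_ratio = (men_died / (men_total - men_died)) / (women_died / (women_total - women_died))

rr_from_or = odds_ratio / (1 - rc + rc * odds_ratio)       # conversion formula quoted earlier

print(round(rr, 2), round(odds_ratio, 2), round(rr_from_or, 2))   # 2.5 9.99 2.5
```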
Which number correctly represents how much more dangerous it was to be a man on the Titanic? Relative risk has the advantage of being easier to understand and of better representing how people think.
Odds ratios have often been confused with relative risk in medical literature. For non-statisticians, the odds ratio is a difficult concept to comprehend, and it gives a more impressive figure for the effect. However, most authors consider that the relative risk is readily understood. In one study, members of a national disease foundation were actually 3.5 times more likely than nonmembers to have heard of a common treatment for that disease – but the odds ratio was 24 and the paper stated that members were ‘more than 20-fold more likely to have heard of’ the treatment. A study of papers published in two journals reported that 26% of the articles that used an odds ratio interpreted it as a risk ratio.
This may reflect the simple process of uncomprehending authors choosing the most impressive-looking and publishable figure. But its use may in some cases be deliberately deceptive. It has been suggested that the odds ratio should only be presented as a measure of effect size when the risk ratio cannot be estimated directly, but with newly available methods it is always possible to estimate the risk ratio, which should generally be used instead.
The odds ratio has another unique property of being directly mathematically invertible, whether the OR is analyzed as disease survival or disease onset incidence: the OR for survival is the direct reciprocal, 1/OR, of the OR for risk. This is known as the 'invariance of the odds ratio'. In contrast, the relative risk does not possess this invertibility when studying disease survival vs. onset incidence. This phenomenon of OR invertibility vs. RR non-invertibility is best illustrated with an example:
Suppose in a clinical trial one has an adverse event risk of 4/100 in the drug group and 2/100 in the placebo group, yielding RR = 2 and OR = 2.04166 for drug-vs-placebo adverse risk. However, if the analysis were inverted and adverse events were instead analyzed as event-free survival, then the drug group would have a rate of 96/100 and the placebo group a rate of 98/100, yielding a drug-vs-placebo RR = 0.9796 for survival but an OR = 0.48979. As one can see, an RR of 0.9796 is clearly not the reciprocal of an RR of 2. In contrast, an OR of 0.48979 is indeed the direct reciprocal of an OR of 2.04166.
This is again the 'invariance of the odds ratio', and it is why an RR for survival is not the same as an RR for risk, while the OR has this symmetric property when analyzing either survival or adverse risk. The danger to clinical interpretation of the OR arises when the adverse event rate is not rare: the OR then exaggerates differences because the rare-disease assumption is not met. On the other hand, when the disease is rare, using an RR for survival (e.g. the RR = 0.9796 from the example above) can clinically hide and conceal an important doubling of adverse risk associated with a drug or exposure.
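A small Python sketch of the invertibility point, using the 4/100 vs 2/100 adverse event rates from the example; the helper functions are illustrative only.

```python
def odds_ratio(p1, p2):
    return (p1 / (1 - p1)) / (p2 / (1 - p2))

def relative_risk(p1, p2):
    return p1 / p2

drug_risk, placebo_risk = 0.04, 0.02          # adverse event risks from the example

rr_risk = relative_risk(drug_risk, placebo_risk)               # 2.0
or_risk = odds_ratio(drug_risk, placebo_risk)                  # about 2.04167

rr_survival = relative_risk(1 - drug_risk, 1 - placebo_risk)   # about 0.9796, not 1/2
or_survival = odds_ratio(1 - drug_risk, 1 - placebo_risk)      # about 0.48980 = 1/2.04167

print(round(rr_risk, 4), round(or_risk, 5))
print(round(rr_survival, 4), round(or_survival, 5), round(1 / or_risk, 5))
```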
The sample odds ratio n11n00 / n10n01 is easy to calculate, and for moderate and large samples performs well as an estimator of the population odds ratio. When one or more of the cells in the contingency table can have a small value, the sample odds ratio can be biased and exhibit high variance.
A number of alternative estimators of the odds ratio have been proposed to address limitations of the sample odds ratio. One alternative estimator is the conditional maximum likelihood estimator, which conditions on the row and column margins when forming the likelihood to maximize (as in Fisher's exact test). Another alternative estimator is the Mantel–Haenszel estimator.
The following four contingency tables contain observed cell counts, along with the corresponding sample odds ratio (OR) and sample log odds ratio (LOR):
OR = 1, LOR = 0
         Y = 1   Y = 0
X = 1      10      10
X = 0       5       5

OR = 1, LOR = 0
         Y = 1   Y = 0
X = 1     100     100
X = 0      50      50

OR = 4, LOR = 1.39
         Y = 1   Y = 0
X = 1      20      10
X = 0      10      20

OR = 0.25, LOR = −1.39
         Y = 1   Y = 0
X = 1      10      20
X = 0      20      10
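A short Python check of the four tables above, computing each sample odds ratio and log odds ratio from the cell counts:

```python
import math

# The four observed-count tables above, listed as (n11, n10, n01, n00).
tables = [
    (10, 10, 5, 5),
    (100, 100, 50, 50),
    (20, 10, 10, 20),
    (10, 20, 20, 10),
]

for n11, n10, n01, n00 in tables:
    sample_or = (n11 * n00) / (n10 * n01)
    print(round(sample_or, 2), round(math.log(sample_or), 2))
# 1.0 0.0 / 1.0 0.0 / 4.0 1.39 / 0.25 -1.39
```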
The following joint probability distributions contain the population cell probabilities, along with the corresponding population odds ratio (OR) and population log odds ratio (LOR):
OR = 1, LOR = 0
         Y = 1   Y = 0
X = 1     0.2     0.2
X = 0     0.3     0.3

OR = 1, LOR = 0
         Y = 1   Y = 0
X = 1     0.4     0.4
X = 0     0.1     0.1

OR = 16, LOR = 2.77
         Y = 1   Y = 0
X = 1     0.4     0.1
X = 0     0.1     0.4

OR = 0.67, LOR = −0.41
         Y = 1   Y = 0
X = 1     0.1     0.3
X = 0     0.2     0.4
The following worked example relates the odds ratio to the other summary measures commonly reported alongside it:

Quantity              Experimental group (E)         Control group (C)             Total
Events (E)            EE = 15                        CE = 100                      115
Non-events (N)        EN = 135                       CN = 150                      285
Total subjects (S)    ES = EE + EN = 150             CS = CE + CN = 250            400
Event rate (ER)       EER = EE / ES = 0.1, or 10%    CER = CE / CS = 0.4, or 40%   —

Derived quantity                            Abbreviation   Formula                         Value
Absolute risk reduction                     ARR            CER − EER                       0.3, or 30%
Number needed to treat                      NNT            1 / (CER − EER)                 3.33
Relative risk (risk ratio)                  RR             EER / CER                       0.25
Relative risk reduction                     RRR            (CER − EER) / CER, or 1 − RR    0.75, or 75%
Preventable fraction among the unexposed    PFu            (CER − EER) / CER               0.75
Odds ratio                                  OR             (EE / EN) / (CE / CN)           0.167
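The following Python sketch recomputes the quantities in the worked example directly from the four cell counts; the variable names mirror the abbreviations in the table.

```python
# Recomputing the worked-example quantities from the four cell counts above.
EE, EN = 15, 135          # experimental group: events, non-events
CE, CN = 100, 150         # control group: events, non-events

ES, CS = EE + EN, CE + CN           # group sizes: 150 and 250
EER, CER = EE / ES, CE / CS         # event rates: 0.1 and 0.4

ARR = CER - EER                     # absolute risk reduction
NNT = 1 / ARR                       # number needed to treat
RR = EER / CER                      # relative risk
RRR = ARR / CER                     # relative risk reduction
OR = (EE / EN) / (CE / CN)          # odds ratio

print(EER, CER)                                           # 0.1 0.4
print(round(ARR, 2), round(NNT, 2), round(RR, 2))         # 0.3 3.33 0.25
print(round(RRR, 2), round(OR, 3))                        # 0.75 0.167
```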
There are various other summary statistics for contingency tables that measure association between two events, such as Yule's Y, Yule's Q; these two are normalized so they are 0 for independent events, 1 for perfectly correlated, −1 for perfectly negatively correlated. Edwards (1963) studied these and argued that these measures of association must be functions of the odds ratio, which he referred to as the cross-ratio.
The Pearson correlation coefficient (also known as the “product-moment correlation coefficient”) is a measure of the linear association between two variables X and Y. It has a value between -1 and 1 where:
- -1 indicates a perfectly negative linear correlation between two variables
- 0 indicates no linear correlation between two variables
- 1 indicates a perfectly positive linear correlation between two variables
The Formula to Find the Pearson Correlation Coefficient
The formula to find the Pearson correlation coefficient, denoted as r, for a sample of data is (via Wikipedia):
r = Σ(xi − x̄)(yi − ȳ) / √( Σ(xi − x̄)² × Σ(yi − ȳ)² )
You will likely never have to compute this formula by hand since you can use software to do this for you, but it’s helpful to have an understanding of what exactly this formula is doing by walking through an example.
Suppose we have the following dataset of four (X, Y) pairs:
X: 2, 4, 6, 8
Y: 2, 4, 10, 12
If we plotted these (X, Y) pairs on a scatterplot, it would look like this:
Just from looking at this scatterplot we can tell that there is a positive association between variables X and Y: when X increases, Y tends to increase as well. But to quantify exactly how positively associated these two variables are, we need to find the Pearson correlation coefficient.
Let’s focus on just the numerator of the formula:
Σ(xi − x̄)(yi − ȳ)
For each (X, Y) pair in our dataset, we need to find the difference between the x value and the mean x value, the difference between the y value and the mean y value, then multiply these two numbers together.
For example, our first (X, Y) pair is (2, 2). The mean x value in this dataset is 5 and the mean y value in this dataset is 7. So, the difference between the x value in this pair and the mean x value is 2 – 5 = -3. The difference between the y value in this pair and the mean y value is 2 – 7 = -5. Then, when we multiply these two numbers together we get -3 * -5 = 15.
Here’s a visual look at what we just did:
Next, we just need to do this for every single pair:
The last step to get the numerator of the formula is to simply add up all of these values:
15 + 3 + 3 + 15 = 36
Next, the denominator of the formula tells us to find the sum of all the squared differences for both x and y, then multiply these two numbers together, then take the square root:
√( Σ(xi − x̄)² × Σ(yi − ȳ)² )
So, first we’ll find the sum of the squared differences for both x and y: 20 for x and 68 for y.
Then we’ll multiply these two numbers together: 20 * 68 = 1,360.
Lastly, we’ll take the square root: √1,360 = 36.88
So, we found the numerator of the formula to be 36 and the denominator to be 36.88. This means that our Pearson correlation coefficient is r = 36 / 36.88 = 0.976
This number is close to 1, which indicates that there is a strong positive linear relationship between our variables X and Y. This confirms the relationship that we saw in the scatterplot.
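A Python sketch of the same step-by-step calculation, using the four (X, Y) pairs from the worked example above:

```python
import math

# Step-by-step Pearson r for the small dataset above.
x = [2, 4, 6, 8]
y = [2, 4, 10, 12]

x_bar = sum(x) / len(x)   # 5
y_bar = sum(y) / len(y)   # 7

numerator = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))   # 36
ss_x = sum((xi - x_bar) ** 2 for xi in x)                              # 20
ss_y = sum((yi - y_bar) ** 2 for yi in y)                              # 68
denominator = math.sqrt(ss_x * ss_y)                                   # about 36.88

r = numerator / denominator
print(round(r, 3))   # 0.976
```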
Recall that a Pearson correlation coefficient tells us the type of linear relationship (positive, negative, none) between two variables as well as the strength of that relationship (weak, moderate, strong).
When we make a scatterplot of two variables, we can see the actual relationship between two variables. Here are the many different types of linear relationships we might see:
Strong, positive relationship: As the variable on the x-axis increases, the variable on the y-axis increases as well. The dots are packed tightly together, which indicates a strong relationship.
Pearson correlation coefficient: 0.94
Weak, positive relationship: As the variable on the x-axis increases, the variable on the y-axis increases as well. The dots are fairly spread out, which indicates a weak relationship.
Pearson correlation coefficient: 0.44
No relationship: There is no clear relationship (positive or negative) between the variables.
Pearson correlation coefficient: 0.03
Strong, negative relationship: As the variable on the x-axis increases, the variable on the y-axis decreases. The dots are packed tightly together, which indicates a strong relationship.
Pearson correlation coefficient: -0.87
Weak, negative relationship: As the variable on the x-axis increases, the variable on the y-axis decreases. The dots are fairly spread out, which indicates a weak relationship.
Pearson correlation coefficient: -0.46
Testing for Significance of a Pearson Correlation Coefficient
When we find the Pearson correlation coefficient for a set of data, we’re often working with a sample of data that comes from a larger population. This means that it’s possible to find a non-zero correlation for two variables even if they’re actually not correlated in the overall population.
For example, suppose we make a scatterplot for variables X and Y for every data point in the entire population and it looks like this:
Clearly these two variables are not correlated. However, it’s possible that when we take a sample of 10 points from the population, we choose the following points:
We may find that the Pearson correlation coefficient for this sample of points is 0.93, which indicates a strong positive correlation despite the population correlation being zero.
In order to test for whether or not a correlation between two variables is statistically significant, we can find the following test statistic:
Test statistic T = r * √(n-2) / √(1-r²)
where n is the number of pairs in our sample, r is the Pearson correlation coefficient, and test statistic T follows a t distribution with n-2 degrees of freedom.
Let’s walk through an example of how to test for the significance of a Pearson correlation coefficient.
The following dataset shows the height and weight of 12 individuals:
The scatterplot below shows the value of these two variables:
The Pearson correlation coefficient for these two variables is r = 0.836.
The test statistic T = .836 * √(12-2) / √(1-.836²) = 4.804.
According to our t distribution calculator, a t score of 4.804 with 10 degrees of freedom has a p-value of .0007. Since .0007 is less than the common significance level of .05, we conclude that the correlation between height and weight is statistically significant.
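A Python sketch of this significance test (scipy is assumed for the t-distribution tail probability); the tiny difference from the 4.804 reported above comes from rounding r to three decimals.

```python
import math
from scipy import stats

# Significance test for a sample Pearson correlation, as in the example above.
r, n = 0.836, 12

t_stat = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)    # test statistic
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 2)          # two-sided p-value

print(round(t_stat, 3), round(p_value, 4))               # approximately 4.818 0.0007
```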
While a Pearson correlation coefficient can be useful in telling us whether or not two variables have a linear association, we must keep three things in mind when interpreting a Pearson correlation coefficient:
1. Correlation does not imply causation. Just because two variables are correlated does not mean that one is necessarily causing the other to occur more or less often. A classic example of this is the positive correlation between ice cream sales and shark attacks. When ice cream sales increase during certain times of the year, shark attacks also tend to increase.
Does this mean ice cream consumption is causing shark attacks? Of course not! It just means that during the summer, both ice cream consumption and shark attacks tend to increase since ice cream is more popular during the summer and more people go in the ocean during the summer.
2. Correlations are sensitive to outliers. One extreme outlier can dramatically change a Pearson correlation coefficient. Consider the example below:
Variables X and Y have a Pearson correlation coefficient of 0.00. But imagine that we have one outlier in the dataset:
Now the Pearson correlation coefficient for these two variables is 0.878. This one outlier changes everything. This is why, when you calculate the correlation for two variables, it’s a good idea to visualize the variables using a scatterplot to check for outliers.
3. A Pearson correlation coefficient does not capture nonlinear relationships between two variables. Imagine that we have two variables with the following relationship:
The Pearson correlation coefficient for these two variables is 0.00 because they have no linear relationship. However, these two variables do have a nonlinear relationship: The y values are simply the x values squared.
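A quick Python illustration of this caveat: for x values symmetric around zero, y = x² has a Pearson correlation of zero with x even though the two variables are perfectly (nonlinearly) related.

```python
import numpy as np

# A quadratic relationship with no linear trend: r is zero even though
# y is completely determined by x.
x = np.array([-3, -2, -1, 0, 1, 2, 3], dtype=float)
y = x ** 2

r = np.corrcoef(x, y)[0, 1]
print(round(r, 10))   # 0.0
```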
When using the Pearson correlation coefficient, keep in mind that you’re merely testing to see if two variables are linearly related. Even if a Pearson correlation coefficient tells us that two variables are uncorrelated, they could still have some type of nonlinear relationship. This is another reason that it’s helpful to create a scatterplot when analyzing the relationship between two variables – it may help you detect a nonlinear relationship.
3 Figure 2.1 Market demand for tomatoes Demand, the assumed inverse relationship between price and quantity purchased, can be represented by a curve that slopes down toward the right. Here, as the price falls from $11 to zero, the number of bushels of tomatoes purchased per week rises from zero to 110,000.
4 Figure 2.2 Shifts in the demand curve An increase in demand is represented by a rightward, outward, shift in the demand curve, from D1to D2. A decrease in demand is represented by a leftward, or inward, shift in the demand curve, from D1 to D3.
5 Figure 2.3 Supply of tomatoes Supply, the assumed relationship between price and quantity produced, can be represented by a curve that slopes up toward the right. Here, as the price rises from zero to $11, the number of bushels of tomatoes offered for sale during the course of a week rises from zero to 110,000.
6 Figure 2.4 Shifts in the supply curve A rightward, or outward, shift in the supply curve, from S1 to S2, represents an increase in supply. A leftward, or inward, shift in the supply curve, from S1 to S3, represents a decrease in supply.
7 Figure 2.5 Market surplus If a price is higher than the intersection of the supply and demand curves, a market surplus – a greater quantity supplied, Q3, than demanded, Q1 – results. Competitive pressure will push the price down to the equilibrium price P1, the price at which the quantity supplied equals the quantity demanded (Q2).
8 Figure 2.6 Market shortages A price that is below the intersection of the supply and demand curves will create a shortage – a greater quantity demanded, Q3, than supplied, Q1. Competitive pressure will push the price up to the equilibrium price P2, the price at which the quantity supplied equals the quantity demanded (Q2).
9 Figure 2.7 The effects of changes in supply and demand An increase in demand – panel (a) – raises both the equilibrium price and the equilibrium quantity. A decrease in demand – panel (b) – has the opposite effect: a decrease in the equilibrium price and quantity. An increase in supply – panel (c) – causes the equilibrium quantity to rise but the equilibrium price to fall. A decrease in supply – panel (d) – has the opposite effect: a rise in the equilibrium price and a fall in the equilibrium quantity.
10 Figure 2.8 Price ceilings and floors A price ceiling Pc – panel (a) – will create a market shortage equal to Q2 – Q1. A price floor Pf – panel (b) – will create a market surplus equal to Q2 – Q1.
11 Figure 2.9 The efficiency of the competitive market Only those price–quantity combinations on or below the demand curve – panel (a) – are acceptable to buyers. Only those price–quantity combinations on or above the supply curve – panel (b) – are acceptable to producers. Those price–quantity combinations that are acceptable to both buyers and producers are shown in the darkest shaded area of panel (c). The competitive market is “efficient” in the sense that it results in output Q1, the maximum output level acceptable to both buyers and producers.
12 Figure 2.10 Consumer preference in television size Consumers differ in their wants, but most desire a medium-sized television. Only a few want a very small or a large television.
13 Figure 2.11 Long-run market for calculators With supply and demand for calculators at D1 and S1, the short-run equilibrium price and quantity will be P2 and Q1. As existing firms expand production and new firms enter the industry, the supply curve shifts to S2. Simultaneously, an increase in consumer awareness of the product shifts the demand curve to D2. The resulting long-run equilibrium price and quantity are P1 and Q2, respectively.
14 Figure 2.12 Prices in the long run If demand increases more than supply, the price will rise along with the quantity sold – panel (a). If supply keeps up with demand, however, the price will remain the same even though the quantity sold increases – panel (b).
15 Figure 2.13 Twisted pay scale The worker expects her productivity to rise along line A with years of service. If she starts work with less pay than she could earn elsewhere, then her career pay path could follow line B, representing greater increases in pay with time and greater productivity.
16 Figure 3.1 Constrained choice With a given amount of time and other resources, you can produce any combination of study and games along the curve E1 G1. The particular combination you choose will depend on your personal preferences for those two goods. You will not choose point x, because it represents less than you are capable of achieving – and, as a rational person, you will strive to maximize your utility. Because of constraints on your time and resources, you cannot achieve a point above E1 G1.
17 Figure 3.2 Change in constraints If your study skills improve and your ability at the game remains constant, your production possibilities curve will shift from E1G1 to E2G1. Both the number of chapters you can study and the number of games you can play will increase. On your old curve, E1G1, you could study two chapters and play four games (point a). On your new curve E2G1, you can study three chapters and play five games (point b).
18 Figure 3.3 Policy trade-offs of a negative income tax With a guaranteed income of SI1($5,000) and a break-even earned income level of EI1($10,000), the implicit marginal tax rate on the poor is 50 percent. If policy makers attempt to reduce the implicit tax rate by raising the break-even income level, however, the government’s poverty relief budget will rise by the shaded area SI1ab. A higher explicit tax burden will fall on a smaller group of taxpaying workers.
19 Figure 3.4 Maslow’s hierarchy of needs The pyramid orders human needs by broad categories from the most prepotent needs on the bottom to lesser and lesser prepotent needs as an individual moves up the pyramid. According to Maslow, an individual can be expected to satisfy her needs in the order of their prepotence, or will move from the bottom of the pyramid through the various levels to the top, so long as the individual’s resources to satisfy her needs last.
20 Figure 3.5(a) Demand, price, and need satisfaction The extent to which needs are satisfied depends, in the economists’ view of the world, on the nature of the need’s demand and its price. Physiological needs may indeed be more completely satisfied than other needs, but that may only be because physiological needs have relatively low prices (panel (a)). But then, as shown in this figure (panel (b)), the price of the means of satisfying physiological needs might be higher than the prices of the means of satisfying safety and love needs.
22 Figure 5.1 The economic effect of an excise tax An excise tax of $0.25 will shift the supply curve for margarine to the left, from S1 to S2. The quantity produced will fall from Q3 to Q2; the price will rise from P2 to P3. The increase, $0.20, however, will not cover the added cost to the producer, $0.25.
23 Figure 5.2 The effect of an excise tax when demand is more elastic than supply If demand is much more elastic than supply, the quantity purchased declines significantly when supply decreases from S1 to S2 in response to the added cost of the excise tax. Producers will lose $0.20; consumers will pay only $0.05 more.
24 Figure 5.3 The effect of price controls on supply If the supply of gasoline is reduced from S1 to S2, but the price is controlled at P1, a shortage equal to the difference between Q1 and Q2 will emerge.
25 Figure 5.4 The effect of rationing on demand Price controls can create a shortage. For instance, at the controlled price P1, a shortage of Q2 - Q1 gallons will develop. By issuing a limited number of coupons that must be used to purchase a product, the government can reduce demand and eliminate the shortage. Here, rationing reduces demand from D1 to D2, where demand intersects the supply curve at the controlled price.
26 Figure 5.5 The conventional view of the impact of the minimum wage When the minimum wage is set at Wm (and the market clearing wage is Wo), employment will fall from Q2 to Q1; simultaneously, the number of workers who are willing to work in this labor market will expand from Q2 to Q3. The market surplus is then Q3 - Q1.
27 Figure 5.6 An unconventional view of the impact of the minimum wage When the minimum wage is raised to Wm, a surplus is created equal to Q3 - Q1. As a consequence, employers can be expected to respond to the surplus by reducing fringe benefits or increasing work demands on workers. The supply curve of labor contracts, reflecting the greater wage the workers will demand to compensate for the reduction in fringe benefits or increase in work demands. The employers’ demand for labor increases, reflecting the higher wage they are willing to pay workers in terms of money wages who get fewer fringe benefits or work harder and produce more.
28 Figure 5.7 Marginal benefit versus marginal cost The demand curve reflects the marginal benefits of each loaf of bread produced. The supply curve reflects the marginal cost of producing each loaf. For each loaf of bread up to Q1, the marginal benefits exceed the marginal cost. The shaded area shows the maximum welfare that can be gained from the production of bread. When the market is at equilibrium (when supply equals demand), all those benefits will be realized.
29 Figure 5.8 External costs Ignoring the external costs associated with the manufacture of paper products, firms will base their production and pricing decisions on the supply curve S1. If they consider external costs, such as the cost of pollution, they will operate on the basis of the supply curve S2, producing Q1 instead of Q2 units. The shaded area abc shows the amount by which the marginal cost of production of Q2 – Q1 units exceeds the marginal benefits to consumers. It indicates the inefficiency of the private market when external costs are not borne by producers.
30 Figure 5.9 External benefits Ignoring the external benefits of getting flu shots, consumers will base their purchases on the demand curve D1 instead of D2. Fewer shots will be purchased than could be justified economically – Q1 instead of Q2. Because the marginal benefit of each shot between Q1 and Q2 (as shown by demand curve D2) exceeds its marginal cost of production, external benefits are not being realized. The shaded area abc indicates market inefficiency.
31 Figure 5.10 Is government action justified? Because of external costs, the market illustrated produces more than the efficient output. Market inefficiency, represented by the shaded triangular area abc, is quite small – so small that government intervention may not be justified on economic grounds alone.
32 Figure 5.11 Market for pollution rights Reducing pollution is costly (see table 5.1). It adds to the costs of production, increasing product prices and reducing the quantities of products demanded. Therefore firms have a demand for the right to avoid pollution abatement costs. The lower the price of such rights, the greater the quantity of rights that firms will demand. If the government fixes the supply of pollution rights at ten and sells those ten rights to the highest bidder, the price of the rights will settle at the intersection of the supply and demand curves – here, about $1,500.
33 Figure 6.1 External and internal coordinating costs As the firm expands, the internal coordinating costs increase as the external coordinating costs fall. The optimum firm size is determined by summing these two cost structures, which is done in panel (b) of the figure.
34 Figure 6.A1 Fringe benefits and the labor market If fringe benefits are more valuable to workers and impose a cost on the employers, the supply of labor will increase from S1 to S2 while the demand curve falls from D1 to D2. The wage rate falls from W1 to W2, but the workers get fringe benefits that have a value of ac, which means that their overall payment goes up from W1 to W3.
35 Figure 7.1 The law of demand Price varies inversely with the quantity consumed, producing a downward sloping curve such as this one. If the price of Coke falls from $1 to $0.75, the consumer will buy three Cokes instead of two.
36 Figure 7.2 Market demand curve The market demand curve for Coke, DA+B, is obtained by summing the quantities that individuals A and B are willing to buy at each and every price (shown by the individual demand curves DA and DB).
37 Figure 7.3 Elastic and inelastic demand Demand curves differ in their relative elasticity. Curve D1 is more elastic than curve D2, in the sense that consumers on curve D1 are more responsive to a given price change (P2 to P1) than are consumers on curve D2.
38 Figure 7.4 Changes in the elasticity coefficient The elasticity coefficient decreases as a firm moves down the demand curve. The upper half of a linear demand curve is elastic, meaning that the elasticity coefficient is greater than one. The lower half is inelastic, meaning that the elasticity coefficient is less than one. This means that the middle of the linear demand curve has an elasticity coefficient equal to one.
39 Figure 7.5 Perfectly elastic demand A firm that has many competitors may lose all its sales if it increases its price even slightly. Its customers can simply move to another producer. In that case, its demand curve is horizontal, with an elasticity coefficient of infinity.
40 Figure 7.6 Increase in demand When consumer demand for low-rise pants increases, the demand curve shifts from D1 to D2. Consumers are now willing to buy a larger quantity of low-rise pants at the same price, or the same quantity at a higher price. At price P1, for instance, they will buy Q3 instead of Q2. And they are now willing to pay P2 for Q2 low-rise pants, whereas before they would pay only P1.
41 Figure 7.7 Decrease in demand A downward shift in demand, from D1 to D2, represents a decrease in the quantity of low-rise pants consumers are willing to buy at each and every price. It also indicates a decrease in the price they are willing to pay for each and every quantity of low-rise pants. At price P2, for instance, consumers will now buy only Q1 low-rise pants (not Q3, as before); and they will now pay only P2 for Q1 low-rise pants – not P3, as before.
42 Figure 7.8 Network effects and demand As the price falls from P3 to P2, the quantity demanded in the short run rises from Q1 to Q2. However, sales build on sales, causing the demand in the future to expand outward to, say, D2. The lower the price in the current time period, the greater the expansion of demand in the future. The more the demand expands over time in response to greater sales in the current time period, the more elastic is the long-run demand.
43 Figure 7.9 Choosing between housing and bundles of other goods The budget line in Six Mile is A1H1 with an income of $100,000. The budget line in La Jolla is A1H3 with the same income. If the employer were to offer the engineer a salary of $152,000, which covers the additional cost of housing, the engineer’s budget line would be the thin line cutting A1H1 at a. Hence, the engineer could choose combination b and be better off than in Six Mile. This means that the employer can offer the engineer less than $152,000.
44 Figure 7A.1 Derivation of an indifference curve Because the consumer prefers more of a good to less, point a is preferable to point c, and point b is preferable to point a. If a is preferable to d but e is preferable to a, then when we move from point d to e, we must move from a combination that is less preferred to the one that is more preferred. In doing so, we must cross a point – for example, f – that is equal in value to a. Indifference curves are composed by connecting all those points – a, f, i, and so on – that are of equal value to the consumer.
45 Figure 7A.2 Indifference curves for pens and books Any combination of pens and books that falls along curve I1 will yield the same level of utility as any other combination on that curve. The consumer is indifferent among them. By extension, any combination on curve I2 will be preferable to any combination on curve I1.
46 Figure 7A.3 The budget line and consumer equilibrium Constrained by her budget, the consumer will seek to maximize her utility by consuming at the point where her budget line is tangent to an indifference curve. Here the consumer chooses point a, where her budget line just touches indifference curve I1. All other combinations on the consumer’s budget line will fall on a lower indifference curve, providing less utility. Point c, for instance, falls on indifference curve I2.
47 Figure 7A.4 Effect of a change in price on consumer equilibrium If the price of pens falls, the consumer’s budget line will pivot outward, from B1P1 to B1P2. As a result, the consumers can move to a higher indifference curve, I2 instead of I1. At the new price, the consumer buys more pens, twenty-two packs as opposed to fifteen.
48 Figure 7A.5 Derivation of the demand curve for pens When the price of pens changes, shifting the consumer’s budget line from B1P1 to B1P2 in figure 7A.4, the consumer equilibrium point changes with it, from a to c. The consumer’s demand curve for pens is obtained by plotting her equilibrium quantity of pens at various prices. At $5 a pack, the consumer buys fifteen packs of pens (point a). At $3 a pack, she buys twenty-two packages (point c).
49 Figure 7A.6 Budget line: cash grants vs. education subsidies If the price of education is reduced by an in-kind subsidy, a family’s budget line will pivot from H3E3 to H3E5. The family will move from point a to point b, where it can consume more food and housing. If the family is given the same subsidy in cash, its budget line will move from H3E3 to H4E4. Because the relative price of housing is lower on H4E4 than on H3E5, the family will choose a point such as d over b. Because b was the family’s preferred point on H3E5, but it prefers d to b on H4E4 which allows the purchase of b, we must presume that it also prefers cash to a food subsidy.
50 Figure 7B.1 Upward sloping demand? A good might have an upward sloping range, as described in panel (a), given that a price increase might convey greater value to consumers. However, there must be some higher price that will cause sales to contract, since many consumers will no longer be able to buy the good. This means that, beyond some price, P3 in panel (b), the demand curve must bend backwards and, thus, must have a downward sloping range. The downward sloping range of the curve in panel (b) is the relevant range. If the seller is at combination b, then there is some combination such as d in the downward sloping range of the entire demand curve that is more profitable than combination b.
51 Figure 7B.2 Demand including irrational behavior If irrational consumers demand Q1 cigarettes no matter what the price, but rational consumers take price into consideration, market demand will be D1. The quantity purchased will still vary inversely with the price.
52 Figure 7B.3 Random behavior, budget lines, and downward sloping demand curves If a number of buyers are faced initially with budget line A1B1 and behave randomly, they will buy an average quantity of A2B2. If the price of A increases while the price of B decreases, the budget line pivots on a, causing buyers to purchase on average more of B (B4) and less of A (A4). Thus, quantity changes in the direction predicted by the law of demand (in spite of the absence of rational behavior).
53 Figure 7B.4 Random behavior and the demand curve as a “band” If buyers randomly purchase anywhere from Q1 to Q2 when the price is P1 and anywhere from Q2 to Q3 when the price is P2, then they will tend to increase their average quantity purchased from Q4 to Q5 when the price falls from P1 to P2.
54 Figure 8.1 Rising marginal cost To produce each new watercolor, Jan must give up an opportunity more valuable than the last. Thus the marginal cost of her paintings rises with each new work.
55 Figure 8.2 The law of diminishing marginal returns As production expands with the addition of new workers, efficiencies of specialization initially cause marginal cost to fall. At some point, however – here, just beyond two bushels – marginal cost will begin to rise again. At that point, marginal returns will begin to diminish and marginal costs will begin to rise.
56 Figure 8.3 Costs and benefits of fishing For each fish up to the fifth one, Gary receives more in benefits than he pays in costs. The first fish gives him $4.67 in benefits (point a) and costs him only $1 (point b). The fifth yields equal costs and benefits (point c), but the sixth costs more than it is worth. Therefore Gary will catch no more than five fish.
57 Figure 8.4 Accident prevention Given the increasing marginal cost of preventing accidents and the decreasing marginal value of preventing the accidents, c or 5 accidents will be prevented.
58 Figure 8.5 Total, average, and marginal product curves The total product curve shows how output changes when the amount of the variable input, labor, changes. Total product rises first at an increasing rate (0–five workers), then at a decreasing rate (five–fifteen workers), before declining (beyond fifteen workers). The marginal and average product curves reflect what is happening to total product. Marginal product rises when total product is rising at an increasing rate and falls when total product is rising at a decreasing rate. Marginal product is positive when total product is rising and negative when total product is falling.
59 Figure 8.6 Marginal costs and maximization of profit At price P1 (panel (a)), this firm’s marginal revenue, represented by the area under P1 up to Q1, exceeds its marginal cost up to the output level of Q1. At that point total profit, shown in panel (b), peaks (point a). At price P2, marginal revenue exceeds marginal cost up to an output level of Q2. The increase in price shifts the profit curve in panel (b) upward, from TP1 to TP2, and profits peak at b.
60 Figure 8.7 Market supply curve The market supply curve (SA+B) is obtained by adding together the amount producers A and B are willing to offer at each and every price, as shown by the individual supply curves SA and SB. (The individual supply curves are obtained from the upward sloping portions of the firms’ marginal cost curve.)
61 Figure 9.1 Total fixed costs, total variable costs, and total costs in the short run Total fixed cost does not vary with production; therefore, it is drawn as a horizontal line. Total variable cost does rise with production. Here it is represented by the shaded area between the total cost and total fixed cost curves.
62 Figure 9.2 Marginal and average costs in the short run The average fixed cost curve (AFC) slopes downward and approaches, but never touches, the horizontal axis. The average variable cost curve (AVC) and the average total cost curve (ATC) are mathematically related to the marginal cost curve; the marginal cost curve (MC) intersects each of them at that curve’s lowest point. The vertical distance between the average total cost curve (ATC) and the average variable cost curve equals the average fixed cost at any given output level. There is no relationship between the MC and AFC curves.
63 Figure 9.3 Economies of scale Economies of scale are cost savings associated with the expanded use of resources. To realize such savings, however, a firm must expand its output. Here the firm can lower its costs by expanding production from q1 to q2 – a scale of operation that places it on a lower short-run average total cost curve (ATC2 instead of ATC1).
64 Figure 9.4 Diseconomies of scale Diseconomies of scale may occur because of the communication problems of larger firms. Here the firm realizes economies of scale through its first short-run average total cost curves. The long-run average cost curve begins to turn up at an output level of q1, beyond which diseconomies of scale set in.
65 Figure 9.5 Marginal and average cost in the long run The long-run marginal and average cost curves are mathematically related. The long-run average cost curve slopes downward as long as it is above the long-run marginal cost curve. The two curves intersect at the low point of the long-run average cost curve.
66 Figure 9.6a Individual differences in long-run average cost curves The shape of the long-run average cost curve varies according to the extent and persistence of economies and diseconomies of scale. Firms in industries with few economies of scale will have a long-run average cost curve like the one in panel (a). Firms in industries with persistent economies of scale will have a long-run average cost curve like the one in panel (b), and firms in industries with extensive economies of scale may find that their long-run average cost curve slopes continually downward, as in panel (c).
69 Figure 9.7 Shifts in average and marginal cost curves An increase in a firm’s variable cost (panel (a)) will shift the firm’s average total cost curve up, from ATC1 to ATC2. It will also shift the marginal cost curve, from MC1 to MC2. Production will fall because of the increase in marginal cost. By contrast, an increase in a firm’s fixed cost (panel (b)) will shift the average total cost curve upward from ATC1 to ATC2, but will not affect the marginal cost curve. (Marginal cost is unaffected by fixed cost.) Thus the firm’s level of production will not change.
70 Figure 9A.1 Single isoquant A firm can produce 100 pairs of jeans a day using any of the various combinations of labor and machinery shown on this curve. Because of diminishing marginal returns, more and more machines must be substituted for each worker who is dropped.
71 Figure 9A.2 Several isoquants Different output levels will have different isoquants. The higher the output level, the higher the isoquant.
72 Figure 9A.3 Finding the most efficient combination of resources Assuming that the daily wage of each worker is $100, and the daily rental on each sewing machine is $20, an expenditure of $600 per day will buy any combination of resources on isocost curve IC1. The most cost-effective combination of labor and capital is point a, three workers and fifteen machines. At that point, the isocost curve is just tangent to isoquant IQ2, meaning that the firm can produce 150 pairs of jeans a day. If the firm chooses any other combination, it will move to a lower isoquant and a lower output level. At point b (on isoquant IQ1), it will be able to produce only 100 pairs of jeans a day.
73 Figure 9A.4 The effect of increased expenditures on resources An increase in the level of expenditures on resources shifts the isocost curve outward from IC1 to IC2. The firm’s most efficient combination of resources shifts from point a to point c.
74 Figure 10.1 Demand curve faced by perfect competitors The market demand for a product (panel (a)) is always downward sloping. The perfect competitor is on a horizontal, or perfectly elastic, demand curve (panel (b)). It cannot raise its price above the market price even slightly without losing its customers to other producers.
75 Figure 10.2 Demand curve faced by a monopolistic competitor Because the product sold by the monopolistically competitive firm is slightly different from the products sold by competing producers, the firm faces a highly elastic, but not perfectly elastic, demand curve.
76 Figure 10.3 The perfect competitor’s production decision The perfect competitor’s price is determined by market supply and demand (panel (a)). As long as marginal revenue (MR), which equals market price, exceeds marginal cost (MC), the perfect competitor will expand production (panel (b)). The profit maximizing production level is the point at which marginal cost equals marginal revenue (price).
77 Figure 10.4 Change in the perfect competitor’s market price If the market demand rises from D1 to D3 (panel (a)), the price will rise with it, from P1 to P3. As a result, the perfectly competitive firm’s demand curve will rise, from d1 to d3 (panel (b)).
78 Figure 10.5 The profit maximizing perfect competitor The perfect competitor’s demand curve is established by the market clearing price (panel (a)). The profit maximizing perfect competitor will extend production up to the point at which marginal cost equals marginal revenue (price), or point a in panel (b). At that output level – q2 – the firm will earn a short-run economic profit equal to the shaded area ATC1P1 ab. If the perfect competitor were to minimize average total cost, it would produce only q1, losing profits equal to the darker shaded area dca in the process.
79 Figure 10.6 The loss minimizing perfect competitor The market clearing price (panel (a)) establishes the perfect competitor’s demand curve (panel (b)). Because the price is below the average total cost curve, this firm is losing money. As long as the price is above the low point of the average variable cost curve, however, the firm should minimize its short-run losses by continuing to produce where marginal cost equals marginal revenue (price or point b in panel (b)). This perfect competitor should produce q1 units, incurring losses equal to the shaded area P1ATC1ab. (The alternative would be to shut down, in which case the firm would lose all its fixed costs.)
80 Figure 10.7 The long-run effects of short-run profits If perfect competitors are making short-run profits, other producers will enter the market, increasing the market supply from S1 to S2 and lowering the market price from P2 to P1 (panel (a)). The individual firm’s demand curve, which is determined by market price, will shift down, from d1 to d2 (panel (b)). The firm will reduce its output from q2 to q1, the new intersection of marginal revenue (price) and marginal cost. Long-run equilibrium will be achieved when the price falls to the low point of the firm’s average total cost curve, eliminating economic profit (price P1 in panel (b)).
81 Figure 10.8 The long-run effects of short-run losses If perfect competitors are suffering short-run losses, some firms will leave the industry, causing the market supply to shift back from S1 to S2 and the price to rise, from P1 to P2 (panel (a)). The individual firm’s demand curve will shift up with price, from d1 to d2 (panel (b)). The firm will expand from q1 to q2, and equilibrium will be reached when price equals the low point of average total cost P2, eliminating the firm’s short-run losses.
82 Figure 10.9 The long-run effects of economies of scale If the market is in equilibrium at price P1 in panel (a) and the individual firm is producing q1 units on short-run average total cost curve ATC1 (panel (b)), firms will be just breaking even. Because of the profit potential represented by the shaded area ATC1P2ab, firms can be expected to expand production to q3, where the long-run marginal cost curve intersects the demand curve (d1). As they expand production to take advantage of economies of scale, however, supply will expand from S1 to S2 in panel (a), pushing the market price down toward P1, the low point of the long-run average total cost curve (LRATC in panel (b)). Economic profit will fall to zero. Because of rising diseconomies of scale, firms will not expand further.
83 Figure 10.10 The efficiency of the competitive market Perfectly competitive markets are efficient in the sense that they equate marginal benefit (shown by the demand curve in panel (a)) with marginal cost (shown by the supply curve in panel (a)). At the market output level, Q1, the marginal benefit of the last unit produced equals the marginal cost of production. The gains generated by the production of Q1 units – that is, the difference between cost and benefits – are shown by the shaded area in panel (a). The perfectly competitive market is also efficient in the sense that the marginal cost of production, P1, is the same for all firms (panels (b) and (c)). If firm X were to produce fewer than its efficient number of units, qx, firm Y would have to produce more than its efficient number, qy, to meet market demand. Firm Y would be pushed up its marginal cost curve, to the point at which the cost of the last unit would exceed its benefits. But competition forces the two firms to produce to exactly the point at which marginal cost equals marginal benefit, thus minimizing the cost of production.
84 Figure Supply and demand cobweb Markets do not always move smoothly toward equilibrium. If current production decisions are based on past prices, price may adjust to supply in the “cobweb pattern” shown here. Having received price P1 in the past, farmers will plan to supply only Q1 bushels of wheat. That amount will not meet market demand, so the price will rise to P4 – inducing farmers to plan for a harvest of Q3 bushels. At price P4, however, Q3 bushels will not clear the market. The price will fall to P2, encouraging farmers to cut production back to Q2. Only after several tries do many farmers find the equilibrium price–quantity combination.
85 Figure 10A.1 A contestable market The market is composed of three firms, each producing output q*, which minimizes average costs. Total industry output is Q* = 3q*. Any attempt by the three firms to reduce output and increase market price will lead to entry by new firms and the dissipation of profits.
86 Figure 11.1 The monopolist’s demand and marginal revenue curves The demand curve facing a monopolist slopes downward, for it is the same as market demand. The monopolist’s marginal revenue curve is constructed from the information contained in the demand curve (see table 11.1).
87 Figure 11.2 Equating marginal cost with marginal revenue The monopolist will move toward production level Q2, the level at which marginal cost equals marginal revenue. At production levels below Q2, marginal revenue will exceed marginal cost; the monopolist will miss the chance to increase profits. At production levels greater than Q2, marginal cost will exceed marginal revenue; the monopolist will lose money on the extra units.
88 Figure 11.3 The monopolist’s profits The profit maximizing monopoly will produce at the level defined by the intersection of the marginal cost and marginal revenue curves: Q1. It will charge a price of P1 – as high as market demand will bear – for that quantity. Because the average total cost of producing Q1 units is ATC1, the firm’s profit is the shaded area ATC1P1ab.
89 Figure 11.4 The monopolist’s short-run Losses Not all monopolists make a profit. With a demand curve that lies below its average total cost curve, this monopoly will minimize its short-run losses by continuing to produce at the point where marginal cost equals marginal revenue (Q1 units). It will charge P1, a price that covers its fixed costs, and will sustain short-run losses equal to the shaded area P1ATC1ab.
90 Figure 11.5 Monopolistic production over the long run In the long run, the monopolist will produce at the intersection of the marginal revenue and long-run marginal cost curves (panel (a)). In contrast to the perfect competitor, the monopolist does not have to minimize long-run average cost by expanding its scale of operation. It can make more profit by restricting production to Qa and charging price Pa. In panel (b), the monopolist produces at the low point of the long-run average cost curve only because that happens to be the point at which marginal cost and marginal revenue curves intersect. In panel (c), the monopolist produces on a scale beyond the low point of its long-run average cost curve because demand is high enough to justify the cost. In each case, the monopolist charges a price higher than its long-run marginal cost.
91 Figure 11.6 The comparative efficiency of monopoly and competition Firms in a competitive market will tend to produce at point b, the intersection of the marginal cost and demand curves (with the price, or marginal benefit given by the height of the demand curve). Monopolists will tend to produce at point c, the intersection of marginal cost and marginal revenue, and to charge the highest price the market will bear: Pm. In a competitive market, therefore, the price will tend to be lower (Pc) and the quantity produced greater (Qc) than in a monopolistic market. The inefficiency of monopoly is shown by the shaded triangular area abc, the amount by which the benefits of producing Qc - Qm units (shown by the demand curve) exceed their marginal cost of production.
92 Figure 11.7 The costs and benefits of expanded production If the monopolist expands production from Qm to Qc in panel (a), consumers will receive additional benefits equal to the area bounded by QmabQc. They will pay an amount equal to the area QmcdQc for those benefits, leaving a net benefit equal to the shaded area abdc. To expand production, the monopoly must incur additional production costs equal to the area QmcbQc in panel (b). It gains additional revenues equal to the area QmcdQc, leaving a net loss equal to the shaded area cbd. Thus, expanded production helps the consumer but hurts the monopolist.
93 Figure 11.8 Price discrimination By offering customers one can of beans for $0.30, two cans for $0.55, and three cans for $0.75, a grocery store collects more revenues than if it offers three cans for $0.20 each. In either case, the consumer buys three cans. But by making the special offer, the store earns $0.15 more in revenues per customer.
94 Figure 11.9 Perfect price discrimination The perfect price-discriminating monopolist will produce at the point where marginal cost and marginal revenue are equal (point a). Its output level, Qc is therefore the same as that achieved under perfect competition. But because the monopolist charges as much as the market will bear for each unit, its profits – the shaded area ATC1P1ab – are higher than the competitive firm’s.
95 Figure Imperfect price discrimination The monopolist that cannot perfectly price-discriminate may elect to charge a few different prices by segmenting its market. To do so, it divides its market by income, location, or some other factor and finds the demand and marginal revenue curves in each (panels (a) and (b)). Then it adds those marginal revenue curves horizontally to obtain its combined marginal revenue curve for all market segments, MRm (panel (c)). By equating marginal revenue with marginal cost, it selects its output level, Qm. Then it divides that quantity between the two market segments by equating the marginal cost of the last unit produced (panel (c)) with marginal revenue in each market (panels (a) and (b)). It sells Qa in market A and Qb in market B, and charges different prices in each segment. Generally, the price will be higher in the market segment with the less elastic demand (panel (b)).
96 Figure The effect of price controls on the monopolistic production decision In an unregulated market, a monopolistic utility will produce Qm kilowatts and sell them for Pm. If the firm’s price is controlled at P1, however, its marginal revenue curve will become horizontal at P1. The firm will produce Q1 – more than the amount it would normally produce.
97 Figure Taxing monopoly profits Theoretically, a tax on the economic profit of monopoly will not be passed on to the consumer – but taxes are levied on book profit, not economic profit. As a result, a tax shifts the firm’s marginal cost curve up, from MC1 to MC2, raising the price to the consumer and lowering the production level.
98 Figure 11A.1 Construction of the linear marginal revenue curve The marginal revenue curve always starts at the intersection of the vertical axis and any demand curve. However, for a linear demand curve, the marginal revenue curve must slope downward under the demand curve, splitting the horizontal distance between the vertical axis and every point on the demand curve. The marginal revenue curve must cut the horizontal axis at the point below the middle of the linear demand curve, or where the elasticity coefficient equals one.
99 Figure 11A.2 Construction of the nonlinear marginal revenue curve The marginal revenue curve for a nonlinear demand curve is obtained by imagining linear demand curves tangent to every point on the nonlinear demand curve and finding the midpoint between the vertical axis and the imagined linear demand curves.
100 Figure 12.1 Monopolistic competition in the short run As do all profit maximizing firms, the monopolistic competitor will equate marginal revenue with marginal cost. It will produce Qmc units and charge price Pmc, only slightly higher than the price under perfect competition. The monopolistic competitor makes a short-run economic profit equal to the area ATC1Pmcab. The inefficiency of its slightly restricted production level is represented by the shaded area.
101 Figure 12.2 Monopolistic competition in the long run In the long run, firms seeking profits will enter the monopolistically competitive market, shifting the monopolistic competitor’s demand curve down from D1 to D2 and making it more elastic. Equilibrium will be achieved when the firm’s demand curve becomes tangent to the downward sloping portion of the firm’s long-run average cost curve Qm. At that point, price (shown by the demand curve) no longer exceeds average total cost; the firm is making zero economic profit. Unlike the perfect competitor, this firm is not producing at the minimum of the long-run average total cost curve Qm. In that sense, it is underproducing, by Qm - Qmc2 units. This underproduction is also reflected in the fact that the price is greater than the marginal revenue.
102 Figure 12.3 The oligopolist as monopolist With fewer competitors than the monopolistic competitor deals with, the oligopolist faces a less elastic demand curve, Do. Each oligopolist can afford to produce significantly less (Qo) and to charge significantly more (Po) than the perfect competitor, who produces Qc at a price of Pc. The shaded area representing inefficiency is larger than that of a monopolistic competitor.
103 Figure 12.4 The oligopolist as price leader The dominant producer who acts as a price leader will attempt to undercut the market price established by small producers (panel (a)). At price P1 the small producers will supply the demand of the entire market, Q2. At a lower price – Pd or Pc – the market will demand more than the small producers can supply. In panel (b), the dominant firm determines its demand curve by plotting the quantity it can sell at each price in panel (a). Then it determines its profit maximizing output level, Qd, by equating marginal cost with marginal revenue. It charges the highest price the market will bear for that quantity, Pd, forcing the market price down to Pd in panel (a). The dominant producer sells Q3 – Q1 units, and the smaller producers supply the rest.
104 Figure 12.5 A duopoly (two-member cartel) In an industry composed of two firms of equal size, firms may collude to restrict total output to Qm and sell at a price of Pm. Having established that price–quantity combination, however, each has an incentive to chisel on the collusive agreement by lowering the price slightly. For example, if one firm charges P1, it can take the entire market, increasing its sales from Q1 to Q2. If the other firm follows suit to protect its market share, each will get a lower price, and the cartel may collapse.
105 Figure 12.6 Long-run marginal and average costs in a natural monopoly In a natural monopoly, long-run marginal cost and average costs decline continuously, over the relevant range of production, because of economies of scale. Although the long-run marginal and average cost curves may eventually turn upward because of diseconomies of scale, the firm’s market is not large enough to support production in that cost range.
106 Figure 12.7 Creation of a natural monopoly Even with declining marginal costs, the firm with monopoly power will produce at the point where marginal cost equals marginal revenue, making Qm units and charging a price of Pm. Unless barriers to entry exist, however, other firms may enter the market, causing the price to fall toward P1 and the quantity produced to rise toward Q1. At that price– quantity combination, only one firm can survive – but without barriers to entry, that firm cannot afford to charge monopoly prices. At a price of P1, its total revenues just cover its total costs. Economic profit is zero.
107 Figure 12.8 Underproduction by a natural monopoly A natural monopolist that cannot price discriminate will produce only Q1 megawatts – less than Q2, the efficient output level – and will charge a price of P1. If the firm tries to produce Q2, it will make losses equal to the shaded area, for its price (P2) will not cover its average cost (AC1).
108 Figure 12.9 Regulation and increasing costs If a natural monopoly is compensated for the losses it incurs in operating at the efficient output level (the shaded area P1ATC1ba), it may monitor its costs less carefully. Its cost curves may shift up, from LRMC1 to LRMC2 and from LRAC1 to LRAC2. Regulators will then have to raise the price from P1 to P2, and production will fall from Q1 to Q2. The firm will still have to be subsidized (by an amount equal to the shaded area P2ATC2dc), and the consumer will be paying more for less.
109 Figure The effect of regulation on a cartelized industry The profit maximizing cartel will equilibrate at point a and produce only Qm units and sell at a price of Pm. In the sense that consumers want Qc units and are willing to pay more than the marginal cost of production for them, Qm is an inefficient production level. Under pure competition, the industry will produce at point b. Regulation can raise output and lower the price, ideally to Pc, thereby eliminating the dead-weight welfare loss that is equal to the shaded triangle abc and which results from monopolistic behavior.
110 Figure 13.1 Shift in demand for labor The demand for labor, as with all other demand curves, slopes downward. An increase in the demand for labor will cause a rightward shift in the demand curve, from D1 to D2. A decrease will cause a leftward shift, to D3.
111 Figure 13.2 Shift in the supply of labor The supply curve for labor slopes upward. An increase in the supply of labor will cause a rightward shift in the supply curve from S1 to S2. A decrease in the supply of labor will cause a leftward shift in the supply curve, from S1 to S3.
112 Figure 13.3 Equilibrium in the labor market Given the supply and demand curves for labor S and D, the equilibrium wage will be W1 and the equilibrium quantity of labor hired Q2. If the wage rate rises to W2, a surplus of labor will develop, equal to the difference between Q3 and Q1.
113 Figure 13.4 The effect of nonmonetary rewards on wage rates The supply of labor is greater for jobs offering nonmonetary benefits – S2 rather than S1. Given a constant demand for labor, the wage rate will be W2 for workers who do not receive nonmonetary benefits and W1 for workers who do. Even though wages are lower when nonmonetary benefits are offered, workers are still better off; they earn a total wage equal, according to their own values, to W3.
114 Figure 13.5 The effect of differences in supply and demand on wage rates In competitive labor markets, higher demand for labor (D2 in panel (a)) will bring a higher wage rate. A higher supply of labor (S2 in panel (b)) will bring a lower wage rate.
115 Figure 13.6 The competitive labor market In a competitive market, the equilibrium wage rate will be W2. Lower wage rates, such as W1, would create a shortage of labor, and employers would compete for the available laborers by offering a higher wage. In pushing up the wage rate to the equilibrium level, employers impose costs on one another. They must pay higher wages not only to new employees but also to all current employees, in order to keep them.
116 Figure 13.7 The marginal cost of labor The marginal cost of hiring additional workers is greater than the wages that must be paid to the new workers. Therefore the marginal cost of labor curve lies above the labor supply curve.
117 Figure 13.8 The monopsonist The monopsonist will hire up to the point at which the marginal value of the last worker, shown by the demand curve for labor, equals his or her marginal cost. For this monopsonistic employer, the optimum number of workers is Q2. The monopsonist must pay only W1 for that number of workers – less than the competitive wage level, W2.
118 Figure 13.9 The employer cartel To achieve the same results as a monopsonist, the employer cartel will devise restrictive employment rules that artificially reduce market demand to D2. The reduced demand allows cartel members to hire only Q2 workers at wage W1 – significantly less than the competitive wage, W2.
119 Figure Menu of two-part pay packages By varying the base salary and the commission rate, employers can get salespeople to reveal more accurately the sales potential of their districts. A salesperson who believes that the sales potential of his district is great will take the income path that starts at a base salary of S3. The salesperson who doesn’t think the sales potential of his district is very good will choose the income path that starts at S1.
120 Figure 14.1 The political spectrum A political candidate who takes a position in the “wings” of a voter distribution, such as D1 or R1, will win fewer votes than a candidate who moves toward the middle of the distribution. In a two-party election, therefore, both candidates will take middle-of-the-road positions, such as D and R.
121 Figure 14.2 Bureaucratic profit maximization Given the demand for police service, D, and the marginal cost of providing it, MC, the optimum quantity of police service is Q2. A monopolistic police department interested in maximizing its profits will supply only Q1 service at a price of P2, however. (A monopolistic bureaucracy interested in maximizing its size would expand police service to Q3.)
122 Figure 15.1 Gains from the export trade Opening up foreign markets to US producers increases the demand for their products, from D1 to D2. As a result, domestic producers can raise their price from P1 to P2 and sell a larger quantity, Q3 instead of Q2. Revenues increase by the shaded area P2bQ3Q2aP1. The more price-elastic or flatter the supply function (S), the larger the change in quantity and the smaller the change in price.
123 Figure 15.2 Losses from competition with imported products Opening up the market to foreign trade increases the supply of textiles from S1 to S2. As a result, the price of textiles falls from P2 to P1, and domestic producers sell a lower quantity, Q1 instead of Q2. Consumers benefit from the lower price and the higher quantity of textiles they are able to buy, but domestic producers, workers, and suppliers lose. Producers’ revenues drop by an amount equal to the shaded area P2aQ2Q1bP1. Workers’ and suppliers’ payments drop by an amount equal to the shaded area Q2abQ1. Starting at point c, a tariff or tax equal to ad is levied, shifting the supply curve from S2 to S1. In an industry whose costs are increasing, the increase in price from P1 to P2 in the importing country is less than the increase in the tariff (ad), because a price fall in the exporting country absorbs some of the burden of the duty.
124 Figure 15.3 Effects of tariff protection on individual industries: case 1 If neither the textiles nor the automobile industry obtains tariff protection, the economy will earn its highest possible collective income (cell I), but each industry has an incentive to obtain tariff protection for itself. If the textiles industry alone seeks protection (cell II), its income will rise while the auto industry’s income falls. If the auto industry alone seeks protection, its income will rise while that of the textiles industry falls. If both obtain protection, the economy will end up in cell IV, its worst possible position. Income in both sectors will fall.
125 Figure 15.4 Effects of tariff protection on individual industries: Case 2 In this case, the auto industry gains from tariff protection, even if both sectors are protected (cell IV). The textiles industry’s income falls from $20 (cell I) to $17 (cell IV), but the auto industry’s income rises from $30 (cell I) to $31 (cell IV). Thus the auto industry has no incentive to agree to the elimination of tariffs.
126 Figure 15.5 Supply and demand for euros on the international currency market The international exchange rate between the dollar and the euro is determined by the forces of supply and demand, with the equilibrium at E. If the exchange rate is below equilibrium, say at ER1, the quantity of euros demanded, shown by the demand curve, will exceed the quantity supplied, shown by the supply curve. Competitive pressure will push the exchange rate up. If the exchange rate is above equilibrium, say, ER3, the quantity supplied will exceed the quantity demanded and competitive pressure will push the exchange rate down. Thus the price of a foreign currency is determined in much the same way as the price of any other commodity.
127 Figure 15.6 Effect of an increase in demand for euros An increase in the demand for euros will shift the demand curve from D1 to D2, pushing the equilibrium from E1 to E2. At the initial equilibrium exchange rate ER1, a shortage will develop. Competition among buyers will push the exchange rate up to the new equilibrium level, ER2. |
Consider a square lattice of unit side. A simple polygon (with non-intersecting sides) of any shape is drawn with its vertices at the lattice points. The area of the polygon can be simply obtained as I + B/2 - 1 square units, where B is the number of lattice points on the boundary and I is the number of lattice points in the interior region of the polygon. Prove this theorem.
Refer Wikipedia 🙂 🙂 🙂
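(For readers who like to experiment before proving: below is a minimal Python sketch, not part of any official solution, that checks Pick's formula A = I + B/2 - 1 against the shoelace formula for a small lattice triangle. The polygon and the function names are purely illustrative.)

```python
from math import gcd

def shoelace_area(pts):
    # Shoelace formula for the area of a simple lattice polygon.
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2

def boundary_count(pts):
    # Lattice points on the boundary: gcd(|dx|, |dy|) per edge.
    n = len(pts)
    return sum(gcd(abs(pts[(i + 1) % n][0] - pts[i][0]),
                   abs(pts[(i + 1) % n][1] - pts[i][1])) for i in range(n))

def on_edge(p, a, b):
    # True if lattice point p lies on the segment ab.
    (x, y), (x1, y1), (x2, y2) = p, a, b
    collinear = (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) == 0
    return (collinear and min(x1, x2) <= x <= max(x1, x2)
            and min(y1, y2) <= y <= max(y1, y2))

def interior_count(pts):
    # Brute-force count of lattice points strictly inside the polygon,
    # using even-odd ray casting (boundary points are skipped first).
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    n, count = len(pts), 0
    for x in range(min(xs), max(xs) + 1):
        for y in range(min(ys), max(ys) + 1):
            if any(on_edge((x, y), pts[i], pts[(i + 1) % n]) for i in range(n)):
                continue
            inside = False
            for i in range(n):
                (x1, y1), (x2, y2) = pts[i], pts[(i + 1) % n]
                if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                    inside = not inside
            count += inside
    return count

poly = [(0, 0), (4, 0), (0, 4)]                  # a lattice right triangle
A, B, I = shoelace_area(poly), boundary_count(poly), interior_count(poly)
print(A, B, I, A == I + B / 2 - 1)               # 8.0 12 3 True
```

The boundary count uses the fact that a lattice segment from (x1, y1) to (x2, y2) contains gcd(|x2 - x1|, |y2 - y1|) lattice points if each edge contributes only one of its endpoints.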
Two spheres with the same centre have radii r and R, where r < R. From the surface of the bigger sphere we will try to select three points A, B and C such that all sides of the triangle ABC touch the surface of the smaller sphere. Prove that this selection is possible if and only if R <= 2r.
Assume A, B and C lie on the surface of a sphere of radius R and centre O, and AB, BC, and CA touch the surface of a sphere of radius r and centre O. The circumscribed and inscribed circles of ABC then are the intersections of the plane ABC with the larger and the smaller sphere, respectively. The centres of these circles both are the foot D of the perpendicular dropped from O to the plane ABC. This point lies both on the angle bisectors of the triangle ABC and on the perpendicular bisectors of its sides. So, these lines are the same, which means that the triangle ABC is equilateral, and the centre of the circles is the common point of intersection of the medians of ABC. This again implies that the radii of the two circles are 2t and t for some real number t > 0. Let OD = d. Then d = sqrt(R^2 - 4t^2) and d = sqrt(r^2 - t^2). Squaring, we get R^2 - 4t^2 = r^2 - t^2, so t^2 = (R^2 - r^2)/3 and d^2 = (4r^2 - R^2)/3 >= 0, which forces R <= 2r.
On the other hand, assume R <= 2r. Consider a plane at the distance sqrt((4r^2 - R^2)/3) from the common centre of the two spheres. The plane cuts the surfaces of the spheres along concentric circles of radii t = sqrt((R^2 - r^2)/3) and 2t.
The points A, B, and C can now be chosen on the latter circle in such a way that ABC is equilateral. Its sides are then tangent to the inner circle, and hence they touch the smaller sphere.
Reference: Nordic Mathematical Contest, 1987-2009.
Well, the solutions already exist ! (pun intended! 🙂 🙂 :-))
You may note that putting one of the sides of a quadrilateral to zero (thereby reducing it to a triangle), one recovers Heron’s formula. Consider the quadrilateral as a combination of two triangles by drawing one of the diagonals. The length of the diagonal can be expressed in terms of the lengths of the sides and (the cosines of) two diagonally opposite angles. Then, use Heron’s formula for each of the triangles. Through algebraic manipulation, one can get the required result. If necessary, the reader is advised to consult Wikipedia again!
🙂 🙂 🙂
Heron’s formula for the area of a triangle is well-known. A similar formula for the area of a quadrilateral in terms of the lengths of its sides is given below:
Note that the lengths of the four sides do not specify the quadrilateral uniquely. The area is given by
K = sqrt[(s - a)(s - b)(s - c)(s - d) - a b c d cos^2(θ/2)],
where a, b, c, and d are the lengths of the four sides, s is the semi-perimeter and θ is the sum of a pair of diagonally opposite angles of the quadrilateral. This is known as the Bretschneider (Coolidge) formula. For a cyclic quadrilateral, θ is 180 degrees and the area is maximum for the set of given sides; the area is then given by (Brahmagupta’s formula):
K = sqrt[(s - a)(s - b)(s - c)(s - d)].
Prove both the formulae given above!
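(Before attempting the proofs, a quick numerical check can build confidence. Here is a minimal Python sketch, not part of the intended proof, that evaluates both formulae; the test quadrilaterals are invented for illustration.)

```python
from math import sqrt, cos, radians

def bretschneider_area(a, b, c, d, theta_deg):
    # theta_deg is the sum of one pair of opposite angles, in degrees.
    s = (a + b + c + d) / 2
    return sqrt((s - a) * (s - b) * (s - c) * (s - d)
                - a * b * c * d * cos(radians(theta_deg) / 2) ** 2)

def brahmagupta_area(a, b, c, d):
    # Cyclic quadrilateral: the opposite angles sum to 180 degrees.
    return bretschneider_area(a, b, c, d, 180)

print(brahmagupta_area(1, 1, 1, 1))         # 1.0 (a unit square is cyclic)
print(bretschneider_area(1, 1, 1, 1, 120))  # about 0.866, a "squashed" rhombus
```

As expected, for the same four sides the cyclic arrangement (θ = 180 degrees) gives the largest area.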
PS: I will put the solutions on this blog after some day(s). First, you need to try.
- The areas of two triangles having equal bases (heights) are in the ratio of their heights (bases).
- If ABC and DEF are two triangles, then the following statements are equivalent: (a) angle A = angle D, angle B = angle E, angle C = angle F; (b) AB/DE = BC/EF = CA/FD; (c) angle A = angle D and AB/DE = AC/DF. Two triangles satisfying any one of these conditions are said to be similar to each other.
- Apollonius’ Theorem: If D is the mid-point of the side BC in a triangle ABC then AB^2 + AC^2 = 2(AD^2 + BD^2).
- Ceva’s Theorem: If ABC is a triangle, P is a point in its plane and AP, BP, CP meet the sides BC, CA, AB in D, E, F respectively, then (BD/DC)(CE/EA)(AF/FB) = 1. Conversely, if D, E, F are points on the (possibly extended) sides BC, CA, AB respectively and the above relation holds good, then AD, BE, CF concur at a point. Lines such as AD, BE, CF are called Cevians.
- Menelaus’s Theorem: If ABC is a triangle and a line meets the sides BC, CA, AB in D, E, F respectively, then (BD/DC)(CE/EA)(AF/FB) = -1, taking directions of the line segments into consideration, that is, for example, BD = -DB. Conversely, if on the sides BC, CA, AB (possibly extended) of a triangle ABC, points D, E, F are taken respectively such that the above relation holds good, then D, E, F are collinear.
- If two chords AB, CD of a circle intersect at a point O (which may lie inside or outside the circle), then OA * OB = OC * OD. Conversely, if AB and CD are two line segments intersecting at O, such that OA * OB = OC * OD, then the four points A, B, C, D are concyclic.
- (This may be considered as a limiting case of 6, in which A and B coincide and the chord AB becomes the tangent at A). If OA is a tangent to a circle at A from a point O outside the circle and OCD is any secant of the circle (that is, a straight line passing through O and intersecting the circle at C and D), then OA^2 = OC * OD. Conversely, if OA and OCD are two distinct line segments such that OA^2 = OC * OD, then OA is a tangent at A to the circumcircle of triangle ACD.
- Ptolemy’s Theorem: If ABCD is a cyclic quadrilateral, then AC * BD = AB * CD + AD * BC. Conversely, if in a quadrilateral ABCD this relation is true, then the quadrilateral is cyclic.
- If AB is a line segment in a plane, then the set of points P in the plane such that PA/PB is a fixed ratio λ (not equal to 0 or 1) constitutes a circle, called the Apollonius circle. If C and D are two points on AB dividing the line segment AB in the ratio λ : 1 internally and externally, then C and D themselves are two points on the circle such that CD is a diameter. Further, for any point P on the circle, PC and PD are the internal and external bisectors of angle APB.
- Two plane figures F1 and F2, such as triangles, circles, arcs of a circle, are said to be homothetic relative to a point O (in the plane) if for every point A on F1, OA meets F2 in a point B such that OA/OB is a fixed ratio λ (not equal to zero). The point O is called the centre of similitude or homothety. Also, any two corresponding points X and Y of the figures F1 and F2 (for example, the circumcentres of two homothetic triangles) are such that O, X, Y are collinear and OX/OY = λ.
- Compound and Multiple Angles:
- ; ; .
- ; ;
- ; ; ;
- Conversion Formulae:
- Properties of Triangles:
- Sine Rule: a/sin A = b/sin B = c/sin C = 2R.
- Cosine Rule: a^2 = b^2 + c^2 - 2bc cos A, and similarly for the other sides.
- Half-Angle Rule: sin(A/2) = sqrt[(s - b)(s - c)/(bc)]; cos(A/2) = sqrt[s(s - a)/(bc)]; tan(A/2) = sqrt[(s - b)(s - c)/(s(s - a))].
- Medians: AD^2 = (2b^2 + 2c^2 - a^2)/4, and similar expressions for other medians.
- , ,
- If O is the circumcentre and X is the mid-point of BC, then and .
- If AD is the altitude with D on BC and H the orthocentre, then , .
- If G is the centroid and N the nine-point centre, then O, G, N, H are collinear and OG : GH = 1 : 2, ON = NH.
- If I is the in-centre, then OI^2 = R^2 - 2Rr.
- The centroid divides the medians in the ratio 2 : 1.
- ; ;
- If , then ; ; ; ;
- Area of a quadrilateral ABCD with ; ; ; , is given by If it is cyclic, then . Its diagonals are given by ; .
Some important comments: To the student readers: from Ref: Problem Primer for the Olympiad by C. R. Pranesachar, B. J. Venkatachala, C. S. Yogananda: The best way to use this book is, of course, to look up the problems and solve them!! If you cannot get started then look up the section “Tool Kit”, which is a collection of theorems and results which are generally not available in school text books, but which are extremely useful in solving problems. As in any other trade, you will have to familiarize yourself with the tools and understand them to be able to use them effectively. We strongly recommend that you try to devise your own proofs for these results of Tool Kit or refer to other classic references. …
My remark: This is the way to learn math from scratch or math from first principles.
The book is available in Amazon India or Flipkart:
More geometry later!
This is one of the prime utilities of trigonometry: calculating heights and distances, known in civil engineering as surveying.
1. From the extremities of a horizontal base line AB, whose length is 1 km, the bearings of the foot C of a tower are observed and it is found that angle CAB is 56 degrees and 23 minutes and angle CBA is 47 degrees and 15 minutes, and the elevation of the tower from A is 9 degrees and 25 minutes; find the height of the tower.
2. A man in a balloon observes that the angle of depression of an object on the ground bearing due north is 33 degrees; the balloon drifts 3 km due west and the angle of depression is now found to be 21 degrees. Find the height of the balloon.
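(A quick numerical check of Problem 2, under the usual interpretation: the object lies due north of the balloon's first position, so the two horizontal distances h·cot 33° and h·cot 21° are a leg and the hypotenuse of a right triangle whose other leg is the 3 km westward drift. This Python sketch is just that interpretation, not a substitute for the worked solution.)

```python
from math import tan, radians, sqrt

def cot(deg):
    return 1 / tan(radians(deg))

drift = 3.0  # km drifted due west
# h*cot(33) is the first horizontal distance (due north);
# h*cot(21) is the second one, the hypotenuse of the right triangle
# whose other leg is the 3 km drift, so h follows from Pythagoras.
h = drift / sqrt(cot(21) ** 2 - cot(33) ** 2)
print(round(h, 2), "km")   # roughly 1.43 km
```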
3. A tower PN stands on level ground. A base AB is measured at right angles to AN, the points A, B, and N being in the same horizontal plane, and the angles PAN and PBN are found to be α and β respectively. Prove that the height of the tower is AB / sqrt(cot^2 β - cot^2 α).
If AB is 100m, degrees and degrees, calculate the height.
4. At each end of a horizontal base of length 2a, it is found that the angular height of a certain peak is θ and that at the middle point it is φ. Prove that the vertical height of the peak is a / sqrt(cot^2 θ - cot^2 φ).
5. To find the distance from A to P, a distance AB of 1 km is measured in a convenient direction. At A the angle PAB is found to be 41 degrees 18 minutes and at B the angle PBA is found to be 114 degrees and 38 minutes. What is the required distance to the nearest metre?
(CNN) A supermassive black hole, considered to be one of the largest ever discovered, has been found by astronomers using a new technique.
The findings, published by the Royal Astronomical Society, show that the black hole is more than 30 billion times more massive than the Sun — something rarely seen by astronomers.
The researchers described this as a “very exciting” discovery that opens up “enormous” possibilities for the further detection of black holes.
A team led by Durham University in the United Kingdom used a technique called gravitational lensing — whereby a nearby galaxy is used as a giant magnifying glass, bending light from a distant object. It was able to closely examine how light is bent by a black hole in a galaxy hundreds of millions of light-years from Earth.
Supercomputer simulations and images captured by the Hubble Space Telescope were used to confirm the size of the black hole.
According to a news release from the Royal Astronomical Society, this is the first black hole discovered using gravitational lensing, with the team simulating light traveling through the universe hundreds of thousands of times.
“At about 30 billion times the mass of our Sun, this black hole is one of the largest ever discovered, and it is at the upper limit of how massive we believe black holes can theoretically become, so this is a very exciting discovery,” said lead study author James Nightingale, an observational cosmologist at Durham University’s Department of Physics.
“Most of the supermassive black holes we know about are active, where matter pulled in close to the black hole heats up and emits energy in the form of light, X-rays and other radiation,” Nightingale added.
“However, gravitational lensing makes it possible to study passive black holes, which is currently not possible in distant galaxies. This approach may find many more black holes beyond our local universe and reveal how these exotic objects have evolved further through cosmic time.”
The researchers believe the discovery is significant because it “allows astronomers to detect more passive and ultramassive black holes than previously thought” and opens up the exciting possibility of “investigating how they grew so large.”
The story of this particular discovery began in 2004, when fellow Durham University astronomer Alastair Edge, a research fellow, noticed a large gravitational lens while reviewing images from a galaxy survey, according to the news release.
The team has now revisited the discovery and investigated it further with the help of NASA’s Hubble Space Telescope and the DiRAC COSMA8 supercomputer.
Ultramassive black holes are among the largest objects in the universe and a rare find for astronomers.
Their origin is unclear; some believe they formed from the merger of galaxies a few billion years ago.
Every time a galaxy merges with another, stars are lost and the black hole gains mass – which means that some black holes have an incredibly high mass. |
A dot product, also called the scalar product, is a special kind of product that takes the coordinates of two vectors and returns a number (a scalar). The alternate name, ‘scalar product’, emphasises the fact that the result is a scalar.
Alternate formula for two vectors a and b making an angle θ: a · b = |a| |b| cos θ.
Physically, what one expects to get from a dot product is the product of a vector with that component of another vector which is along the first vector’s direction.
Some of the examples include, Work done, Electrical/Magnetic Flux, etc.
In the calculation of work, what one needs is the product of the Force and the Displacement in the direction of the force. So what we do is, we take the dot product of the Force vector, and the Displacement vector, which gives us the product of Force with the component of Displacement along the direction of the Force.
Mathematically, the formula for the dot product is simply the product of the magnitude of one vector with the projection of another vector on the first vector.
Projection (component) of a vector a along the direction of a vector b is: |a| cos θ = (a · b) / |b|,
where θ is the angle between the two vectors.
NOTE: I have written the above answer only for the 3-dimensional (geometrical) dot product.
The definition of the dot product can be extended for an n-dimensional space.
Another name for it is the inner product.
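As a rough illustration of these ideas (the vectors below are invented, and the helper names are mine), an n-dimensional dot product and the angle it implies can be computed like this in Python:

```python
from math import acos, degrees, sqrt

def dot(u, v):
    # n-dimensional dot (inner) product
    return sum(ui * vi for ui, vi in zip(u, v))

def norm(u):
    return sqrt(dot(u, u))

def angle_between(u, v):
    # angle recovered from a.b = |a| |b| cos(theta)
    return degrees(acos(dot(u, v) / (norm(u) * norm(v))))

F = (3.0, 4.0, 0.0)   # a "force" vector (illustrative numbers)
d = (2.0, 0.0, 0.0)   # a "displacement" vector
print(dot(F, d))             # 6.0, the work done
print(angle_between(F, d))   # about 53.13 degrees
```

Here the dot product of the “force” and “displacement” vectors plays the role of the work done, as described above.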
Hope you find it useful!!! |
A free market is a type of economic system where the means of production are privately owned and the prices and distribution of goods are determined by supply and demand. Capitalism and free markets are intertwined, but capitalism refers more to the relationship between the employer and worker and the accumulation of individual wealth. Some of the main characteristics of capitalism include a price system, sometimes laissez-faire, competitive markets, and capital accumulation. The price mechanism determines what goods and services are produced, the production methods, and how products are allocated. Laissez-faire is an important concept in capitalism because it is an environment where transactions are made without the regulation or intervention of the government.
The main ideas of capitalism are that capital is privately owned and labour is purchased for wages. Capitalism is the opposite of socialism because socialism entails social ownership of the means of production, and that goods and services are produced for their utility, not for capital accumulation.
This type of market system is also opposite to barter systems because money is the only measure of value and the only means of exchange. Decisions, in a market economy, are based on supply and demand and prices are determined in a free price system. However, a market economy is not the same as capitalism because although capitalism may function in markets, not all market systems are capitalist. What sets market economies apart from other economies is that investment decisions and the allocation of producer goods are determined within the market¹.
The United States is an example of an economy that is highly influenced by capitalism because of the competitive markets that exist and accumulation of capital for individuals. The public school system and other non-profit organizations that function within the United States show the non-capitalist processes that exist within the States¹.
The issue of ethics is tied to the topic of capitalism. Arguments against capitalism focus on labour exploitation, economic inequality, monopolies, and the division between rich and poor. What many see as an advantage of capitalism is that limited government intervention gives individuals the ability to decide how they want to spend their own money. Capitalism can also increase the growth of an economy as well as production.
1. Paul M. Johnson (2005). "A Glossary of Political Economy Terms, Market Economy". |
Examples of applications of functions where quantities such as area, perimeter, and chord length are expressed as functions of a variable.
A right triangle has one side x and a hypotenuse of 10 meters. Find the area of the triangle as a function of x.
Solution to Problem 1
If the sides of a right triangle are x and y, the area A of the triangle is given by
A = ( 1 / 2) x * y
We now need to express y in terms of x using the hypotenuse, side x and Pythagoras' theorem
10^2 = x^2 + y^2
y = sqrt[100 - x^2]
Substitute y by its expression in the area formula to obtain
A(x) = (1 / 2) x sqrt[100 - x^2]
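A quick sketch in Python (illustrative only) of this area function:

```python
from math import sqrt

def triangle_area(x):
    # Area of a right triangle with one leg x and hypotenuse 10 (0 < x < 10)
    return 0.5 * x * sqrt(100 - x ** 2)

print(triangle_area(6))   # 24.0, the familiar 6-8-10 right triangle
```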
A rectangle has an area equal to 100 cm^2 and a width x. Find the perimeter as a function of x.
Solution to Problem 2
If x and y are the dimensions of the rectangle, using the formula of the area we obtain
100 = x * y
The perimeter P is given by
P = 2(x + y)
Solve the equation 100 = x * y for y and substitute y in the formula for the perimeter
P(x) = 2(x + 100 / x)
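Again, a small illustrative Python version of this perimeter function:

```python
def rectangle_perimeter(x):
    # Perimeter of a rectangle of width x and area 100 cm^2
    return 2 * (x + 100 / x)

print(rectangle_perimeter(4))    # 58.0, the 4 cm by 25 cm rectangle
print(rectangle_perimeter(10))   # 40.0, the 10 cm square (the minimum)
```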
Find the area of a square as a function of its perimeter x.
Solution to Problem 3
The area of a square of side L is given by
A = L^2
The perimeter x of a square with side L is given by
x = 4 L
Solve the above for L and substitute in the area formula A above
A(x) = (x / 4)^2 = x^2 / 16
A right circular cylinder has a radius r and a height equal to twice r. Find the volume of the cylinder as a function of r.
Solution to Problem 4
The volume V of a right circular cylinder is given by
V = (area of base of cylinder) * (height of cylinder)
= Pi * r^2 * (2 r)
= 2 Pi r^3
Express the length L of the chord of a circle, with given radius r = 10 cm, as a function of the arc length s (see figure below).
Solution to Problem 5
Using half the angle a, we can write
sin(a / 2) = (L / 2) / r
Substitute r by 10 and solve for L
L = 20 sin(a / 2)
The relationship between arc length s and central angle a is
s = r a = 10 a
Solve for a
a = s / 10
Substitute a by s / 10 in L = 20 sin(a / 2) to obtain
L = 20 sin ( (s / 10) / 2 )
= 20 sin ( s / 20)
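A short Python check of this chord function (the sample values are illustrative):

```python
from math import sin, pi

def chord_length(s, r=10):
    # Chord subtending an arc of length s on a circle of radius r,
    # using L = 2 r sin(s / (2 r)) with central angle a = s / r.
    return 2 * r * sin(s / (2 * r))

print(round(chord_length(10), 3))       # about 9.589 cm for a 10 cm arc
print(round(chord_length(pi * 10), 3))  # 20.0 cm: a semicircular arc gives the diameter
```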
Express the distance d = d1+ d2, in the figure below, as a function of x.
Solution to Problem 6
d1 is the length of the hypotenuse of a right triangle of sides x and 3, hence
d1 = sqrt[ 3^2 + x^2 ]
d2 is the length of the hypotenuse of a right triangle of sides 7 - x and 5, hence
d2 = sqrt[ 5^2 + (7 - x)^2 ]
d = d1 + d2 is given by
d = sqrt[ 9 + x^2 ] + sqrt[ 25 + (7 - x)^2 ]
1. Express the area A of a disk in terms of its circumference C.
2. The width of a rectangle is w. Express the area A of this rectangle in terms of its perimeter P and width w.
Solutions to above exercises
1. A = C^2 / (4 Pi)
2. A = (1/2) w (P - 2w)
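As a rough numerical check of these two answers (the test values below are invented):

```python
from math import pi

def disk_area(C):
    # Exercise 1: A = C^2 / (4 Pi)
    return C ** 2 / (4 * pi)

def rectangle_area(P, w):
    # Exercise 2: A = (1/2) w (P - 2w)
    return 0.5 * w * (P - 2 * w)

print(round(disk_area(2 * pi * 3), 3))   # 28.274, i.e. Pi * 3^2 for radius 3
print(rectangle_area(14, 3))             # 12.0, the 3 by 4 rectangle
```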
More tutorials on functions.
Questions on Functions (with Solutions). Several questions on functions are presented and their detailed solutions discussed.
Applications, Graphs, Domain and Range of Functions |
KS4 Magnetism and electromagnetism: Induced potential, transformers & the National Grid
In this section we learn how Michael Faraday reversed the motor effect to discover electromagnetic induction.
Soon after Michael Faraday had discovered the Motor Effect and developed the first electric motor, he began to think....
"if a magnetic field plus a current can produce a force/movement on a conductor,
could a magnetic field plus a force/movement on a conductor produce a current ?"
Another way to look at this is to view the 3 quantities as two "equations":
This led him to discover what we now often call the Generator Effect.
You could perform a simple experiment to see if the statement is true; all you need is:
a Magnet to make the magnetic field,
a Coil of wire as the conductor (you will move it)
and an Ammeter to measure any induced current.
The Ammeter would show a current reading!
Isn't this amazing? Just by moving a conductor (coil) towards a magnet, we can generate an electrical current, hence we call this the Generator Effect and it is almost literally the opposite of the Motor Effect.
If we stop moving the conductor, the current stops flowing.
If we move the conductor (coil) in the reverse direction, the current reverses its direction. (So if you move the conductor (coil) left and right repeatedly you will generate an alternating current.)
The faster we move the conductor (coil) the larger is the induced current.
Two more interesting observations:
1. You don't have to move the conductor (coil); instead you can move the magnet towards or away from the conductor (coil) and you will still induce a current into the circuit.
2. Moving the magnet is effectively "changing the magnetic field" near to the conductor, so another way of generating the electricity is to somehow "change the magnetic field" near to the conductor; this can be done by having an electromagnet instead of the permanent bar magnet shown in the diagram; then you would vary the current in the electromagnet to "change" its magnetic field. (You might wonder why you would want to do it this way because it is not "generating" a current by using movement, but the technique is relevant to devices called Transformers which you will learn about later, below.)
Finally, if you replace the Ammeter with a Voltmeter then instead of reading an induced current when you move the conductor relative to the magnet (or change the magnetic field), you will read an Induced Potential Difference or Induced Potential, which is what we call the voltage produced across the ends of the coil.
The Surprising Effect of the Induced Current
If we have the set up as in the first diagram such that a current is induced in the coil, then we encounter a surprising but easily explained effect.
As we know, if a current flows in a coil of wire (often called a solenoid, see previous section 4.7.3) then a magnetic field is produced around the coil, resembling that of a bar magnet.
So when the induced current appears in the coil, due to moving the coil relative to a magnet, immediately a magnetic field is produced around the coil as shown in the diagram below.
Furthermore, it is found that the direction of this magnetic field is such that it opposes the change in the magnetic field that is producing the induced current!
In the above diagram the coil is moved towards the North pole of the magnet, so the current is induced in such a direction that a North pole will appear at the end nearest to the North pole of the magnet to oppose, by repelling, the approaching North pole.
If the coil is moved away from the magnet then the induced current will reverse so that a South pole appears at the end nearest to the North pole of the magnet to oppose, by attracting, the receding North pole.
It wasn't Michael Faraday who became well known for this part of the theory of electromagnetism but another Physicist called Emil Lenz who summed up this finding in what became known as "Lenz's Law" and it states : the current induced in the conductor (eg coil) generates a magnetic field that opposes the original change (the movement of the conductor relative to a magnet or of another magnetic field).
The factors affecting the size of the induced current or potential
We have already mentioned one of these which is:
1. The speed of the relative movement of the coil/magnet or of the change in another magnetic field. The greater the speed, the greater the size of the induced potential or the induced current (if there is a complete circuit such as that formed by using an ammeter).
The other 2 are:
2. The number of turns on the coil. The greater the number of turns, the greater the induced potential or current.
3. The strength of the magnet. The stronger the magnet, the greater the induced potential or current.
So, to make a really powerful generator you need:
A coil with as many turns of wire as possible
A strong magnet
And you need to move them relative to each other very quickly.
It will be obvious that the "use" of the generator effect is in generating electricity.
But the "uses", plural, is because we can generate AC (alternating current) electricity or DC (direct current) electricity. The practical differences between these two types of generator is quite small; the fundamental features are the same.
But first two points to clear up:
1. Linear movement OR rotation
The coil and magnet, used above to explain the basics of the generator effect, is a very simple AC generator; as the magnet goes towards/into the coil, the current induced in the coil will flow one way; as it is moved out of the coil the current induced in the coil will flow in the reverse direction.
However, moving the magnet (or the coil) linearly is not an easy movement to maintain; rotation is a much easier movement to maintain, so we want to use a rotation movement.
2. Induction occurs only when field lines are "cut"
If we look again at our basic generator and add the characteristic bar magnet field lines to the diagram:
It turns out that we only get induction if the movement of the coil (or the magnet) causes the wire conductor to pass through or "cut" through the field lines. And that is what you can see happening above when the coil is moved towards or away from the magnet.
If, however, we had a really tiny coil (or a huge magnet) and positioned the coil as shown below, then you can see that a small movement of the coil left or right would not "cut" the field lines, so no induction would occur.
Idea - what if we rotated the above coil? Wouldn't it then "cut" the magnet's field lines?
Yes it would, so.....
we have just designed our first rotating AC generator!! Yay. How simple is that.
Generators like this do exist in the real world; the only difference is they tend to keep the coil still and let the magnet rotate. If we rotated the coil what do you think would happen?
The wires would twist and twist and twist...so we avoid this by simply letting the magnet do the rotating.
Simple generators like this used to be used on bicycles to provide electrical power for the bike's lights. The rear wheel of the bike would rotate the magnet, and the coil would be connected to bulbs at the front and back of the bike; lighting without the need for batteries. The disadvantage with such bike lighting systems was that the lights would go out whenever the bike stopped!
Anyway, the point of this section was to learn - induction only occurs when the relative movement of a magnet and a conductor (coil) is such that the magnet's field lines are "cut".
Generators - the fundamentals
Now that we know that we want a rotating coil/magnet we are going to go straight to the most common design for any Generator. This is as shown below.
A coil (here shown as just one turn of wire; in reality there would be many turns of wire) is arranged so that it can be rotated in between two fixed magnets, one a North Pole and one a South Pole. The magnetic field lines go from N to S (left to right in this diagram).
Ok, so in the position shown the "orange" side of the coil is presently cutting upwards through the field lines whilst the blue side is presently cutting downwards; make sure you agree.
Since both sides "cut" the field lines, both sides will have a current induced into them. But which way will the current flow?
To work out the direction of the induced current we use Fleming's RIGHT hand rule which is like his Left hand rule but uses the Right hand, and the thumb points in the direction of the Wire Motion rather than the Force Felt; the "Answer" is now the direction of the induced current and is given by the Second finger.
So his Right hand rule looks like:
If we apply this rule to the orange side of the coil (Right hand: Thumb Up, First finger points left to right) we find that "The Answer", the Second Finger, points inwards. So we can add a current arrow to the orange side as shown:
If we do this for the blue side (Right hand: Thumb Down, First finger points left to right) we find that "The Answer", the Second Finger, points outwards. So we can add a current arrow to the blue side as shown:
We have now determined that the induced current in this coil will always flow in the clockwise direction shown whenever it is rotated in the direction shown; if the direction of rotation changed then so would the direction of the induced current. However, there are 3 things to note:
1. Although the induced current will flow in one direction around the coil, if you look and think carefully you will notice that when the coil rotates such that the orange side is on the right side and the blue side is on the left, which will happen every half turn, the current through each side will reverse! Look at the following diagram.
The current induced in the blue side was flowing outwards, now its flowing inwards;
The current induced in the orange side was flowing inwards, now its flowing outwards.
So the induced current in this type of Generator is fundamentally an Alternating Current.
2. Current is induced, remember, whenever the coil "cuts" through the field lines which occurs for most of the rotation, but when the coil gets close to and passes over its "vertical" position, the coil will move along the field lines; it won't "cut" them. Look at the following diagram.
So as the coil approaches and passes through the vertical position the induced current falls to zero.
It does this every half turn.
The following simplified diagram summarises the above two important points and shows how the output of the fundamental generator varies as its coil is rotated clockwise. Basically, you can see that the output is Alternating; it is a Maximum when the coil is horizontal and it is Zero when the coil is vertical:
3. The third point: all of the diagrams of what I have called a "fundamental generator" are missing a very important detail - they do not show any means of connection to the coil. Simply connecting wires to either end of the coil would not be any good because they would twist and twist when the coil rotates.
So how do we make connection to the coil whilst avoiding this twisting?
There are two ways, and it is the means of connection to the coil that determines whether the "fundamental generator" becomes a viable Alternator (A.C generator) or a Dynamo (D.C generator).
The Alternator (A.C Generator)
To turn the "fundamental generator" into an Alternator or A.C Generator we make connection to the two ends of the coil using what we call slip rings.
Slip rings are simply metal conducting rings; one is attached to one end of the coil and one is attached to the other end of the coil. In the diagram below they are shown coloured the same as the coil sides. When the coil rotates, they rotate.
Finally, we make connection to the rings using carbon "brushes" (carbon - because it's a conductor and quite "slippy"). The brushes are lightly held against each ring so they make an electrical connection to the ring whilst it is rotating.
So now we have a way of making an electrical connection to a constantly rotating coil; the carbon brushes slip lightly over the rotating rings. We can now connect any circuit we choose to these brushes eg a simple bulb:
So long as someone or something rotates the generator coil the bulb will be lit; the bulb doesn't care whether the current alternates (as it will do here) or whether it is steady or Direct. But some things do care and need a steady or Direct Current, so let's look at a clever alternative connection method that will turn our "fundamental generator" into a D.C Generator otherwise known as a Dynamo.
NB The output of the A.C Generator is the same as that of our "fundamental generator":
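To make the shape of that output concrete, here is a small Python sketch, not taken from this lesson, of an idealised rotating coil. For a coil of N turns and area A turning steadily in a uniform field of strength B, the induced e.m.f. can be modelled as EMF = N x B x A x ω x sin(ωt): maximum when the coil is horizontal (cutting field lines fastest) and zero when it is vertical. All the numbers are made up for illustration.

```python
from math import sin, pi

def generator_emf(t, N=100, B=0.2, A=0.01, freq=50):
    # Idealised e.m.f. of a coil of N turns and area A (m^2) rotating
    # at `freq` revolutions per second in a field of strength B (tesla).
    omega = 2 * pi * freq                  # angular speed in rad/s
    return N * B * A * omega * sin(omega * t)

# Sample the output at quarter-turn intervals over one full revolution:
for step in range(5):
    print(round(generator_emf(step * 0.005), 2))
# prints 0, the positive peak, ~0, the negative peak, ~0
```

Notice how increasing N, B or the rotation speed in this sketch raises the peak of the output, matching the three factors listed earlier.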
The Dynamo (D.C Generator)
To turn the "fundamental generator" into an Dynamo or D.C Generator we make connection to the two ends of the coil using a device that we have used before !
When we investigated the electric motor (which used D.C, remember) we made connection to its rotating coil using a Split Ring Commutator. Well, we can use this same connection method with the Dynamo.
The diagram below shows the fundamental generator with its coil connected to the two halves of the Split Ring Commutator.
Notice that the orange side of the coil is permanently connected to the orange half ring whilst the blue side of the coil is permanently connected to the blue half ring.
To connect to each ring we use a Carbon Brush (as for the Alternator); this will allow the coil and rings to rotate whilst making an electrical connection.
Then we can add any circuit we choose - let's add an L.E.D which will only light if the current goes through it in one direction (this will test whether we have truly made a Dynamo and not just another type of Alternator).
Ok, we know from our previous discussion that with the coil in the position shown and moving clockwise in the direction shown, a current will be induced so it goes "inwards" along the orange side of the coil and "outwards" along the blue side of the coil ie clockwise. This current will flow via the split ring commutator and the brushes through the LED circuit, also clockwise which is the correct direction to light the LED. See the current arrows added below:
Now let's see the coil rotated 180 degrees so its orange and blue sides change place. We know from our work on the "fundamental generator" that the current will now flow "outwards" from the orange side and "inwards" through the blue side:
So you can see, because the Split Ring Commutator turns with the coil, the current continues to flow clockwise through the external LED circuit, so it keeps lighting!
We have made a viable D.C Generator; the output current always flows in one direction unlike in the A.C Generator where the output current reversed direction every time the coil turned 180 degrees.
What happens when the coil is in the vertical position?
As noted with the "fundamental generator" (and with the A.C Generator), when the coil is in this position it does NOT cut the field lines so no current is induced. This will also coincide with the brushes meeting the gap between the rings of the Split Ring Commutator, which is another reason why there would be zero current at this coil orientation. See the diagram below:
The output of our D.C Generator or Dynamo as the coil rotates is as shown below:
Notice, it's not a perfect direct current in that the current is not constant, but it doesn't "alternate" between one direction and the other, so it is still a Direct Current. In real D.C Generators a smoothing capacitor is placed across the output, which has the effect of smoothing the output so it looks more like:
Increasing the Generator output
We have already discussed the factors to increase the size of the induced current, but to repeat briefly they are:
Increase the number of turns of wire on the coil.
Increase the strength of the magnetic field.
Increase the speed of rotation of the coil within the magnetic field.
Now it's time for you to have a go at a few questions.
Just as the loudspeaker followed from the "Motor Effect", the moving coil microphone follows from the "Generator Effect".
A loudspeaker used the Motor Effect to enable the conversion of electrical signals into sound waves.
A microphone uses the Generator Effect to do the reverse, to enable the conversion of sound waves into electrical signals.
A loudspeaker is likely to be the final output of a sound system, whilst a microphone is likely to be the initial input of a sound system (people speak or sing into it).
Here is how we make a microphone; you should be able to recognise all its parts:
We start with a magnet:
We add a coil that is free to move left and right over the pole piece of the magnet:
Now, if we can make this coil move left/right (in/out) whilst we speak, then according to the Generator Effect an induced potential will appear at the ends of the coil which will then become an induced current if we attach a circuit to the ends of the coil. So let's attach a flexible plastic or paper diaphragm to the coil, something like this:
And that's all there is to it!
We now have a moving coil microphone; the name makes sense doesn't it?
Let's recap this:
Pressure variations due to the sound wave cause the diaphragm to vibrate,
causing the coil to vibrate,
causing an alternating induced potential to appear across the ends of the coil due to the Generator Effect.
An alternating induced current will appear in any attached circuit mirroring the vibration of the diaphragm and of the initial sound wave.
The microphone has successfully turned the sound wave into an electrical signal.
Finally, I hope you have noticed that the moving coil microphone is remarkably similar to the loudspeaker.
They consist of the same 3 main parts, a magnet, a moving coil and some flexible cone/diaphragm. The only difference is that we speak into the microphone to make the coil move to generate an electrical signal whereas we feed an electrical signal into the loudspeaker coil to make it move, producing a sound wave.
The two devices are so similar that you can replace one for the other!
For example, most intercom systems only have one speaker/microphone. When you talk into it you use "it" as a microphone, and when you listen to a voice from it you use "it" as a loudspeaker; it's doing both jobs!
Similarly some phones will use an identical device for its "speaker" as for its "microphone".
Transformers are incredibly useful devices for changing (or transforming) one alternating potential difference to a larger or smaller alternating potential difference eg 10V to 20V (a "step-up transformer") or 10V to 5V (a "step-down transformer").
They are also incredibly simple devices because they consist of just:
One coil of insulated wire, (we call it the primary coil),
another coil of insulated wire, (we call it the secondary coil), and
an iron core on which to wrap the two coils.
This is what they look like:
What transformers do
A transformer will either Step-Up an alternating voltage from one size to a larger size.
Or, it will Step-Down an alternating voltage from one size to a smaller size.
We are very lucky that there is a simple relationship between the sizes of the coil p.d's (Vp and Vs) and the sizes of the numbers of turns on the coils (np and ns).
In words this is:
The ratio of the potential differences across the primary and secondary coils of a transformer (Vp and Vs) depends on the ratio of the number of turns on each coil (np and ns)
As an equation this looks like: Vp / Vs = np / ns
Although you should learn the equation as it is given above, you will most often use it when it is rearranged to find one of the voltages or one of the numbers of turns.
Let's have a look at a few examples.
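As a rough illustration of how the rearranged equation is used (the voltages and turn counts below are invented, not taken from this page), here is a short Python sketch:

```python
def secondary_voltage(Vp, n_primary, n_secondary):
    # Ideal transformer: Vp / Vs = np / ns, rearranged for Vs
    return Vp * n_secondary / n_primary

# Step-down example: 230 V mains, 1000 primary turns, 50 secondary turns
print(secondary_voltage(230, 1000, 50))        # 11.5 (volts)

# Step-up example: 25 kV stepped up for long-distance transmission
print(secondary_voltage(25000, 1000, 16000))   # 400000.0 (volts)
```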
Transformers and Power
Back in the "Power" part of the "Electricity" topic, we learnt that the Power of a component was easily calculated; all we needed was to know the p.d across the component and the current flowing through it, then to use:
power = p.d x current
or, P = V x I
Well, this is equally true for our transformer, assuming its coils are connected to complete circuits.
The power input (primary coil) is given by Vp x Ip
whilst the power output (secondary coil) is given by Vs x Is
In an ideal, 100% efficient, transformer the output power will equal the input power. As an equation we can write:
Vp x Ip = Vs x Is
(NB A "real" transformer is always less than 100% efficient, but you will not have to consider such cases.)
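As a rough illustration (a sketch added here, not part of the original notes; the numbers are arbitrary), the ideal-transformer power balance Vp x Ip = Vs x Is lets us find the secondary current once the other three values are known:

```python
def secondary_current(v_primary, i_primary, v_secondary):
    # 100% efficient transformer: Vp * Ip = Vs * Is, so Is = Vp * Ip / Vs.
    return v_primary * i_primary / v_secondary

# Stepping 230 V at 2 A up to 10,000 V leaves only 0.046 A in the secondary coil.
print(secondary_current(230, 2, 10_000))  # 0.046
```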
How do transformers work?
If you look again at the diagram of the transformer above you will notice that the two coils are separated from each other and they are made from insulated wire. So, how does a p.d appear across the secondary coil when a p.d is applied to the primary coil and yet there is no electrical connection between the coils? How do they work?
Transformers work and are able to do their amazing job as a result of 2 principles that we have been studying in the previous section and in this section.
These are: Electromagnetism and Induction.
If we apply an alternating p.d to the Primary Coil (the input of the Transformer), an alternating current will flow in the coil and therefore a magnetic field will appear around the coil and it will "alternate", meaning it will vary in strength from zero to a maximum then reverse direction and decrease in strength, etc. This is simple Electromagnetism.
The following diagram shows the alternating magnetic field around the Primary Coil; notice how it spreads over the Secondary Coil; also note, the magnetic field will more readily alternate, following the changing input p.d, if the core is made from iron because iron is a "soft" magnetic material.
According to the principle of Induction, if a conductor (such as a Coil of wire) is in the vicinity of a changing/alternating magnetic field then an Induced p.d will appear across the Coil.
This is precisely what happens in the transformer; the Secondary Coil is a conductor in the vicinity of an alternating magnetic field (spreading from the Primary Coil), so a p.d will be Induced across the Secondary Coil.
We also know that the size of the Induced p.d depends on the number of turns of the coil, so we can see that, generally, a Secondary Coil with a larger number of turns than the Primary will result in a larger Induced p.d (step-up) whilst a Secondary Coil with a smaller number of turns will result in a smaller Induced p.d (step-down). This is what we find and is in accordance with the transformer equation.
Transformers and the National Grid
The National Grid is the system by which we all, in the UK, get our electricity.
It consists of:
hundreds of power stations (of various types),
thousands of miles of cable,
thousands of pylons to hold the cables
plus...something else which we will discover shortly.
The following diagram is a VERY simple representation of one small piece of the National Grid system. It shows a power station, some pylons, copper wire and at the end, a user.
In the real world there would of course be many pylons between a power station and users, but the point to note is the distance between the power station and typical users; it can be significant which means that the resistance of the cables becomes significant.
The role of the resistance
If you recall from the Electricity section, power is dissipated in any resistance according to the equation:
power lost = current² x resistance
P = I² R
So, to limit the amount of power lost or dissipated in the cables (causing them to heat up) we need to keep R as small as possible.
This is why the cables are made from copper, which has a very low resistance.
But there is another factor in the equation, the current! And this factor is squared meaning it has a bigger effect on the power lost.
The role of the current
Power is transmitted from the power station at a certain p.d (or voltage) and a certain current. The power output is given by the simple equation:
P = V x I
Let's say we had a power station that was designed to produce a p.d of about 230 V which is the value users need for their mains appliances to work.
And, let's say that we wanted to put out a power of 200,000 W.
To produce the desired power output at the chosen p.d we would need to use a current of:
I = P⁄V
I = 200,000⁄230
I = 870 A
Hm! That's a very high current and would cause a significant power loss in the cables. We would lose a large amount of the power station's power output heating the cables!
Is there a better way? YES.
Instead of the power station producing 230 V let's say it produced 10000 V.
Now, to deliver the desired power output of 200,000 W it only needs to use a current of 20 A (you can do the calculation, or see the sketch below).
This is a massively reduced current which will reduce the heating effect in the cables tremendously.
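To see the difference in numbers, here is a small Python sketch of the comparison just described; the cable resistance of 0.5 ohms is an invented figure purely for illustration:

```python
def cable_power_loss(power_sent, supply_voltage, cable_resistance):
    current = power_sent / supply_voltage     # I = P / V
    return current ** 2 * cable_resistance    # power lost = I^2 * R

R = 0.5  # assumed cable resistance in ohms (illustrative only)
print(cable_power_loss(200_000, 230, R))     # ~378,000 W lost -- hopeless
print(cable_power_loss(200_000, 10_000, R))  # 200 W lost -- a tiny fraction
```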
The only issue is - how do we get a power station that was designed to produce a p.d of 230 V to now produce a p.d of 10000 V? And what does the consumer do at the user end? Apart from the fact that a 10000 V supply would be VERY dangerous, all of the users' appliances are designed to work with 230 V.
The role of the transformer
The transformer saves the day !
At the power station end, a Step-Up transformer is used to increase the p.d from 230 V to 10000 V and at the user end, a Step-Down transformer is used to decrease the p.d from 10000 V back to 230 V.
So now you know the missing "thing" from the list of parts that make up the National Grid. Transformers are absolutely vital for the efficient transfer of electricity around the country.
You also know now why mains appliances use Alternating Current, A.C rather than the Direct Current, D.C that is supplied by batteries and used by such things as mobile phones.
It is because Transformers only work with A.C.
That shows you how vital Transformers are; the whole world chose to use A.C mains current purely because it was the type that allowed Transformers to be used.
Now it's time for you to have a go at a few questions.
Tell Me A 2 by 1 Digit Multiplication Story (Lesson 1 of 2)
Lesson 2 of 3
Objective: SWBAT create word problems representing equal groups in multiplication and justify their answer.
I like to vary the start of my math block to keep kids thinking about numbers and relationships, especially related to multiplication. Yesterday we did work with 2-by-1 digit multiplication, and today we will apply what we know to word problems. The common core expects students to be able to use multiplication to solve a word problem involving equal groups, arrays and drawings so that is what this portion of the lesson is aimed at addressing. I will use this lesson for 2 days so that students have additional practice reinforcing these skills.
I have a word problem written on the white board and have students come join me on the carpet. I ask students to think about the ways they can solve this problem, and then I ask a student to come teach their class about the way they would solve it.
Watch the video here.
The common core requires deeper understanding of multiplication as a total number of objects in groups and for students to be able to communicate that understanding, so I will be guiding students through this process using the creating of word problems and presentations of their work at the end.
Who can tell me what the factors in multiplication represent? What is another way we can think about multiplication to make it easier to understand and solve?
One thing that I love about multiplication is that I can tell stories and create word problems with my numbers. I always have to be careful of 2 very important things when I tell my multiplication story. The 1st thing is that I must have groups of things, and then the next thing is that I must have a number in each group.
I have 2 numbers on the board today, 27 and 3. I’m going to tell you a story about these numbers so I need your listening ears open and your math thinking caps on.
As I’m telling this story to students, I’m also writing it on the board. Mr. Andrews has 3 fish tanks in his house. He needs to clean each tank out, and he has 27 fish in each tank. How many fish will have their tanks cleaned?
Here I ask students what kind of tools we have that will help us solve this problem. I expect students to tell me that they can repeatedly add 27, three times; that they can draw the 3 tanks and draw 27 fish in each one; or that they can multiply the numbers using the traditional algorithm. I record all 3 ways on the chart so students are able to refer back to the strategies when they complete their group work.
Each table has a 2 digit and 1 digit number turned over, markers, scratch paper and a larger poster piece of paper to show their completed work. It is important with the common core that students are able to explain themselves and the meaning of their problem, so this activity will give them practice in this skill.
I think that you’re ready to construct and build your own word problems, solutions and examples for the rest of us to see. When you return to your seats, please work in your groups to create a problem to match your numbers. All of you must agree on the problem and the solution, and you can show your work in any way that your group members used to solve it.
Students are dismissed back to their tables to get started. I walk around the room and question groups about their work.
Key questions: Why does this problem represent a multiplication problem? How are you going to be able to prove your answer? What type of model, picture or number sentence can you draw to make solving this problem easier? Can you show me another way to solve this?
The common core emphasizes a lot of problem solving, making and defending arguments, modeling and the need for students to communicate precisely with others. Providing students with the opportunities to share their work and question one another, and for you to ask clarifying questions, is a great way to address these things.
I pull sticks to call on groups to come and share their posters. I have them read the problem, defend why it is a multiplication problem, show us and explain how they solved the problem and why they chose to represent it in the way that they did (ie: drawing out the groups, setting it up with numbers only or using an array). Watch a group presenting their work.
I will display student work in the hallway after this assignment so that students can see their hard work displayed for others.
Number base conversion is a method to convert the number from one numeric system (radix) into another (for example, from binary into decimal).
Conversion of whole numbers is often part of the standard libraries. For instance, in Java, Integer.toString(int value, int base) accepts the base as the second parameter, with Character.MAX_RADIX holding the maximal radix value this function will handle. However, it is also possible to convert the decimal part as well.
When there is no decimal part, the mathematical value v of a number written in a base-K system can be found using the formula
v = d_(n-1)·K^(n-1) + … + d_1·K^1 + d_0·K^0,
where d_i is the value of the i-th digit, counting from right to left and starting from 0, and n is the total number of digits to convert (assuming no decimal part). For instance, to convert 123 in the usual decimal system, we get
1·10^2 + 2·10^1 + 3·10^0 = 100 + 20 + 3 = 123.
The same digits read in the octal system would be
1·8^2 + 2·8^1 + 3·8^0 = 64 + 16 + 3 = 83 (in decimal).
Interestingly, the weight of the rightmost digit is always one (K^0 = 1), as it does not depend on the value of K.
Now, what if we extend this formula to allow negative values of i? For instance, 123.456 in decimal would be
1·10^2 + 2·10^1 + 3·10^0 + 4·10^-1 + 5·10^-2 + 6·10^-3.
But then - eureka! - the generic formula works for the decimal part as well and can be written as
v = d_(n-1)·K^(n-1) + … + d_0·K^0 + d_(-1)·K^(-1) + … + d_(-m)·K^(-m),
where now m is the number of digits after the decimal comma to convert. Does this work with, say, the binary system? Convert 1.5 = 1.1 in binary:
1·2^0 + 1·2^-1 = 1 + 0.5 = 1.5.
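A small Python sketch of this positional formula (added here for illustration, not from the original text; digits are assumed to be the characters 0-9, so bases above 10 are deliberately ignored):

```python
def digits_to_value(digit_string, base):
    # Evaluate the sum of d_i * base**i, with i negative for fractional digits.
    int_part, _, frac_part = digit_string.partition(".")
    value = 0.0
    for power, d in enumerate(reversed(int_part)):
        value += int(d) * base ** power   # d_0*K^0, d_1*K^1, ...
    for power, d in enumerate(frac_part, start=1):
        value += int(d) * base ** -power  # d_-1*K^-1, d_-2*K^-2, ...
    return value

print(digits_to_value("123", 8))       # 83.0
print(digits_to_value("1.1", 2))       # 1.5
print(digits_to_value("123.456", 10))  # ~123.456 (up to float rounding)
```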
There are some interesting discussions about whether it is theoretically possible to implement number base conversion as a stream operation, starting to print the first digits of the result before the conversion is finished and without storing all digits of the converted value. This may only be useful when converting between extremely large numbers.
The radix K need not be an integer. It can also be an arbitrary floating point number, even an irrational one. For instance, in the Golden Ratio Base system K = φ ≈ 1.618. Theoretically it can also be a complex number.
Algorithms to find the digits from the value mostly rely on the arithmetic mod operation (% in C and Java) that produces the remainder. It immediately gives the first digit (counting from the right) of the integer value for any given integer base. For instance, for the same value 123, 123 % 10 = 123 - 12*10 = 3 for the decimal system, or 123 % 2 = 123 - 2*61 = 1 for the binary system.
To find more digits, the value must be divided by the radix, rounding the result down. The mod operation between the obtained integer and the radix gives us the second digit (counting from right to left). For instance, the following steps return the second digit of 123 in the decimal system (K = 10); the quotient is the value that will be needed in the next iteration to get the remaining digits:
123 / 10 = 12 (rounded down)
12 % 10 = 2
Applying these steps further we will see that all remaining digits to the left will be zero: surely, 0123 = 00123 and so on. Developers most often need to work with integer numbers, and the mod operation for floats is frequently not even supported. Anyway, we can also logically extend the algorithm towards the fractional part, this time multiplying by K rather than dividing. For instance, the step to find the first decimal digit of 123.456 would be
0.456 * 10 = 4.56, whose integer part, 4, is the first digit after the point.
Seems to work. And the next digit:
0.56 * 10 = 5.6, giving 5.
Indeed, it seems to work.
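The whole procedure, mod/divide for the integer part and multiply for the fractional part, can be sketched in a few lines of Python (again an illustration only, with arbitrary function and parameter names; negative values are not handled):

```python
def value_to_digits(value, base, frac_places=5):
    int_part, frac_part = int(value), value - int(value)

    int_digits = []
    while True:
        int_digits.append(int_part % base)  # remainder = next digit from the right
        int_part //= base                   # divide by the radix, rounding down
        if int_part == 0:
            break
    int_digits.reverse()

    frac_digits = []
    for _ in range(frac_places):
        frac_part *= base                   # shift the next digit left of the point
        digit = int(frac_part)
        frac_digits.append(digit)
        frac_part -= digit
    return int_digits, frac_digits

print(value_to_digits(123, 10, 0))  # ([1, 2, 3], [])
print(value_to_digits(123, 2, 0))   # ([1, 1, 1, 1, 0, 1, 1], [])
print(value_to_digits(1.5, 2, 1))   # ([1], [1])
```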
One of the typical applications of these algorithms (other than writing system libraries to deal with usually built-in conversions between decimal and binary representations) is to write code for expressing time or geographical coordinate values in degrees, minutes and seconds. Very similar code is also needed to support various currency systems.
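For instance, converting a decimal coordinate into degrees, minutes and seconds is exactly the fractional-part algorithm with base 60, applied twice; a brief sketch (the sample coordinate is arbitrary):

```python
def to_dms(decimal_degrees):
    degrees = int(decimal_degrees)
    remainder = (decimal_degrees - degrees) * 60   # first base-60 "digit"
    minutes = int(remainder)
    seconds = (remainder - minutes) * 60           # second base-60 "digit"
    return degrees, minutes, round(seconds, 2)

print(to_dms(48.8583))  # (48, 51, 29.88)
```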
Balanced Scales Enrichment 13.7
In this standard weights learning exercise, students write a weight from the box on each pan of the scales so that the weights on each scale are equal and all the 8 scales are balanced. The equal weights will have ounces and pounds.
Lightest and Heaviest—Sorting Algorithms
How do computers sort data lists? Using eight unknown weights and a balance scale, groups determine the order of the weights from lightest to heaviest. A second instructional activity provides the groups with other methods to order the...
3rd - 12th Math CCSS: Adaptable
Finding Side Length (Given Perimeter)
Perimeter is the total distance around an object. One can find the perimeter of a rectangle or square by adding all four side lengths together. These worksheets require third graders to find the missing side length, given the perimeter...
3rd Math CCSS: Designed
Collecting and Working with Data
Add to your collection of math resources with this extensive series of data analysis worksheets. Whether you're teaching how to use frequency tables and tally charts to collect and organize data, or introducing young mathematicians to pie...
3rd - 6th Math CCSS: Adaptable
With hundreds of online lessons and activities to choose from, it can be tricky for educators and parents to know where to start when working with students who are learning from home. Here, we’re keeping it simple with this shortlist of lessons and activities that work particularly well for remote learning.
These lessons and curricula can be done completely or nearly completely as part of distance learning, virtual learning, or home schooling. You can find even more lessons plans within the rest of our resource collections. In the topical collections (ocean & coasts, weather & atmosphere, climate, marine life, and freshwater), be sure to look under the "Lesson plans & activities" header within each page. This can be found on the right side on desktop and at the bottom on mobile.
This collection uses real-time environmental data in self-directed student activities exploring the natural world. Students learn about carbon cycling, ocean acidification, and other phenomena related to climate change. These modules are designed with a three-dimensional approach to teaching in mind and use a data literacy framework. Most lessons within this curriculum can be done with an internet connection and printed worksheets. A few require low-cost household materials and a few require more complicated materials.
Data in the Classroom
Data in the Classroom has structured, student-directed lesson plans that use historical and real-time NOAA data. The five modules address research questions and include stepped levels of engagement with complex inquiry investigations with real-time and past data.
Earth science days
From studying the oceans to solar flares, NOAA has researchers in a wide range of scientific topics. Put on your scientist hat and explore the activities paired with videos from NOAA experts. Learn what it is like to be a NOAA scientist! These activities were designed specifically to be done either through virtual education or at home.
Estuaries are unique habitats where a river meets the ocean or another large body of water, like a Great Lake. This collection of lessons includes many that require minimal materials. Other lessons can be done outdoors. This collection also includes videos and animations, ways to virtually connect with estuaries, and resources from partner groups.
The following lessons only require internet access, printable worksheets, and writing materials: Amazing adaptations, Where rivers meet the sea, Don't shut your mouth, Mapping mangroves, Estuary food pyramid, Hooray for horseshoe crabs, Score one for the estuary, Survival in an estuary, Salinity and tides in York River, Human impacts on an estuary ecosystem, The jubilee phenomenon.
Hurricane Resilience curriculum offsite link
Hurricane Resilience is a high school environmental science curriculum for use in coastal locations where hurricanes are common. Through 20 days of instruction, students make connections between the science of hurricanes, how they affect their community and region, and how we can plan for a more resilient future. The curriculum unit aims to empower high school students to have a voice in resilience planning and understand the relationship between the science of hurricanes and the local impacts these storms have on people and places. Most lessons within this curriculum can be done with an internet connection, printed worksheets, and some low-cost household materials.
Marine Debris Tracker mobile app
The Marine Debris Tracker app is a citizen science project where users monitor and report trash found along waterways (including freshwater!) and coastlines. They can then become involved with local and global data collection.
National Marine Sanctuaries virtual dives and lessons
Immerse yourself in the ocean and your national marine sanctuaries without getting wet! These virtual reality voyages use 360-degree images to highlight the amazing habitats, animals, and cultural resources you can find in each national marine sanctuary. You can explore on a desktop browser, on a mobile device, or with a virtual reality device. Five NGSS-aligned lessons will further help students learn about America's underwater treasures. Most lessons can be completed by viewing the videos and completing accompanying tasks with minimal materials.
Sea level rise learning module
This learning module focuses on sea level rise, its causes, and impacts; and challenges students to think about what they can do in response. It features an integrated package of NGSS-aligned grade level-appropriate (6-12) instructions and activities centered on a 23-minute video presentation and an exploration of real time national water level data. The video has scheduled pauses so educators may facilitate discussions of presented topics.
SOS Explorer™ (SOSx) mobile app and learning modules
The revolutionary mobile app takes Science On a Sphere® (SOS) datasets, usually only seen on a 6-foot sphere in large museum spaces, and brings them into the palm of your hand. The visualizations show information provided by satellites, ground observations and computer models. Some of the datasets can be paired with the NGSS-aligned SOS phenomenon-based learning modules, which can inspire your students to dig even deeper into these ideas.
The Hubble Space Telescope has been visited by astronauts during five space shuttle servicing missions between 1993 and 2009. Here, two astronauts work during the first servicing mission, when the telescope's flawed mirror was corrected and a new camera was installed.
When it comes to telescopes, size matters.
In order to continue learning new things, astronomers are building better and larger observatories to gaze at the cosmos both from Earth and from orbit. Engineers have started developing the technologies needed to construct the next generation of space telescopes, but there's one problem.
In terms of both size and weight, the telescopes that astronomers and engineers are planning for the near future will outgrow the capabilities of the rockets that exist today. That is because the abilities of a telescope depend mostly on its aperture, or the diameter of its primary mirror. New "megarockets" like NASA's Space Launch System might be large enough to carry the next-generation space telescopes that NASA aims to launch in the 2030s, but if subsequent missions need to squeeze into the same-size rocket fairing, those missions may have to sacrifice some scientific potential. [Giant Space Telescopes of the Future (Infographic)]
Rather than constraining a telescope's design to fit in the payload fairing of the biggest available rocket, thereby placing a limit on the quantity of science its instruments can return, NASA scientists are looking for new ways to get those heavy space telescopes into orbit: by launching them piece by piece and assembling them in space, either robotically or with the help of astronauts.
"Large telescopes offer you better angular resolution and greater spectral resolution, so the future should be bringing bigger telescopes," Nick Siegler, the chief technologist for NASA's Exoplanet Exploration Program, said during a presentation at the 233rd meeting of the American Astronomical Society in Seattle in January. That higher resolution enables telescopes to see more of the universe, looking farther and more clearly than ever before. It will also be particularly useful for finding and characterizing planets around other stars.
"Obviously, 'big' is relative, but the challenge moving forward is the same," Siegler said. "You've got large structures that you're attempting to fold into smaller structures, and the amount of work that goes into that is actually quite enormous." For instance, NASA's James Webb Space Telescope (JWST), now scheduled to launch on an Ariane 5 heavy-lift rocket in 2021, will fold up to fit inside the rocket's payload fairing. When the telescope is ready to deploy, more than 200 moving parts need to carefully unfold before the instrument can get to work observing the skies.
JWST will be the largest space telescope ever launched, with its 6.5-meter (21.3 feet) mirror. The Ariane 5 that will launch JWST is a heavy-lift rocket typically used to launch satellites into Earth orbit, but such rockets have also been used to launch interplanetary missions like the European Space Agency's BepiColombo mission to Mercury, which launched last October. Although JWST has not yet launched, NASA scientists are already working on proposals for its successors. (Spoiler alert: they are even bigger than JWST!)
NASA engineers working on the blueprints for proposed space observatories like the Large UV Optical Infrared Surveyor (LUVOIR) and the Origins Space Telescope (OST) have had to deal with the limitations of today's rockets. For each of these two telescopes, the engineers came up with two different design options: a 15-meter (50 feet) version that could launch on NASA's upcoming Space Launch System (SLS) and an 8-meter (26 feet) version that could launch on today's smaller and less powerful heavy-lift rockets. Those smaller versions are NASA's backup plans in case the SLS won't be ready in time; the megarocket has faced extensive delays and cost overruns.
Astronauts vs. robots
Rather than waiting for somebody to construct a rocket big enough to support the kinds of space telescopes that scientists hope to launch in the future, a group of NASA researchers is studying the options for in-space assembly. That approach wouldn't only eliminate the barriers connected with rocket size, but could also decrease the expense of developing and launching new space telescopes, according to a description of the "in-Space Assembled Telescope" (iSAT) study.
Figuring out just how to construct a telescope in space is merely the beginning. NASA will need to show that the procedure is not merely possible, but also not overly risky, to make in-space telescope assembly a reality. Those factors largely depend on whether the assembly is going to be carried out by astronauts, robots or some mix of the two, members of the iSAT team explained at the AAS meeting.
Sending astronauts to work on a space telescope is not a new notion; NASA's legendary Hubble Space Telescope, which launched in 1990, was serviced by astronauts five times between 1993 and 2009. Even though the astronauts did not originally build Hubble, they did install some new equipment and carry out several significant repairs. Astronauts have not visited any space telescopes since the last Hubble servicing mission.
While the space shuttles that flew the Hubble servicing missions have been retired since 2011, NASA could send astronauts from the Lunar Orbital Platform-Gateway. That proposed lunar space station could serve as a stepping stone for future missions to Mars.
However, some researchers, including Siegler, believe that robots are better suited to building things in space. "Astronauts are costly," he said. "We think we could do this entirely robotically." A robotic system for in-space telescope assembly would work a good deal like the robotic arms on the International Space Station, he explained.
This summer, the iSAT group aims to publish the findings of its study on the different options for in-space assembly.
Prompting is an instructional strategy to guide a learner’s behavior. It can help a student learn a new skill or engage in a desired goal behavior.
Prompting examples include the teacher making a verbal comment, displaying a gesture, modeling, or using a visual prompt such as pointing to a photograph or video.
Sometimes a child may need a stronger form of assistance; other types of prompts include a partial physical or full physical prompt:
- A partial physical prompt involves the teacher using their hands to adjust or guide the student’s behavior in the right direction.
- A full physical prompt involves the teacher using their hands to completely control the movements of the student through the entire sequence of the target behavior.
Prompting is often used in Applied Behavior Analysis (ABA) to assist children with learning difficulties in acquiring target behavior.
The below examples of prompting are split into three types of prompting:
- Verbal Prompts: These include all the ways a teacher, guide, or parent stimulates thinking through verbal communication.
- Nonverbal Prompts: These include nonverbal prompts that often involve the educator using their hands or body language as a guide for learners.
- Model Prompts: This involves using ideal models as something for the learner to look at and aspire toward, such as having a model picture as a stimulus when painting your own picture.
Verbal Prompt Examples
- Verbal instructions: This involves giving clear, structured, step-by-step verbal instructions that will help to guide the student through the task. For example, if teaching someone to make a sandwich, the instructor might say “First, take two slices of bread and place them on a plate.” (see also: verbal communication).
- Verbal cues: Verbal cues are less structured than verbal instructions. They are used as just-in-time instructional scaffolding to move the student in the right direction. For example, if coaching someone on how to perform a certain dance move, the coach might say “Don’t forget to step to the right before you turn.”
- Verbal praise: Praise is a form of prompting because it lets people know they are on the right track and encourages them to keep going. Positive feedback encourages and reinforces desired behaviors. For example, if a child puts away their toys, the parent might say “Great job! You put your toys away all by yourself! Let’s go get some ice cream!”
- Open-Ended Questioning: Open-ended questions are some of the most effective prompts. They require people to answer questions with full sentences rather than just “yes” or “no”. This helps them to verbalize their thoughts and get through mental blocks.
- Higher-Order Questioning: These are guiding questions that try to direct students’ attention to higher-level understanding, such as pointing them toward analytical questions, evaluations, and critical thinking.
- Verbal redirection: Using verbal cues to redirect the individual’s attention or behavior to a more appropriate activity or task. For example, if a child is playing with their toys instead of doing their homework, the parent might say “Remember, it’s time to do your homework now.”
- Verbal modeling: Providing verbal descriptions of how to perform a task or activity while demonstrating it at the same time. For example, if teaching someone how to do a certain yoga pose, the instructor might say “Extend your arms out in front of you, and then slowly lower your torso towards the ground.”
Nonverbal Prompting Examples
- Pointing: Pointing is literally what it sounds like. It involves directing the student’s attention towards something by pointing at it. This can help direct their attention and get them through a cognitive block. For example, if a student chef is trying to think about which ingredient to include, the master chef can point in the direction of the spices to get them thinking about which spice should be included.
- Hand-over-hand guidance: Often used with young children performing fine motor skills or writing tasks, this involves physically guiding the individual’s hands through the steps of a task or activity. For example, if teaching someone how to use scissors, the instructor might place their hands over the individual’s hands and guide them through the cutting motion.
- Facial expressions: Even facial expressions can be a prompt. A teacher might raise one eyebrow to show the student just made a mistake, or give a wink and a smile to silently let them know they’re on the right track and should keep going.
- Physical redirection: This involves the use of physical cues to redirect people’s attention away from something that’s distracting (or in the wrong direction) and toward a more productive line of inquiry. For example, if a student is struggling with comprehending a reading task, the teacher might point away from the text and toward the pictures to see if that helps them comprehend better.
- Visual Prompts: Using visual aids such as pictures, symbols, or written instructions to prompt the individual. For example, if teaching a child to follow a routine, the adult may use a picture schedule to visually guide the child through the steps of the routine.
- Written cues: Written cues can be strategically placed around the place for students to look to when they’re stuck. For example, if coaching someone on how to perform a certain dance move, the coach might provide written cues on the floor to remind the individual of the specific steps involved.
Model Prompting Examples
- Imitation: Teachers can demonstrate a task, then ask for the students to imitate that task. For example, if teaching a child how to say “please,” the adult might say “please” and encourage the child to repeat it.
- Modeled Instruction: A more complete version of model prompting, this involves the teacher moving from demonstrating a task, to doing the task with the class as a group, then having the students do the task individually. The initial model is the key prompt, but throughout the subsequent steps, the teacher continues to use prompting strategies such as verbal cues.
- Role-playing: This involves demonstrating to the learner how to handle a specific situation or scenario by putting yourself in the shoes of the characters in the scenario and acting it out. For example, if teaching someone how to handle conflict in a workplace, the instructor might role-play a scenario where two colleagues have a disagreement.
- Chaining: This involves breaking down a complex task or activity into smaller steps or ‘chains’ in a process. Then, you model each step in sequence. For example, if teaching someone how to tie their shoes, the instructor might first demonstrate how to make the loop, then how to tie the knot, and so on (see also: chunking).
- Spaced interval schedules: Across spaced intervals (either fixed or randomized), the teacher asks the student a question or poses a challenge that addresses a key piece of knowledge. This return to a challenge over and over again acts as a prompt that helps to ensure the information isn’t forgotten over time.
Improve your Prompts with Prompting Hierarchies
The term ‘prompting hierarchy’ refers to the process of using a stage-based approach to applying prompts based upon students’ learning and development.
It may involve a series of prompts or cues provided in a specific sequence, such as in the guided practice model, or separating reinforcements across time using spaced repetition (eg: fixed interval schedules).
Generally, the educator starts with the most intrusive or direct prompts and gradually reduces the level of assistance until the individual is able to complete the task independently. This is based on the sociocultural learning theory’s model of scaffolding (see: scaffolding examples)
Some examples of prompting hierarchies include:
- Prompt Fading: This involves gradually reducing the level of assistance until the student is able to complete the task independently. For example, a parent might start with one-to-one work with the student for each question on a test. Then, the next time, they check-in on one in every five questions, then finally, they let the student take the test alone and grade it at the end.
- Most-to-Least Prompts: This version of prompt fading involves starting with the most intrusive or direct prompt and gradually reducing the level of assistance until the student is able to complete the task independently. For example, when preparing for school, a parent starts out with getting their child to go through a checklist of preparation tasks each day (get dressed, brush teeth, etc.). Then, over time, they fade to just asking the key tasks, then finally, the child does all tasks without prompting.
- Least-to-Most Prompts: This involves starting with the least intrusive prompt and gradually increasing the level of assistance if required. This model gives the most freedom and leeway to the learner to encourage their own creativity, but provides additional differentiated interventions for those students who need them.
- Time Delay Prompts: This involves introducing a delay between the initial request or instruction and the prompt to allow the individual time to think through the task without interference. For example, if asking a child how to say “Hello” in Spanish, the adult may wait a few seconds to let the child think, before providing a prompt to help them.
Case Studies of Prompting in Education
1. Visual Prompts for Autistic Individuals
For those that suffer from autism, the world can be overwhelming because of the number of stimuli present at any one time. For this reason, it is helpful for teachers and parents to reduce stimulus overload and simplify tasks as much as possible.
This is one reason that using visual prompts are so effective. A picture that is simple reduces the number of distracting elements and allows for the autistic individual to more easily process the key concept.
Similarly, it is also helpful to break a routine such as getting dressed, or eating a meal, down to its most basic steps.
As shown in this video, a child uses a specially designed apparatus to help them communicate their needs during a meal. The child can simply press on the picture card that indicates what they want, and the machine will play an audio that matches that picture card.
The video also shows several other types of visual prompts that help autistic individuals make sense of the world around them and function more easily.
2. Visual Prompts for Remote Instruction
Remote instruction is becoming increasingly popular. As technology makes online platforms more interactive and user-friendly, this form of learning will continue to grow.
Even though remote learning is very convenient, there are a number of serious obstacles. Putting the technical aspects aside, the biggest challenge is keeping students on-task.
When students are at home there are just too many distractions that compete with the teacher’s efforts. A lot of times, the teacher is going to lose that contest.
However, there are some things teachers can do to help maintain student focus.
For example, this online teacher has created several visual prompts that will help get students back on-task.
These prompts are simple to create, and silly enough to capture the interest of younger students.
Prompting is a technique in ABA designed to help children acquire a target behavior. It is often used with children that have learning difficulties, but the technique also works with other learner profiles.
A prompt is a stimulus or action that steers the student’s behavior towards the target.
Prompts can be verbal, gestural, or involve making gentle physical contact with the child.
Teachers rely on a prompt hierarchy to identify the level of prompt that is appropriate for a particular child. At the top of the hierarchy are prompts that are the least directive, such as making a verbal comment or a slight gesture.
If necessary, a teacher will apply a prompt further down the hierarchy that provides greater assistance. This can involve gently nudging their arm or hand, or a hand-over-hand maneuver that positions the child’s hand where it needs to be to complete the task.
As the child’s behavior progresses, the teacher should implement fading so that they provide less and less assistance. Eventually, the child will be able to perform the target behavior completely independently.
Scientists working with the Laser Interferometer Gravitational-Wave Observatory (LIGO) have announced their first landmark discovery. LIGO was built to detect gravitational waves (as predicted by Einstein’s general relativity), but this discovery is actually about not detecting gravitational waves. Hold on, what’s all the fuss about then? This sounds like a null result, and in some ways it is. But on the other hand it may be one of the most important neutron star observations ever. So what has LIGO (not) seen?
The Crab Nebula has a rapidly spinning neutron star called the Crab Pulsar hiding inside. The nebula can be found 6,500 light years away in the constellation Taurus, formed after a large supernova observed in the year 1054 AD. Astronomers discovered the left-over neutron star in 1969 and have been observing the young, 900 year old pulsar ever since. However, little is known about the shape or structure of this 10 km-diameter massive stellar object. All we can see are the pulses of gamma- and X-ray radiation beaming into space, flashing at a rate of 30 pulses per second.
However, Einstein’s general relativity predicts that any massive body disturbing space-time will generate gravitational waves which propagate throughout the Universe. LIGO was built with this in mind, by using highly accurate laser interferometry to detect the passage of gravitational waves through space-time surrounding Earth. It is predicted that the direction of gravitational wave propagation may also be derived, heralding the beginning of gravitational wave astronomy. But LIGO will take a long time to collect the data, and it will need a lot more “exposure time” before we can say for certain that we can detect gravitational waves.
In today’s announcement, it is reported that the LIGO facility had monitored the neutron star from November 2005 to August 2006 by using all three LIGO interferometers. This created a very sensitive detector. By comparing Crab Pulsar rotation rate data from the Jodrell Bank Observatory with the LIGO data for this period, they watched for a synchronous gravitational-wave signal.
No signal was observed.
So what does this mean? Does it mean that gravitational waves do not exist after all? No. Although gravitational waves are still in the realms of “theory”, there is strong evidence to suggest they are out there, and that they can be observed. More time is required to collect more data so the LIGO signal-to-noise ratio can be improved.
If gravitational waves are out there, why was there no gravitational wave signal from the rapidly spinning neutron star in the Crab Nebula? This is where it gets really interesting. This is by far the most significant result to come from LIGO: it means that the neutron star is smooth. If the neutron star had any surface features, as the body rotated it would cause ripples in space-time. It is smooth, so no ripples can be generated.
A good analogy is to imagine a spinning buoyant sphere (like a volleyball) on the surface of a swimming pool – there will be very little disturbance on the water’s surface. Now spin a rugby ball on the surface – ripples will be swept out by the long ends of the ball. This analogy is also important when we consider the loss of energy in the spinning ball. Spinning the volleyball will cause minimum loss in energy (as there is little drag, and little wave production), spinning the rugby ball will cause a huge loss in energy (lots of drag, lots of waves). The volleyball will spin for much longer than the rugby ball.
This is another important observation: the LIGO group observes a slow-down in rotation rate, but less than 4% of the energy loss can be attributed to gravitational wave production. Therefore the neutron star in the Crab Nebula has a smooth surface with very little variation in surface topography causing drag. After all, LIGO should be able to observe gravitational waves produced by a surface deformation only a few meters high.
“The physics world has been waiting eagerly for scientific results from LIGO. It is exciting that we now know something concrete about how nearly spherical a neutron star must be, and we have definite limits on the strength of its internal magnetic field.” – Nobel Prize-winning radio astronomer and professor at Princeton University, Joseph Taylor.
- Lumpy Neutron Stars can Generate Gravitational Waves (Universe Today)
- When Stars Collide: LIGO and Gravitational Wave Astronomy (Astroengine)
Radioactive decay is the set of various processes by which unstable atomic nuclei (nuclides) emit subatomic particles. Decay is said to occur in the parent nucleus and produce a daughter nucleus.
The trefoil symbol is used to indicate radioactive material. The Unicode
encoding of this symbol is U+2622 (☢).
The SI unit for measuring radioactive decay is the becquerel (Bq). If a quantity of radioactive material produces one decay event per second, it has an activity of one Bq. Since any reasonably-sized sample of radioactive material contains very many atoms, a becquerel is a tiny level of activity; numbers on the order of gigabecquerels are seen more commonly.
The neutrons and protons that constitute nuclei, as well as other particles that may approach them, are governed by several interactions. The strong nuclear force, not observed at the familiar macroscopic scale, is the most powerful force over subatomic distances. The electrostatic force is also significant. Of lesser importance are the weak nuclear force and the gravitational force.
The interplay of these forces is very complex. Some configurations of the particles in a nucleus have the property that, should they shift ever so slightly, the particles could fall into a lower-energy arrangement. One might draw an analogy with a tower of sand: while friction between the sand grains can support the tower's weight, a disturbance will unleash the force of gravity and the tower will collapse.
Such a collapse (a decay event) requires a certain activation energy. In the case of the tower of sand, this energy must come from outside the system, in the form of a gentle prod or swift kick. In the case of an atomic nucleus, it is already present. Quantum-mechanical particles are never at rest; they are in continuous random motion. Thus, if its constituent particles move in concert, the nucleus can spontaneously destabilize. The resulting transformation changes the structure of the nucleus; thus it is a nuclear reaction, in contrast to chemical reactions, which concern interactions of electrons with nuclei.
(Some nuclear reactions do involve external sources of energy, in the form of "collisions" with outside particles. However, these are not considered decay.)
As discussed above, the decay of an unstable nucleus (radionuclide) is entirely random and it is impossible to predict when a particular atom will decay. However, it is equally likely to decay at any time. Therefore, given a sample of a particular radioisotope, the number of decay events expected to occur in a small interval of time dt is proportional to the number of atoms present. If N is the number of atoms, the following first-order differential equation can be written:
dN/dt = -λN
Particular radionuclides decay at different rates, each having its own decay constant (λ). The negative sign indicates that N decreases with each decay event. The solution to this equation is the following function:
N(t) = N0 e^(-λt)
This function represents exponential decay. It is only an approximate solution, for two reasons. Firstly, the exponential function is continuous, but the physical quantity N can only take positive integer values. Secondly, because it describes a random process, it is only statistically true. However, in most common cases, N is a very large number and the function is a good approximation.
In addition to the decay constant, radioactive decay is sometimes characterized by the mean lifetime. Each atom "lives" for a finite amount of time before it decays, and the mean lifetime is the arithmetic mean of all the atoms' lifetimes. It is represented by the symbol τ, and is related to the decay constant as follows:
τ = 1/λ
A more commonly used parameter is the half-life. Given a sample of a particular radionuclide, the half-life is the time taken for half the radionuclide's atoms to decay. The half-life is related to the decay constant as follows:
t½ = ln(2)/λ ≈ 0.693/λ
This relationship between the half-life and the decay constant shows that highly radioactive substances are quickly spent, while those that radiate weakly endure longer. Half-lives of known radionuclides vary widely, from 10⁹ years for very nearly stable nuclides, to 10⁻⁶ seconds for highly unstable ones.
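A short Python sketch of these relationships (added for illustration, not part of the original article; the half-life value and sample size are arbitrary):

```python
import math

def decay_constant(half_life):
    # lambda = ln(2) / t_half; the mean lifetime is then tau = 1 / lambda.
    return math.log(2) / half_life

def atoms_remaining(n0, lam, t):
    # Exponential decay law: N(t) = N0 * exp(-lambda * t).
    return n0 * math.exp(-lam * t)

lam = decay_constant(10.0)                    # half-life of 10 arbitrary time units
print(atoms_remaining(1_000_000, lam, 10.0))  # ~500,000: half the sample remains
print(atoms_remaining(1_000_000, lam, 30.0))  # ~125,000: one eighth after 3 half-lives
```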
Modes of decay
Radionuclides can undergo a number of different reactions. These are summarized in the following table, in rough order of increasing rarity. For brevity, neutrons, protons and electrons are represented by the symbols n, p+, and e- respectively.
Radioactive decay results in a loss of mass, which is converted to energy (the disintegration energy) according to the formula E = mc². This energy is commonly released as photons (gamma radiation).
Decay chains and multiple modes
Many radionuclides have several different observed modes of decay. Bismuth-212, for example, has three.
The daughter nuclide of a decay event is usually also unstable, sometimes even more unstable than the parent. If this is the case, it will proceed to decay again. A sequence of several decay events, producing in the end a stable nuclide, is a decay chain.
Of the commonly occurring forms of radioactive decay, the only one that changes the number of aggregate protons and neutrons (nucleons) contained in the nuclide is alpha emission, which reduces it by four. Thus, the number of nucleons modulo 4 is preserved across any decay chain.
Occurrence and applications
According to the Big Bang theory, radioactive isotopes of the lightest elements (H, He, and traces of Li) were produced very shortly after the emergence of the universe. However, these isotopes are so highly unstable that virtually none of these original nuclides remain today. With this exception, all unstable nuclides were formed in stars (particularly supernovae).
Radioactive decay has been put to use in the technique of radioisotopic labelling, used to track the passage of a chemical substance through a complex system (such as a living organism). A sample of the substance is synthesized with a high concentration of unstable atoms. The presence of the substance in one or another part of the system is determined by detecting the locations of decay events.
On the premise that radioactive decay is truly random (rather than merely chaotic), it has been used in hardware random-number generators.
NCERT Solutions for Class 9 Maths Chapter 3 Coordinate Geometry are useful for students as it helps them to score well in the class exams. We, in our aim to help students, have devised detailed chapter wise solutions for the students to understand the concepts easily. The NCERT Solutions contain detailed steps explaining all the problems that come under the chapter 3 “Coordinate Geometry” of the Class 9 NCERT Textbook. We have followed the latest Syllabus, while creating the NCERT solutions and they are framed in accordance with the exam pattern of the CBSE Board.
These solutions are designed by subject matter experts who have assembled model questions covering all the exercise questions from the textbook. By solving questions from this NCERT Solutions for Class 9, students will be able to clear all their concepts about “Coordinate Geometry.” Apart from this, other resources used to help students to prepare for the exams and score good marks include the NCERT notes, sample papers, textbooks, previous year papers, exemplar questions and so on.
List of Exercises in class 9 Maths Chapter 3
Exercise 3.1 Solutions 2 Questions (1 Long Answer Question, 1 Main Questions with 2 Sub-questions under it)
Exercise 3.2 Solutions 2 Questions (1 Main Question with 3 Sub-questions, 1 Main question with 8 sub-questions)
Exercise 3.3 Solutions 2 Questions (2 Long Answer Questions)
Access Answers of Maths NCERT class 9 Chapter 3 – Coordinate Geometry
Exercise 3.1 Page: 53
1. How will you describe the position of a table lamp on your study table to another person?
For describing the position of the table lamp on the study table, we take two lines, a perpendicular line and a horizontal line. Considering the table as a plane (the x and y axes), we take the perpendicular line as the Y-axis and the horizontal line as the X-axis. Take one corner of the table as the origin, where the X and Y axes intersect each other. Now, the length of the table is along the Y-axis and the breadth is along the X-axis. From the origin, join the line to the table lamp and mark a point. The distances of the point from both the X and Y axes should be calculated and then written in terms of coordinates.
The distance of the point from X- axis and Y- axis is x and y respectively, so the table lamp will be in (x, y) coordinate.
Here, (x, y) = (15, 25)
2. (Street Plan): A city has two main roads which cross each other at the centre of the city. These two roads are along the North-South direction and East-West direction. All the other streets of the city run parallel to these roads and are 200 m apart. There are 5 streets in each direction. Using 1cm = 200 m, draw a model of the city on your notebook. Represent the roads/streets by single lines.
There are many cross- streets in your model. A particular cross-street is made by two streets, one running in the North – South direction and another in the East – West direction. Each cross street is referred to in the following manner: If the 2nd street running in the North – South direction and 5th in the East – West direction meet at some crossing, then we will call this cross-street (2, 5). Using this convention, find:
(i) how many cross – streets can be referred to as (4, 3).
(ii) how many cross – streets can be referred to as (3, 4).
- Only one street can be referred to as (4,3) (as clear from the figure).
- Only one street can be referred to as (3,4) (as we see from the figure).
Exercise 3.2 Page: 60
1. Write the answer of each of the following questions:
(i) What is the name of horizontal and the vertical lines drawn to determine the position of any point in the Cartesian plane?
(ii) What is the name of each part of the plane formed by these two lines?
(iii) Write the name of the point where these two lines intersect.
(i) The name of horizontal and vertical lines drawn to determine the position of any point in the Cartesian plane is x-axis and y-axis respectively.
(ii) The name of each part of the plane formed by these two lines x-axis and y-axis is quadrants.
(iii) The point where these two lines intersect is called the origin.
2. See Fig.3.14, and write the following:
i. The coordinates of B.
ii. The coordinates of C.
iii. The point identified by the coordinates (–3, –5).
iv. The point identified by the coordinates (2, – 4).
v. The abscissa of the point D.
vi. The ordinate of the point H.
vii. The coordinates of the point L.
viii. The coordinates of the point M.
i. The co-ordinates of B is (−5, 2).
ii. The co-ordinates of C is (5, −5).
iii. The point identified by the coordinates (−3, −5) is E.
iv. The point identified by the coordinates (2, −4) is G.
v. Abscissa means x co-ordinate of point D. So, abscissa of the point D is 6.
vi. Ordinate means y coordinate of point H. So, ordinate of point H is -3.
vii. The co-ordinates of the point L is (0, 5).
viii. The co-ordinates of the point M is (−3, 0).
Exercise 3.3 Page: 65
1. In which quadrant or on which axis do each of the points (– 2, 4), (3, – 1), (– 1, 0), (1, 2) and (– 3, – 5) lie? Verify your answer by locating them on the Cartesian plane.
- (– 2, 4): Second Quadrant (II-Quadrant)
- (3, – 1): Fourth Quadrant (IV-Quadrant)
- (– 1, 0): Negative x-axis
- (1, 2): First Quadrant (I-Quadrant)
- (– 3, – 5): Third Quadrant (III-Quadrant)
2. Plot the points (x, y) given in the following table on the plane, choosing suitable units of distance on the axes.
The points to be plotted on the (x, y) plane are:
i. (-2, 8)
ii. (-1, 7)
iii. (0, -1.25)
iv. (1, 3)
v. (3, -1)
On the graph mark X-axis and Y-axis. Mark the meeting point as O.
Now, Let 1 unit = 1 cm
i. (-2, 8): II Quadrant; the meeting point of the imaginary lines drawn 2 units to the left of the origin O and 8 units above the origin O
ii. (-1, 7): II Quadrant; the meeting point of the imaginary lines drawn 1 unit to the left of the origin O and 7 units above the origin O
iii. (0, -1.25): On the y-axis, 1.25 units below the origin O
iv. (1, 3): I Quadrant; the meeting point of the imaginary lines drawn 1 unit to the right of the origin O and 3 units above the origin O
v. (3, -1): IV Quadrant; the meeting point of the imaginary lines drawn 3 units to the right of the origin O and 1 unit below the origin O
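The same five points can also be plotted programmatically as a quick check. Below is a minimal matplotlib sketch (not part of the NCERT solution; the axis limits are chosen only to frame the points):

```python
# Plot the five points from the table on a Cartesian plane (quick visual check).
import matplotlib.pyplot as plt

points = [(-2, 8), (-1, 7), (0, -1.25), (1, 3), (3, -1)]

fig, ax = plt.subplots()
ax.axhline(0, color="black", linewidth=0.8)   # x-axis
ax.axvline(0, color="black", linewidth=0.8)   # y-axis
for x, y in points:
    ax.plot(x, y, "o")
    ax.annotate(f"({x}, {y})", (x, y), textcoords="offset points", xytext=(5, 5))
ax.set_xlim(-4, 5)
ax.set_ylim(-3, 9)
ax.grid(True)
plt.show()
```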
NCERT Solutions for Class 9 Maths Chapter 3- Coordinate Geometry
Out of the 80 marks assigned for the CBSE Class 9 exams, about 6 marks' worth of questions will be from Coordinate Geometry. Based on earlier trends, you can expect about 2-3 questions from this section in the final exam. These 3 questions are typically assigned 1, 2 and 3 marks respectively, adding up to the 6 marks allotted to Coordinate Geometry.
The main topics covered in this chapter include: 3.1 Introduction 3.2 Cartesian System 3.3 Plotting a Point in the Plane if its Coordinates are Given.
Coordinate geometry is an interesting subject in which you learn about the position of an object in a plane, about coordinates, the concept of the Cartesian plane, and so on. For example: "Imagine a situation where you know only the street number of your friend's house. Would it be easy for you to find her house, or would it be easier if you had both the house number and the street number?" There are many other situations in which, to find a point, you might be required to describe its position with reference to more than one line. You can learn more about this from Chapter 3 of the NCERT textbook, and here we provide solutions to all the questions covering this topic in the NCERT Solutions for Class 9 Maths.
Key Features of NCERT Solutions for Class 9 Maths Chapter 3- Coordinate Geometry
- Help to inculcate the right attitude to studies amongst students
- Make the fundamentals of the chapter very clear to students
- Increase efficiency by solving chapter wise exercise questions
- The questions are answered with detailed explanations
- Students can work through these solutions at their own pace and gain practice
Frequently asked questions
Current as at 13 June 2016
What is coral bleaching?
Corals are animals which live in a symbiotic relationship with microscopic algae called zooxanthellae. The zooxanthellae, which live within the coral tissue, convert sunlight into food, providing corals with up to 90 per cent of their energy needs. Zooxanthellae also give corals much of their colour.
Bleaching occurs when stressful conditions, such as heat, cause this relationship to break down, resulting in the corals expelling their zooxanthellae. This leaves the coral tissue mostly transparent, revealing the coral's bright white skeleton.
This loss of their symbiotic algae means bleached corals are essentially starving.
Mass coral bleaching events have only ever occurred during unusually high sea temperatures.
Why are ocean temperatures unusually warm?
Great Barrier Reef waters have warmed by approximately 0.67 degree Celsius since 1871, with most of the warmest years occurring in the past two decades.
These record-breaking temperatures occurred because of the underlying ocean warming trend caused by climate change, the recent strong El Nino and local weather conditions.
How extensive is the coral bleaching and coral mortality?
The mass bleaching follows a distinct pattern whereby the severity declines from north to south.
The most severe bleaching continues to be in the northern half of the Great Barrier Reef Marine Park. Bleaching is less severe towards the southern part of the Reef.
Overall, 22 per cent of coral on the Great Barrier Reef has died from this current mass bleaching event — about 85 per cent of that die-off has occurred between the tip of Cape York and just north of Lizard Island, 250 kilometres north of Cairns.
The level of bleaching and mortality varies within different sections of the Marine Park.
Accompanying maps (not reproduced here) summarise bleaching and mortality levels in each of the four sectors of the Marine Park, and average mortality levels on surveyed reefs.
Are all bleached corals dead?
Bleached white corals are not dead. These corals are under stress from unusually hot conditions.
If the heat stress does not persist for too long, the corals can recover and regain their colour. If the heat stress persists for a month or more, the stressed corals will eventually die and be covered in algae.
What does it mean if a coral is fluorescent?
Stressed corals may initially display a striking fluorescent hue of pink, yellow or blue — this is also a sign of severe stress and bleaching.
Most corals contain fluorescent proteins that help minimise damage from ultraviolet light. When there is excessive sunlight, the fluorescent proteins can absorb and dissipate some ultraviolet light. Most fluorescent proteins are invisible in daylight, meaning bleached corals may appear completely white rather than fluorescent.
A fluorescent coral can die from heat stress without becoming a totally bleached white coral.
What are the stages of coral bleaching?
It is a common misconception that corals progress through various stages of bleaching, between being healthy, totally bleached or dead.
Corals can skip stages, even going from healthy to dead during a bleaching event without displaying any other signs of bleaching if the heat and light stress is very acute.
Photos are available of various stages of bleaching.
What has changed in the past month?
Some reefs in the far northern management area of the Marine Park were surveyed again. Mortality was found to have increased substantially, particularly on inshore and midshelf reefs.
Sea temperatures have cooled over the past month, but are still well above average for this time of year.
The effect of continued higher than usual sea temperatures during autumn and winter on potential coral recovery or further bleaching and disease is unknown. The situation is being closely monitored to determine its effects.
What is the extent of coral mortality overall?
It is too early to tell, given the bleaching event is still unfolding.
While substantial mortality has been detected on some reefs between Port Douglas and the tip of Cape York, further in-water surveys will be needed to gauge the overall extent of coral die-off.
What is the prognosis for recovery?
At 348,000 square kilometres, the Great Barrier Reef is large and resilient with the ability to recover from major events — as demonstrated by recently released data from the Australian Institute of Marine Science’s Long-term Data Monitoring Program.
The data shows between 2012 and 2015, coral cover improved in the central and southern sectors through post-cyclone coral growth, but declined in the northern sector due to more recent cyclone activity.
Since 2012, coral cover on the Reef increased to almost 20 per cent from a low point of about 17 per cent.
The new results show coral in the southern sector increased from 15 per cent in 2012 to 27 per cent in 2015. This strong coral growth was the main contributor to an increase in the Reef-wide figure.
Recovery from the current bleaching event will largely depend on how long ocean temperatures remain high locally.
If conditions return to normal, the tiny algae that give corals their food and colour can repopulate corals. However, if ocean temperatures are too high for too long, the corals will eventually starve and die.
Given the unprecedented scale and nature of the current bleaching event, it is too early to determine how long it will take for corals to recover from this period of extreme heat stress.
On the most resilient reefs and in ideal circumstances, bleached corals can regain their colour within a period of weeks to months once water temperatures return to normal.
However, corals experiencing chronic poor water quality and/or other stressors are unlikely to recover within these short timeframes and recovery will be impeded.
Even if a coral regains its colour, this does not necessarily mean it is in good health.
Research shows bleaching can deplete the corals' energy resource to the extent that corals do not reproduce for one or two years. Its weakened state means the coral is also more vulnerable to disease.
What does bleaching mean for visitors?
At 348,000 square kilometres, the Reef contains many places where visitors can experience and enjoy the marine environment. In the short to medium term, there will be minimal effect on fish and invertebrates and other diverse reef organisms.
There are also many experiences on offer throughout the Marine Park, from hiking on islands and kayaking through to swimming and boating.
We have general information about the Reef, best practices and experiences on offer.
For detailed advice about tourism and to plan your visit, please see Tourism Queensland's website.
How does this year’s coral bleaching compare to mass bleaching events on the Great Barrier Reef in 1998 and 2002?
Due to its extent and severity, the 2016 bleaching event is worse than the 1998 and 2002 mass bleaching events.
In 1998 and 2002, about 40 to 50 per cent of the Reef experienced some bleaching. Up to five per cent of reefs suffered significant coral mortality in each of these events, as sea temperatures came back down again in time for recovery to occur.
How many surveys were conducted?
Along with the Queensland Parks and Wildlife Service, we conducted 2641 reef health and impact surveys of 186 reefs across the Marine Park since the beginning of summer. Additional in-water surveys were completed by our science partners.
More than 900 reefs in the Marine Park and the Torres Strait were surveyed from the air by the ARC Centre of Excellence for Coral Reef Studies. Researchers found of 911 individual reefs only 7 per cent (68 reefs) were not bleached to some extent. Between 60 and 100 per cent of corals were severely bleached on 316 reefs, nearly all in the northern half of the Reef.
We also continue to receive additional observations from the public and tourism operators through our Eye on the Reef program.
How was bleaching measured?
Aerial surveys quickly provided information about the extent of coral bleaching (that is, how many reefs in an area exhibited any bleaching) and what proportion of the coral on an individual reef was bleached.
These results helped direct in-water survey efforts to determine how severe the bleaching was (including to what depth) and the rate of coral mortality. It also helped ground-truth aerial data.
In-water surveys measured the extent of coral bleaching based on the proportion of bleached coral on an individual reef. They also measured severity by assessing the most common appearance of bleached corals.
This included recording whether corals were bleached only on the upper surface, if corals were pale or fluorescent, or if corals were completely white. It also measured the proportion of corals that had already died from bleaching.
How will you use the data from in-water surveys?
We will use the survey information to compile a comprehensive assessment of the overall impact of the bleaching event, the percentage of bleached reefs in the Marine Park, the percentage of bleached corals on each individual reef, how severe the bleaching was, and the percentage of bleaching-induced coral mortality.
By comparing information over time, we will also know how much bleaching is occurring as our climate changes, which species and locations are most affected and how much of an impact other stressors, such as crown-of-thorns starfish, pollution and disease, are having.
Monitoring over time will also allow us to measure how quickly corals return to a healthy state.
The resulting data will guide management policies and actions designed to protect and assist recovery of corals.
What resources were devoted to conducting the surveys?
The Great Barrier Reef Marine Park Authority and the Queensland Parks and Wildlife Service (QPWS) worked with scientists from the Australian Research Council Centre of Excellence for Coral Reef Studies and the Australian Institute of Marine Science (AIMS) to conduct the surveys.
Resources committed to the task totalled approximately $3.5 million across these organisations.
The University of Queensland will also resurvey sites in the Far Northern Management Area as part of its Catlin Seaview Survey program.
What happens next?
Another round of surveys is scheduled to start in October to assess recovery rates and survivorship.
Out at the boundary of our solar system, pressure runs high. This pressure, the force that plasma, magnetic fields and particles such as ions, cosmic rays and electrons exert on one another when they flow and collide, was recently measured by scientists in its totality for the first time, and it was found to be greater than expected.
Using observations of galactic cosmic rays, a type of highly energetic particle, from NASA's Voyager spacecraft, scientists calculated the total pressure from particles in the outer region of the solar system, known as the heliosheath. At nearly 9 billion miles away, this region is hard to study. But the unique positioning of the Voyager spacecraft and the opportune timing of a solar event made measurements of the heliosheath possible. And the results are helping scientists understand how the Sun interacts with its surroundings.
“In adding up the pieces known from previous studies, we found our new value is still larger than what’s been measured so far,” said Jamie Rankin, lead author on the new study and astronomer at Princeton University in New Jersey. “It says that there are some other parts to the pressure that aren’t being considered right now that could contribute.”
On Earth we have air pressure, created by air molecules drawn down by gravity. In space there's also a pressure created by particles like ions and electrons. These particles, heated and accelerated by the Sun, create a giant balloon known as the heliosphere, extending millions of miles out past Pluto. The edge of this region, where the Sun's influence is overcome by the pressures of particles from other stars and interstellar space, is where the Sun's magnetic influence ends. (Its gravitational influence extends much farther, so the solar system itself extends farther, as well.)
In order to measure the pressure in the heliosheath, the scientists used the Voyager spacecraft, which have been travelling steadily out of the solar system since 1977. At the time of the observations, Voyager 1 was already outside of the heliosphere in interstellar space, while Voyager 2 still remained in the heliosheath.
“There was really unique timing for this event because we saw it right after Voyager 1 crossed into the local interstellar space,” Rankin said. “And while this is the first event that Voyager saw, there are more in the data that we can continue to look at to see how things in the heliosheath and interstellar space are changing over time.”
The scientists used an event known as a global merged interaction region, which is caused by activity on the Sun. The Sun periodically flares up and releases enormous bursts of particles, like in coronal mass ejections. As a series of these events travel out into space, they can merge into a giant front, creating a wave of plasma pushed by magnetic fields.
When one such wave reached the heliosheath in 2012, it was spotted by Voyager 2. The wave caused the number of galactic cosmic rays to temporarily decrease. Four months later, the scientists saw a similar decrease in observations from Voyager 1, just across the solar system’s boundary in interstellar space.
Knowing the distance between the spacecraft allowed them to calculate the pressure in the heliosheath as well as the speed of sound. In the heliosheath sound travels at around 300 kilometers per second — a thousand times faster than it moves through air.
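As a rough back-of-the-envelope check of that figure: a disturbance that covers the separation between the two spacecraft in about four months implies a propagation speed of a few hundred kilometres per second. The sketch below assumes an illustrative Voyager 1 to Voyager 2 separation of about 21 AU; the actual separation used in the study is not quoted in this article.

```python
# Rough check: propagation speed from an assumed spacecraft separation and the
# ~4-month delay mentioned above. The 21 AU separation is illustrative only.
AU_KM = 1.496e8                  # kilometres per astronomical unit
separation_au = 21               # assumed Voyager 1 - Voyager 2 separation
delay_s = 4 * 30 * 24 * 3600     # four months, in seconds

speed_km_s = separation_au * AU_KM / delay_s
print(f"Implied propagation speed: {speed_km_s:.0f} km/s")   # roughly 300 km/s
```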
The scientists noted that the change in galactic cosmic rays wasn’t exactly identical at both spacecraft. At Voyager 2 inside the heliosheath, the number of cosmic rays decreased in all directions around the spacecraft. But at Voyager 1, outside the solar system, only the galactic cosmic rays that were traveling perpendicular to the magnetic field in the region decreased. This asymmetry suggests that something happens as the wave transmits across the solar system’s boundary.
“Trying to understand why the change in the cosmic rays is different inside and outside of the heliosheath remains an open question,” Rankin said.
Studying the pressure and sound speeds in this region at the boundary of the solar system can help scientists understand how the Sun influences interstellar space. This not only informs us about our own solar system, but also about the dynamics around other stars and planetary systems. |
Stoichiometry, a complex-sounding term, stems from two Greek words: stoikhein, which means element, and metron, which means measure. So stoichiometry literally means "measuring elements". But that doesn't explain the actual concept behind it, so let's delve into it further with an example.
Suppose I say that water is made by the combination of hydrogen with oxygen. Is this information correct? Absolutely. But is it enough? Not from a chemist's point of view. The chemist would be more satisfied if you said that one mole of water is formed from two moles of hydrogen atoms and one mole of oxygen atoms.
Giving precise details about a specific amount of reactant A reacting with a specific amount of reactant B to give a compound C, using a balanced chemical equation, is an art. The toolkit a chemist uses for practising this art is called "stoichiometry".
Stoichiometry is the chemical arithmetic that is used to relate the amounts of reactants and products to each other. It describes the quantitative relationship, in terms of relative ratios of mass and/or volume, between the reactants and products of a given chemical reaction.
What is meant by a balanced chemical equation
Chemical equations are a concise way of representing chemical reactions. The reactants appear on the left side of a chemical equation while products appear on the right side of it. The physical states (solid, liquid, gas, aqueous) of the reactants as well as the products are written in parentheses right next to each compound. Both sides of the reaction are separated by a single or a double arrow that is used to signify the direction of the chemical reaction. Stoichiometric coefficients are then inserted to balance the chemical equation. A balanced chemical equation thus consists of equal number of atoms of each element on the reactant and the product sides.
A stoichiometric coefficient is the numerical value inserted in front of the atoms, ions and/or molecules in a chemical equation to balance their numbers on each side of the equation. Stoichiometric coefficients are important for establishing the mole ratio between the reactants and products involved in the chemical reaction. Stoichiometric coefficients are ideally integers, i.e., whole numbers. If a fraction appears, every stoichiometric coefficient in the equation should be multiplied by the denominator of that fraction to create whole numbers while maintaining the balance of the equation.
How to write a balanced chemical equation
To write a balanced chemical equation, follow the steps below in sequence:
Step I: Determine the molecular formula of each of the reactants employed and of the products formed in the chemical reaction.
For example, consider a reaction involving the following compounds:

| Chemical Name | Molecular Formula |
| --- | --- |
| Magnesium hydroxide | Mg(OH)2 |
| Hydrochloric acid | HCl |
| Magnesium chloride | MgCl2 |
| Water | H2O |
Step II: Write a word equation for the chemical reaction.
Magnesium hydroxide +Hydrochloric acid →Magnesium chloride+ Water
(Reactants on L.H.S) (Products on R.H.S)
Step III: Write the chemical equation from this word equation:
Mg(OH)2 + HCl → MgCl2 + H2O
Step IV: Add physical state symbols right next to each compound.
Step V: Add stoichiometric coefficients to balance the atoms of each element on the reactant and the product sides.
- 1 magnesium (Mg) atom on each side
- 2 chlorine (Cl) atoms on product side so insert 2 next to HCl on reactant side
- 2 oxygen (O) atoms on reactant side so insert 2 next to H2O on product side
- This automatically balances the 4 hydrogen (H) atoms on both the reactant and product sides
Putting the coefficients and state symbols together, the balanced equation reads: Mg(OH)2(s) + 2HCl(aq) → MgCl2(aq) + 2H2O(l)
As a general rule of thumb in stoichiometric principles, balance a chemical equation by adding stoichiometric coefficients to balance different atoms in the following order:
Significance of a balanced chemical equation in stoichiometry
A balanced chemical equation shows how all chemical reactions follow the Law of Conservation of Mass. The law of conservation of mass states that matter is neither created nor destroyed in a chemical reaction; it can only be converted from one form to another. Therefore, the number of atoms of each element on the reactant side will always be equal to the number of atoms of that element on the product side. In chemical reactions involving charged species, the total charge on the reactant side must also be equal to the total charge on the product side.
Write balanced chemical equations for the following chemical reactions.
Hint: Follow all the steps discussed above, one at a time.
1. Zinc + Nitric acid → Zinc nitrate + Hydrogen
2. Methane + Oxygen → Carbon dioxide + Water
3. Lead hydroxide + Sulphuric acid → Lead sulfate + Water
1. Step I: Writing the molecular formulae
Zinc: Zn
Nitric acid: HNO3
Zinc nitrate: Zn(NO3)2
Hydrogen: H2
Step II: Write the chemical equation with state symbols
Zn(s) + HNO3(aq) → Zn(NO3)2(aq) + H2(g)
Step III: Balance the equation by adding stoichiometric coefficients
Zn(s) + 2HNO3(aq) → Zn(NO3)2(aq) + H2(g)
General types of chemical reactions
In order to write a balanced chemical equation, one must be aware of the different types that
chemical reactions can be categorized into. There are six main types of chemical reactions that
can take place:
A combination reaction involves the addition of two or more simple substances to form a single, complex compound. It is also known as a synthesis reaction.
Combustion is the burning of a substance in the presence of oxygen. Hydrocarbons are organic compounds made up of hydrogen and carbon. Complete combustion of hydrocarbons produces carbon dioxide and water.
Neutralization is a reaction of an acid with a base to produce salt and water.
Single Displacement Reaction
A single displacement reaction is one in which one element displaces (replaces) another element in a compound.
Double Displacement Reaction
A double displacement reaction is also called a salt metathesis reaction. It is a chemical reaction involving exchange of atoms and/or ions between two chemical entities to form two new chemical compounds on the product side.
What factors control the stoichiometry of a reaction
To find the stoichiometry of a reaction, one must know the 'amount' of each reactant that reacts exactly to form particular amounts of products. Let's discuss the different terms that a chemist uses in determining the amount of a chemical entity.
One mole of a substance is defined as the amount of substance consisting of an Avogadro number of particles, i.e., 6.02 × 10²³ atoms, molecules or ions. Molar mass relates the mass of a substance (in grams) to the number of moles present in it. It is a useful chemical ratio for developing a stoichiometric relationship. The molar mass of atoms or ions can be determined from their atomic masses as listed in the periodic table. The molar mass of molecules and/or compounds, on the other hand, can be calculated by taking the sum of the atomic mass of each element multiplied by the number of atoms of that element. For instance, the molar mass of butane, C4H10, is:
4 (atomic mass of carbon) + 10 (atomic mass of hydrogen) = 4(12.01) + 10(1.01) = 58.1 g/mol
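The same bookkeeping is easy to script. Here is a minimal sketch (atomic masses rounded as in the text; the dictionary-based formula representation is only an illustrative choice):

```python
# Molar mass from a formula written as {element: number of atoms}.
# Atomic masses are rounded values, matching the worked text above.
ATOMIC_MASS = {"H": 1.01, "C": 12.01, "O": 16.00}

def molar_mass(formula: dict) -> float:
    """Sum of (atomic mass x number of atoms) over all elements in the formula."""
    return sum(ATOMIC_MASS[element] * count for element, count in formula.items())

butane = {"C": 4, "H": 10}
print(molar_mass(butane))   # 58.14, i.e. about 58.1 g/mol
```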
Determining the stoichiometry of a chemical reaction using molar mass
Problem: In a chemical reaction, 200 grams of butane are burnt in a sufficient supply of oxygen. Using this information and the concept of molar mass, calculate the amounts (in grams) of carbon dioxide and water expected to be produced in this reaction.
Step I: From the knowledge we acquired in the preceding sections, write a balanced chemical equation for the above reaction.
Butane + Oxygen → Carbon dioxide + Water
C4H10 + 13/2 O2 → 4CO2 + 5H2O
The equation written above is balanced, but since 13/2 is a fraction we convert it into an integer by multiplying the whole equation by the denominator of this fraction:
2C4H10 + 13O2 → 8CO2 + 10H2O
Step II: Find the number of moles of butane reacted by using equation 1 given below:
number of moles = mass (in grams) / molar mass          (equation 1)
moles of butane = 200 / 58.1 ≈ 3.44 moles
Step III: Find the mole ratio from the balanced chemical equation: x number of moles of reactant
that reacted to produce y number of moles of product.
| | Butane (C4H10) | Carbon dioxide (CO2) | Water (H2O) |
| --- | --- | --- | --- |
| No. of moles | 2 | 8 | 10 |
Step IV: Find the number of moles of each product formed using the mole ratio determined from the balanced chemical equation.
Butane : CO2 = 2 : 8, so 3.44 : x1
Butane : H2O = 2 : 10, so 3.44 : x2
Step V: Use some basic mathematics to find the values of x1 and x2
x1 = 8/2 (3.44) = 13.76 moles of CO2
x2 = 10/2 (3.44) = 17.2 moles of H2O
Step VI: Use molar masses and equation 1 to find the amounts of both products formed
Molar mass of CO2 = atomic mass of carbon + 2 (atomic mass of oxygen)
= 12.01 + 2(16.0) = 44.01 g/mol
Mass of CO2 produced = no. of moles × molar mass
= 13.76 × 44.01 = 605.6 grams
Molar mass of H2O = 2 (atomic mass of hydrogen) + atomic mass of oxygen
= 2(1.01) + 16.0 = 18.02 g/mol
Mass of H2O produced = no. of moles × molar mass
= 17.2 × 18.02 = 309.9 grams
Result: The stoichiometric calculations performed for the given chemical reaction reveal that 200 grams of butane burns in an excess of oxygen to yield 605.6 grams of carbon dioxide and 309.9 grams of water.
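The six steps above can be compressed into a few lines of Python. This is only a sketch mirroring the worked example; the coefficients and rounded molar masses are taken directly from it:

```python
# Mass-to-mass stoichiometry for 2 C4H10 + 13 O2 -> 8 CO2 + 10 H2O,
# reproducing the worked example above (butane is the limiting reagent).
M_BUTANE, M_CO2, M_H2O = 58.1, 44.01, 18.02     # g/mol, rounded as in the text

mass_butane = 200.0                             # grams burned
mol_butane = round(mass_butane / M_BUTANE, 2)   # Step II: 3.44 mol (rounded as in the text)

# Steps III-V: mole ratios from the balanced equation, 2 : 8 : 10
mol_co2 = mol_butane * 8 / 2
mol_h2o = mol_butane * 10 / 2

# Step VI: convert moles of product back to grams
print(f"CO2: {mol_co2 * M_CO2:.1f} g")   # 605.6 g
print(f"H2O: {mol_h2o * M_H2O:.1f} g")   # 309.9 g
```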
You must have noticed that we only considered the amount of butane that reacted when calculating both products of the reaction. Why was no attention paid to the amount of oxygen used? This question introduces us to another interesting concept related to stoichiometry.
In a chemical reaction involving two or more reactants, the reactant that gets consumed first is called the limiting reagent or limiting reactant. In other words, this reactant 'limits' the extent of the reaction by controlling the amount of product that can form. The other reactants are then, by implication, present in excess. Thus, in the example given above, butane is the limiting reagent while oxygen is present in excess. We always use the number of moles of the limiting reagent to calculate the number of moles, and hence the amount, of product formed. Therefore, the stoichiometry of the above chemical reaction was calculated with reference to butane only.
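Numerically, the limiting reagent is simply the reactant whose available moles, divided by its stoichiometric coefficient, come out smallest. A minimal sketch follows (the 30 mol figure for oxygen is an invented, illustrative excess, not a value from the problem):

```python
# The limiting reagent is the reactant with the smallest (moles available / coefficient).
def limiting_reagent(moles_available: dict, coefficients: dict) -> str:
    """Return the reactant that runs out first."""
    return min(moles_available, key=lambda r: moles_available[r] / coefficients[r])

# 2 C4H10 + 13 O2 -> ...  with 3.44 mol butane and an illustrative 30 mol of O2
print(limiting_reagent({"C4H10": 3.44, "O2": 30.0}, {"C4H10": 2, "O2": 13}))   # C4H10
```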
The limiting reagent paves the way for calculating some other informative quantities about a chemical reaction, called 'reaction yields'.
Theoretical yield of a chemical reaction is defined as the maximum amount of product expected to form when the limiting reagent of the reaction is fully consumed. In accordance with this definition, can we say that 605.6 grams was the theoretical yield of carbon dioxide produced from the combustion of butane in the above example? Yes, you guessed it right.
Learn more about limiting reagents and theoretical yield with worked examples, and practise further on how to calculate theoretical yield.
Percentage yield can be calculated from the theoretical yield using the formula:
percentage yield = (actual yield / theoretical yield) × 100
We should keep in mind that the actual yield of a reaction is generally smaller than the theoretical yield expected from the reaction's stoichiometry. No real-world chemical reaction is 100% efficient: not all of the reactants are fully converted to products, and some material is lost to the surroundings as the reaction proceeds.
Say, for instance, that the actual yield of carbon dioxide collected from the combustion of butane is 402.6 grams. The percentage yield of the reaction can then be calculated as:
percentage yield = (402.6 / 605.6) × 100 ≈ 66.5%
Learn more about the concept from our article on percentage yield.
A balanced chemical equation is a summary of the reactants and products involved in the reaction. Stoichiometry forms the basis of all quantitative relationships in chemistry and is thus pivotal in decoding the information provided about any chemical reaction.
Crack some stoichiometric problems from the following sources and test the knowledge you gained in this article.
What are Nanomaterials?
Definition of Nanomaterials
Nanomaterials are materials with at least one external dimension that measures 100 nanometers (nm) or less or with internal structures measuring 100 nm or less. The nanomaterials that have the same composition as known materials in bulk form may have different physico-chemical properties.
Materials reduced to the nanoscale can suddenly show very different properties compared to what they show on a macroscale. For instance, opaque substances become transparent (copper); inert materials become catalysts (platinum); stable materials turn combustible (aluminum); solids turn into liquids at room temperature (gold); insulators become conductors (silicon).
Nanomaterials are not simply another step in the miniaturization of materials or particles. They often require very different production approaches. There are several processes to create various sizes of nanomaterials, classified as ‘top-down’ and ‘bottom-up’.
Nanomaterials can be constructed by top down techniques, producing very small structures from larger pieces of material, for example by etching to create circuits on the surface of a silicon microchip.
They may also be constructed by bottom up techniques, atom by atom or molecule by molecule. One way of doing this is self-assembly, in which the atoms or molecules arrange themselves into a structure due to their natural properties. Crystals grown for the semiconductor industry provide an example of self assembly, as does chemical synthesis of large molecules.
Although this 'positional assembly' offers greater control over construction, it is currently very laborious and not suitable for industrial applications.
Check out our detailed article on how nanoparticles are made.
The ISO definition of nano-objects. Included as nano-objects are nanoparticles (nanoscale in all three dimensions), nanofibers (nanoscale in two dimensions), and nanosheets or nanolayers (nanoscale in only one dimension), which include graphene and MXenes. (© John Wiley & Sons)
Applications of Nanomaterials
If you check our links in the right column of this page you can explore lots of areas where nanotechnology and nanomaterials are currently used. So we don't have to repeat this here. Suffice it to say that nanotechnology already has become ubiquitous in daily life through commodity products and growth rates are strong.
The chart below shows how products involving nanomaterials and nanotechnology are distributed across industries:
Global nanotechnology market by industry branches. (Source: doi:10.1021/acsnano.1c03992)
Analyzing nanotechnology revenues by sector reveals that materials and manufacturing contributed the most to total nanotechnology revenue. Such a trend is expected considering that the first stage of development in creating any general purpose technology includes foundational interdisciplinary research, which in case of nanotechnology translates into the discovery of material properties and synthesis of nanoscale components. However, in the upcoming years we can expect this trend to shift more toward application sectors as nanodevices are amalgamated with existing technologies.
Overview of revenues generated by nanotechnology (USD billions). Gross revenue generated from nanotechnology products, as grouped by industry sector and in the 2010–2018 period. (Source: doi:10.1021/acsnano.1c03992)
Definition of Nanomaterials
If 50% or more of the constituent particles of a material in the number size distribution have one or more external dimensions in the size range 1 nm to 100 nm, then the material is a nanomaterial. It should be noted that a fraction of 50% with one or more external dimensions between 1 nm and 100 nm in a number size distribution is always less than 50% in any other commonly-used size distribution metric, such as surface area, volume, mass or scattered light intensity. In fact it can be a tiny fraction of the total mass of the material.
Even if a product contains nanomaterials, or when it releases nanomaterials during use or aging, the product itself is not a nanomaterial, unless it is a particulate material itself that meets the criteria of particle size and fraction.
The volume specific surface area (VSSA) can be used under specific conditions to indicate that a material is a nanomaterial. VSSA is equal to the sum of the surface areas of all particles divided by the sum of the volumes of all particles. VSSA > 60 m2/cm3 is likely to be a reliable indicator that a material is a nanomaterial unless the particles are porous or have rough surfaces, but many nanomaterials (according to the principal size-based criterion) will have a VSSA of less than 60 m2/cm3. The VSSA > 60 m2/cm3 criterion can therefore only be used to show that a material is a nanomaterial, not vice versa. The VSSA of a sample can be calculated if the particle size distribution and the particle shape(s) are known in detail. The reverse (calculating the size distribution from the VSSA value) is unfeasible.
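For an idealised sample of monodisperse, smooth, non-porous spheres the VSSA reduces to 6/d, which is why the 60 m²/cm³ threshold lines up with a 100 nm diameter. A minimal sketch of that special case (real samples require the full size distribution):

```python
# VSSA for monodisperse, smooth, non-porous spheres: surface/volume = 6/d.
# Real samples need the full particle size distribution; this is the idealised case.
def vssa_spheres(diameter_nm: float) -> float:
    """Volume specific surface area in m^2/cm^3 for spheres of the given diameter."""
    d_cm = diameter_nm * 1e-7      # 1 nm = 1e-7 cm
    return (6.0 / d_cm) * 1e-4     # convert cm^2/cm^3 to m^2/cm^3

for d in (10, 100, 500):
    print(d, "nm ->", round(vssa_spheres(d), 1), "m^2/cm^3")
# 10 nm -> 600.0, 100 nm -> 60.0, 500 nm -> 12.0
```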
Dimensions of Nanomaterials
Nanomaterials are primarily categorized based on the dimensional characteristics they display. These dimensions are classified as zero-dimensional (0D), one-dimensional (1D), two-dimensional (2D), and three-dimensional (3D) nanomaterials, all of which fall within the nanoscale range.
Classification of nanoscale dimensions. (© Nanowerk)
Quantum dots and small nanoparticles are often referred to as "zero-dimensional" (0D) structures, despite having three physical dimensions. This might sound confusing at first, but it’s because we’re talking about their quantum mechanical properties rather than their geometric shape. Let's unpack this a bit:
Quantum Confinement in All Three Dimensions: In quantum dots or small nanoparticles, electrons experience quantum confinement in all three dimensions, much like being restricted in an extremely tiny room. This confinement is effective when the size of the particle is comparable to or smaller than what is called the "exciton Bohr radius" of the material, which is typically a few nanometers. This confinement limits the electrons to specific, discrete energy levels. Think of it as a game of musical chairs at the quantum scale: the electrons, like players in the game, can only occupy certain 'seats' or energy states.
The number of these available 'seats' is determined by several factors, chiefly the size, shape and material composition of the particle.
As a result of these combined factors, electrons in quantum dots have a limited set of energy levels they can occupy, akin to having a set number of chairs in the room. This is a stark contrast to larger, bulk materials where electrons have a more continuous range of energy levels available.
Exciton Bohr Radius: The exciton Bohr radius is a key factor in determining the size limit. It is like a measuring stick that tells us how small we need to make our nanoparticle to see its cool quantum effects. It varies between materials but is generally in the range of a few nanometers. When the size of the nanoparticle is smaller than or similar to this radius, quantum confinement effects are significant, and the particle behaves as a 0D system.
Size and Quantum Effects: The size of quantum dots is typically 2-10 nanometers. At this scale, the quirky rules of quantum mechanics start to dominate, making these particles behave very differently from larger pieces of the same material.
Comparison with Higher Dimensions: In our room analogy above, think of 1D and 2D materials, like nanowires and thin films, as narrow hallways and wide floors. Electrons can move freely along these hallways or floors but can’t jump out of them. This partial freedom leads to different behaviors compared to the completely confined quantum dots or nanoparticles.
Transition to 3D Behavior: As the size of the nanoparticle gets bigger, beyond the exciton Bohr radius, it starts behaving more like a regular, bulk material. The electrons begin to move more freely, akin to how water starts to flow when a dam is opened, leading to a more continuous range of energy levels. This marks the transition towards 3D behavior.
Material-Dependent Threshold: The exact size at which this transition occurs depends on the material of the nanoparticle. Different materials have different exciton Bohr radii and therefore different thresholds for the transition from quantum-confined (0D) behavior to bulk-like (3D) behavior.
Gradual Transition: It's important to note that the transition from 0D to 3D behavior is not abrupt but gradual. As the nanoparticle grows, its energy levels slowly spread out, moving from distinct steps on a ladder to more of a ramp. Accordingly, a material's physical properties change as the energy levels evolve from discrete to continuous.
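The 1/L² scaling behind this size dependence can be illustrated with the textbook particle-in-a-box model. This is only a crude stand-in for a real quantum dot (it uses the free-electron mass and ignores effective-mass and excitonic effects), but it shows how confinement energies grow as the "box" shrinks:

```python
# Particle-in-a-box ground-state energy E = h^2 / (8 m L^2): a crude illustration
# of why confinement effects strengthen rapidly as the particle size L shrinks.
H = 6.626e-34      # Planck constant, J*s
M_E = 9.109e-31    # free-electron mass, kg (real dots need an effective mass)
EV = 1.602e-19     # joules per electronvolt

def ground_state_ev(box_nm: float) -> float:
    L = box_nm * 1e-9
    return H**2 / (8 * M_E * L**2) / EV

for size_nm in (2, 5, 10, 50):
    print(f"{size_nm:>3} nm box -> {ground_state_ev(size_nm):.4f} eV")
# ~0.094 eV at 2 nm, falling off as 1/L^2 toward effectively continuous (bulk-like) levels
```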
To sum up the dimensionality issue in nanomaterials, each class—zero-dimensional (0D), one-dimensional (1D), two-dimensional (2D), and three-dimensional (3D)—displays unique properties due to the extent of quantum confinement. In 0D materials, electrons are confined in all dimensions, leading to atom-like behaviors. In 1D materials, such as nanowires, confinement occurs in two dimensions, allowing electron movement in one direction. 2D materials, like graphene, confine electrons in a plane, resulting in unique electronic and physical properties. Finally, 3D materials, where quantum effects diminish, resemble bulk materials but with enhanced surface properties due to their nanoscale dimensions. Each dimensionality offers distinct physical, chemical, and electronic characteristics, making them suitable for various applications in science and technology. The transition from 0D to 3D is not abrupt but gradual, with properties evolving as the dimensionality increases, leading to a diverse spectrum of behaviors and potential applications in the realm of nanotechnology.
In zero-dimensional (0D) nanomaterials, all dimensions are confined to the nanoscale, typically not exceeding 100 nm. This category primarily includes quantum dots and nanoparticles, where electrons are quantum confined in all three spatial dimensions, leading to unique optical and electronic properties.
One-dimensional (1D) nanomaterials, such as nanotubes, nanorods, and nanowires, have one dimension that extends beyond the nanoscale, allowing electron movement along their length. This unique structure endows them with distinct mechanical, electrical, and thermal properties.
Two-dimensional (2D) nanomaterials are characterized by having two dimensions beyond the nanoscale. These materials, including graphene, nanofilms, and nanocoatings, are essentially ultra-thin layers where electrons are free to move along the plane but are confined in the perpendicular direction. This results in exceptional surface area, electrical conductivity, and strength.
Three-dimensional (3D) nanomaterials are those in which none of the dimensions are confined to the nanoscale. This diverse class includes bulk powders, dispersions of nanoparticles, aggregates of nanowires and nanotubes, and layered structures. In these materials, the unique properties of nanoparticles are combined with bulk material behaviors, leading to a wide range of applications and functionalities.
The chart below shows the distribution of nanomaterial dimensionality in commercialized products. Data show that 3D nanomaterials are the most abundant (85% of all materials), in particular nanoparticles, which are currently present in 78% of all nanoproducts.
Distribution of nanomaterial dimensionality in commercialized products. (Source: doi:10.1021/acsnano.1c03992)
Properties of Nanomaterials
Below we outline some examples of nanomaterials that are aimed at understanding their properties. As we will see, the behavior of some nanomaterials is well understood, whereas others present greater challenges.
Nanoscale in One Dimension
Thin films, layers and surfaces
Materials that are nanoscale in one dimension, such as thin films, layers and engineered surfaces, have been developed and used for decades in fields such as electronic device manufacture, chemistry and engineering.
In the silicon integrated-circuit industry, for example, many devices rely on thin films for their operation, and control of film thicknesses approaching the atomic level is routine.
Monolayers (layers that are one atom or molecule deep) are also routinely made and used in chemistry. The most important example of this new class of materials is graphene.
The formation and properties of these layers are reasonably well understood from the atomic level upwards, even in quite complex layers (such as lubricants) and nanocoatings. Advances are being made in the control of the composition and smoothness of surfaces, and the growth of films.
Engineered surfaces with tailored properties such as large surface area or specific reactivity are used routinely in a range of applications such as in fuel cells and catalysts. The large surface area provided by nanoparticles, together with their ability to self assemble on a support surface, could be of use in all of these applications.
Although they represent incremental developments, surfaces with enhanced properties should find applications throughout the chemicals and energy sectors.
The benefits could go beyond the obvious economic and resource savings achieved by higher activity and greater selectivity in reactors and separation processes, to enabling small-scale distributed processing (making chemicals as close as possible to the point of use). There is already a move in the chemical industry towards this.
Another use could be the small-scale, on-site production of high value chemicals such as pharmaceuticals.
Graphene and other single- and few-layer materials
Graphene is an atomic-scale honeycomb lattice made of carbon atoms. Graphene is undoubtedly emerging as one of the most promising nanomaterials because of its unique combination of superb properties, which opens a way for its exploitation in a wide spectrum of applications ranging from electronics to optics, sensors, and biodevices.
For instance, graphene-based nanomaterials have many promising applications in energy-related areas. Just some recent examples: Graphene improves both energy capacity and charge rate in rechargeable batteries; activated graphene makes superior supercapacitors for energy storage; graphene electrodes may lead to a promising approach for making solar cells that are inexpensive, lightweight and flexible; and multifunctional graphene mats are promising substrates for catalytic systems (read more:graphene nanotechnology in energy).
We also compiled a primer on graphene applications and uses. And don't forget to read our much more extensive explainer What is graphene?
The fascination with atomic-layer materials that has started with graphene has spurred researchers to look for other 2D structures like for instance metal carbides and nitrides.
One particularly interesting analogue to graphene would be 2D silicon – silicene – because it could be synthesized and processed using mature semiconductor techniques, and more easily integrated into existing electronics than graphene is currently.
Another material of interest is 2D boron, an element with worlds of unexplored potential. And yet another new two-dimensional material – made up of layers of crystal known as molybdenum oxides – has unique properties that encourage the free flow of electrons at ultra-high speeds.
Nanoscale in Two Dimensions
Materials that are nanoscale in two dimensions, such as tubes and wires, have generated considerable interest among the scientific community in recent years. In particular, their novel electrical and mechanical properties are the subject of intense research.
Carbon nanotubes (CNTs) were first observed by Sumio Iijima in 1991. CNTs are extended tubes of rolled graphene sheets. There are two types of CNT: single-walled (one tube) or multi-walled (several concentric tubes). Both of these are typically a few nanometers in diameter and several micrometers to centimeters long.
CNTs have assumed an important role in the context of nanomaterials, because of their novel chemical and physical properties. They are mechanically very strong (their Young’s modulus is over 1 terapascal, making CNTs as stiff as diamond), flexible (about their axis), and can conduct electricity extremely well (the helicity of the graphene sheet determines whether the CNT is a semiconductor or metallic). All of these remarkable properties give CNTs a range of potential applications: for example, in reinforced composites, sensors, nanoelectronics and display devices.
CNTs are now available commercially in limited quantities. They can be grown by several techniques. However, the selective and uniform production of CNTs with specific dimensions and physical properties is yet to be achieved. The potential similarity in size and shape between CNTs and asbestos fibers has led to concerns about their safety.
Inorganic nanotubes and inorganic fullerene-like materials based on layered compounds such as molybdenum disulphide were discovered shortly after CNTs. They have excellent tribological (lubricating) properties, resistance to shockwave impact, catalytic reactivity, and high capacity for hydrogen and lithium storage, which suggest a range of promising applications. Oxide-based nanotubes (such as titanium dioxide) are being explored for their applications in catalysis, photo-catalysis and energy storage.
Nanowires are ultrafine wires or linear arrays of dots, formed by self-assembly. They can be made from a wide range of materials. Semiconductor nanowires made of silicon, gallium nitride and indium phosphide have demonstrated remarkable optical, electronic and magnetic characteristics (for example, silica nanowires can bend light around very tight corners).
Nanowires have potential applications in high-density data storage, either as magnetic read heads or as patterned storage media, and electronic and opto-electronic nanodevices, for metallic interconnects of quantum devices and nanodevices.
The preparation of these nanowires relies on sophisticated growth techniques, which include self-assembly processes, where atoms arrange themselves naturally on stepped surfaces, chemical vapor deposition (CVD) onto patterned substrates, electroplating or molecular beam epitaxy (MBE). The ‘molecular beams’ are typically from thermally evaporated elemental sources.
The variability and site recognition of biopolymers, such as DNA molecules, offer a wide range of opportunities for the self-organization of wire nanostructures into much more complex patterns. The DNA backbones may then, for example, be coated in metal. They also offer opportunities to link nano- and biotechnology in, for example, biocompatible sensors and small, simple motors.
Such self-assembly of organic backbone nanostructures is often controlled by weak interactions, such as hydrogen bonds, hydrophobic, or van der Waals interactions (generally in aqueous environments) and hence requires quite different synthesis strategies to CNTs, for example.
The combination of one-dimensional nanostructures consisting of biopolymers and inorganic compounds opens up a number of scientific and technological opportunities.
Nanoscale in Three Dimensions
Nanoparticles are often defined as particles of less than 100 nm in diameter. Here we classify as nanoparticles those particles less than 100 nm in diameter that exhibit new or enhanced size-dependent properties compared with larger particles of the same material.
Nanoparticles exist widely in the natural world: for example as the products of photochemical and volcanic activity, and created by plants and algae. They have also been created for thousands of years as products of combustion and food cooking, and more recently from vehicle exhausts. Deliberately manufactured nanoparticles, such as metal oxides, are by comparison in the minority.
Nanoparticles are of interest because of the new properties (such as chemical reactivity and optical behavior) that they exhibit compared with larger particles of the same materials.
For example, titanium dioxide and zinc oxide become transparent at the nanoscale yet are still able to absorb and reflect UV light, and they have found application in sunscreens.
Nanoparticles have a range of potential applications: in the short term in new cosmetics, textiles and paints; in the longer term, in methods of targeted drug delivery, where they could be used to deliver drugs to a specific site in the body.
Nanoparticles can also be arranged into layers on surfaces, providing a large surface area and hence enhanced activity, relevant to a range of potential applications such as catalysts.
Manufactured nanoparticles are typically not products in their own right, but generally serve as raw materials, ingredients or additives in existing products.
Nanoparticles are currently in a number of consumer products such as cosmetics and their enhanced or novel properties may have implications for their toxicity.
For most applications, nanoparticles will be fixed (for example, attached to a surface or within a composite), although in others they will be free or suspended in fluid. Whether they are fixed or free will have a significant effect on their potential health, safety and environmental impacts.
Fullerenes (carbon 60)
The C60 "buckyball" fullerene
In the mid-1980s a new class of carbon material was discovered, called carbon 60 (C60). Harry Kroto, Robert Curl and Richard Smalley, the experimental chemists who discovered C60, named it "buckminsterfullerene" in recognition of the architect Buckminster Fuller, who was well known for building geodesic domes, and the term fullerenes was then given to any closed carbon cage. C60 molecules are spherical, about 1 nm in diameter, comprising 60 carbon atoms arranged as 20 hexagons and 12 pentagons: the configuration of a football.
In 1990, a technique to produce larger quantities of C60 was developed by resistively heating graphite rods in a helium atmosphere.
Several applications are envisaged for fullerenes, such as miniature ‘ball bearings’ to lubricate surfaces, drug delivery vehicles and in electronic circuits.
Dendrimers are spherical polymeric molecules, formed through a nanoscale hierarchical self-assembly process. There are many types of dendrimer; the smallest is several nanometers in size. Dendrimers are used in conventional applications such as coatings and inks, but they also have a range of interesting properties which could lead to useful applications.
For example, dendrimers can act as nanoscale carrier molecules and as such could be used in drug delivery. Environmental clean-up could be assisted by dendrimers as they can trap metal ions, which could then be filtered out of water with ultra-filtration techniques.
Nanoparticles of semiconductors (quantum dots) were theorized in the 1970s and initially created in the early 1980s. If semiconductor particles are made small enough, quantum effects come into play, which limit the energies at which electrons and holes (the absence of an electron) can exist in the particles. As energy is related to wavelength (or color), this means that the optical properties of the particle can be finely tuned depending on its size. Thus, particles can be made to emit or absorb specific wavelengths (colors) of light, merely by controlling their size.
Recently, quantum dots have found applications in composites, solar cells (Grätzel cells) and fluorescent biological labels (for example to trace a biological molecule) which use both the small particle size and tuneable energy levels.
Recent advances in chemistry have resulted in the preparation of monolayer-protected, high-quality, monodispersed, crystalline quantum dots as small as 2nm in diameter, which can be conveniently treated and processed as a typical chemical reagent.
The Key Differences Between Nanomaterials and Bulk Materials
Two principal factors cause the properties of nanomaterials to differ significantly from other materials: increased relative surface area, and quantum effects. These factors can change or enhance properties such as reactivity, strength and electrical characteristics.
As a particle decreases in size, a greater proportion of atoms are found at the surface compared to those inside. For example, a particle of size 30 nm has 5% of its atoms on its surface, at 10 nm 20% of its atoms, and at 3 nm 50% of its atoms.
Thus nanoparticles have a much greater surface area per unit mass compared with larger particles. As growth and catalytic chemical reactions occur at surfaces, this means that a given mass of material in nanoparticulate form will be much more reactive than the same mass of material made up of larger particles.
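This scaling is easy to quantify for idealised smooth spheres, for which the surface area per unit mass is 6/(ρd). The sketch below uses the bulk density of silver as an illustrative value and should not be read as a measurement:

```python
# Specific surface area of idealised smooth spheres: SSA = 6 / (density * diameter).
# Illustrative only; real particles are rarely perfect spheres.
RHO_SILVER = 10.49e3   # bulk density of silver, kg/m^3

def ssa_m2_per_g(diameter_nm: float, density_kg_m3: float = RHO_SILVER) -> float:
    d_m = diameter_nm * 1e-9
    return 6.0 / (density_kg_m3 * d_m) / 1e3   # m^2 per kilogram -> m^2 per gram

for d in (10000, 100, 10):   # a 10-micrometre, a 100 nm and a 10 nm particle
    print(f"{d} nm -> {ssa_m2_per_g(d):.2f} m^2/g")
# the same mass gains roughly 1000x surface area going from 10 micrometres to 10 nm
```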
To understand the effect of particle size on surface area, consider an American Silver Eagle coin. This silver dollar contains 31 grams of coin silver and has a total surface area of approximately 3000 square millimeters. If the same amount of coin silver were divided into tiny particles – say 10 nanometer in diameter – the total surface area of those particles would be 7000 square meters (which is equal to the size of a soccer field – or larger than the floor space of the White House, which is 5100 square meters). In other words: when the amount of coin silver contained in a silver dollar is rendered into 10 nm particles, the surface area of those particles is over 2 million times greater than the surface area of the silver dollar!
Frequently Asked Questions (FAQs) About Nanomaterials
What are nanomaterials?
Nanomaterials are materials that have at least one dimension (height, width, or length) that measures between 1 and 100 nanometers (nm). At this scale, materials can exhibit unique properties that are different from those seen at the micro or macro scale. This can include changes in physical, chemical, or biological properties. Nanomaterials can be composed of various substances, including metals, semiconductors, or organic compounds.
What are the types of nanomaterials?
Nanomaterials can be categorized into four basic types: nanoplates (one dimension under 100 nm), nanorods (two dimensions under 100 nm), nanoparticles (three dimensions under 100 nm), and nanoporous materials. They can also be grouped based on their composition, such as carbon-based, metal-based, dendrimers, composites, and unique substances like quantum dots or liposomes.
How are nanomaterials made?
Nanomaterials can be produced through a variety of methods. Top-down methods involve the reduction of larger materials to the nanoscale, often through physical processes like milling. Bottom-up methods involve the assembly of nanomaterials from atomic or molecular components through processes such as chemical vapor deposition, sol-gel synthesis, or self-assembly.
What properties do nanomaterials have?
Nanomaterials can exhibit a wide range of unique properties depending on their size, shape, and composition. These can include increased strength, light weight, increased control over light spectrum, enhanced magnetic properties, increased reactivity, and unique quantum effects. These properties make nanomaterials useful in a variety of applications.
What are some applications of nanomaterials?
Nanomaterials have diverse applications across many fields. In medicine, they are used in drug delivery systems, imaging, and therapies. In electronics, they're used in the manufacture of transistors, sensors, and other components. Nanomaterials also have uses in energy production and storage, such as in solar panels and batteries. They can be found in consumer products like cosmetics and sunscreens, and in materials science for the creation of stronger, lighter materials.
What are the safety concerns associated with nanomaterials?
Due to their small size and high reactivity, nanomaterials can interact with biological systems in unexpected ways, potentially leading to toxicity. Some nanomaterials can accumulate in the body or environment and their long-term effects are not fully understood. Therefore, there is a need for careful study and regulation of nanomaterials to ensure their safe use.
What is the future of nanomaterials?
The field of nanomaterials is rapidly evolving and holds great promise for the future. Advances in nanotechnology will likely lead to the development of new nanomaterials with tailored properties for specific applications. These could revolutionize fields such as medicine, electronics, energy, and materials science, among others. However, the safe and responsible development and use of nanomaterials will be crucial to their success.
A geometric series is a series with a constant ratio between successive terms. For example, the series 1/2 + 1/4 + 1/8 + 1/16 + ··· is geometric, because each successive term can be obtained by multiplying the previous term by 1/2.
Geometric series are among the simplest examples of infinite series with finite sums, although not all of them have this property. Historically, geometric series played an important role in the early development of calculus, and they continue to be central in the study of convergence of series. Geometric series are used throughout mathematics, and they have important applications in physics, engineering, biology, economics, computer science, queueing theory, and finance.
The terms of a geometric series form a geometric progression, meaning that the ratio of successive terms in the series is constant. This relationship allows for the representation of a geometric series using only two terms, r and a. The term r is the common ratio, and a is the first term of the series. As an example, the geometric series given in the introduction,
1/2 + 1/4 + 1/8 + 1/16 + ···
may simply be written as
a + ar + ar^2 + ar^3 + ···, with r = 1/2 and a = 1/2.
The following table shows several geometric series with different common ratios:
| Common ratio, r | Start term, a | Example series |
|---|---|---|
| 10 | 4 | 4 + 40 + 400 + 4000 + 40,000 + ··· |
| 1/3 | 9 | 9 + 3 + 1 + 1/3 + 1/9 + ··· |
| 1/10 | 7 | 7 + 0.7 + 0.07 + 0.007 + 0.0007 + ··· |
| 1 | 3 | 3 + 3 + 3 + 3 + 3 + ··· |
| −1/2 | 1 | 1 − 1/2 + 1/4 − 1/8 + 1/16 − 1/32 + ··· |
| −1 | 3 | 3 − 3 + 3 − 3 + 3 − ··· |
The behavior of the terms depends on the common ratio r:
- If r is between −1 and +1, the terms of the series become smaller and smaller, approaching zero in the limit and the series converges to a sum. In the case above, where r is one half, the series has the sum one.
- If r is greater than one or less than minus one the terms of the series become larger and larger in magnitude. The sum of the terms also gets larger and larger, and the series has no sum. (The series diverges.)
- If r is equal to one, all of the terms of the series are the same. The series diverges.
- If r is minus one the terms take two values alternately (e.g. 2, −2, 2, −2, 2,... ). The sum of the terms oscillates between two values (e.g. 2, 0, 2, 0, 2,... ). This is a different type of divergence and again the series has no sum. See for example Grandi's series: 1 − 1 + 1 − 1 + ···.
The sum of a geometric series is finite as long as the absolute value of the ratio is less than 1; as the numbers near zero, they become insignificantly small, allowing a sum to be calculated despite the series containing infinitely many terms. The sum can be computed using the self-similarity of the series.
Consider the sum of the following geometric series:
s = 1 + 2/3 + 4/9 + 8/27 + ···
This series has common ratio 2/3. If we multiply through by this common ratio, then the initial 1 becomes a 2/3, the 2/3 becomes a 4/9, and so on:
(2/3)s = 2/3 + 4/9 + 8/27 + 16/81 + ···
This new series is the same as the original, except that the first term is missing. Subtracting the new series (2/3)s from the original series s cancels every term in the original but the first:
s − (2/3)s = 1, so s = 3.
A similar technique can be used to evaluate any self-similar expression.
For r ≠ 1, the sum of the first n terms of a geometric series is
a + ar + ar^2 + ··· + ar^(n−1) = a(1 − r^n)/(1 − r),
where a is the first term of the series, and r is the common ratio. We can derive this formula as follows: writing s for the sum and multiplying by r shifts every term one place, so
s = a + ar + ar^2 + ··· + ar^(n−1)
rs = ar + ar^2 + ··· + ar^(n−1) + ar^n
s − rs = a − ar^n
s = a(1 − r^n)/(1 − r)   (for r ≠ 1).
As n goes to infinity, the absolute value of r must be less than one for the series to converge. The sum then becomes
a + ar + ar^2 + ar^3 + ··· = a/(1 − r),   for |r| < 1.
When a = 1, this can be simplified to
1 + r + r^2 + r^3 + ··· = 1/(1 − r),
the left-hand side being a geometric series with common ratio r.
The formula also holds for complex r, with the corresponding restriction that the modulus of r is strictly less than one.
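As a quick numerical illustration of these formulas, the following minimal Python sketch (the helper function is illustrative, not from any standard library) compares the closed forms above with brute-force partial sums.

```python
# Check the finite sum a(1 - r^n)/(1 - r) and, for |r| < 1,
# the infinite sum a/(1 - r) against direct summation.

def geometric_sum(a, r, n):
    """Sum of the first n terms a + ar + ... + ar^(n-1), assuming r != 1."""
    return a * (1 - r**n) / (1 - r)

a, r = 0.5, 0.5                                   # the series 1/2 + 1/4 + 1/8 + ...
brute_force = sum(a * r**k for k in range(50))
print(geometric_sum(a, r, 50), brute_force)       # both very close to 1
print(a / (1 - r))                                # infinite sum: exactly 1.0
```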
Proof of convergence
Since (1 + r + r^2 + ... + r^n)(1 − r) = 1 − r^(n+1), and r^(n+1) → 0 for |r| < 1, the partial sums converge to 1/(1 − r) as n goes to infinity.
Convergence of geometric series can also be demonstrated by rewriting the series as an equivalent telescoping series. Consider the function,
So S converges to 1/(1 − r).
For r ≠ 1, the sum of the first n terms of a geometric series is:
This formula can be derived as follows:
A repeating decimal can be thought of as a geometric series whose common ratio is a power of 1/10. For example:
0.7777... = 7/10 + 7/100 + 7/1000 + 7/10000 + ···
The formula for the sum of a geometric series can be used to convert the decimal to a fraction,
0.7777... = (7/10) / (1 − 1/10) = 7/9.
The formula works not only for a single repeating figure, but also for a repeating group of figures. For example:
0.123123123... = (123/1000) / (1 − 1/1000) = 123/999 = 41/333.
Note that every series of repeating consecutive decimals can be conveniently simplified with the following:
0.(d1 d2 ... dn repeating) = (d1 d2 ... dn) / (10^n − 1).
That is, a repeating decimal with repeat length n is equal to the quotient of the repeating part (as an integer) and 10^n − 1.
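The same rule is easy to verify with exact rational arithmetic; the sketch below (an illustrative helper, not part of any standard treatment) uses Python's Fraction type.

```python
# A purely repeating decimal 0.(block)(block)... equals
# (block as an integer) / (10^n - 1), where n is the block length.
from fractions import Fraction

def repeating_to_fraction(block: str) -> Fraction:
    n = len(block)
    return Fraction(int(block), 10**n - 1)

print(repeating_to_fraction("7"))     # 7/9
print(repeating_to_fraction("123"))   # 41/333
```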
Archimedes' quadrature of the parabola
Archimedes' Theorem states that the total area under the parabola is 4/3 of the area of the blue triangle.
Archimedes determined that each green triangle has 1/8 the area of the blue triangle, each yellow triangle has 1/8 the area of a green triangle, and so forth.
Assuming that the blue triangle has area 1, the total area is an infinite sum:
1 + 2(1/8) + 4(1/8)^2 + 8(1/8)^3 + ···
The first term represents the area of the blue triangle, the second term the areas of the two green triangles, the third term the areas of the four yellow triangles, and so on. Simplifying the fractions gives
1 + 1/4 + 1/16 + 1/64 + ···
This is a geometric series with common ratio 1/4 and the fractional part is equal to
1/4 + 1/16 + 1/64 + ··· = 1/3.
The sum is
1/(1 − 1/4) = 4/3.
For example, the area inside the Koch snowflake can be described as the union of infinitely many equilateral triangles (see figure). Each side of the green triangle is exactly 1/3 the size of a side of the large blue triangle, and therefore has exactly 1/9 the area. Similarly, each yellow triangle has 1/9 the area of a green triangle, and so forth. Taking the blue triangle as a unit of area, the total area of the snowflake is
1 + 3(1/9) + 12(1/9)^2 + 48(1/9)^3 + ···
The first term of this series represents the area of the blue triangle, the second term the total area of the three green triangles, the third term the total area of the twelve yellow triangles, and so forth. Excluding the initial 1, this series is geometric with constant ratio r = 4/9. The first term of the geometric series is a = 3(1/9) = 1/3, so the sum is
1 + (1/3)/(1 − 4/9) = 1 + 3/5 = 8/5.
Thus the Koch snowflake has 8/5 of the area of the base triangle.
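Both of these geometric arguments can be sanity-checked numerically; the short Python sketch below sums enough terms of each series to see the limits 4/3 and 8/5 emerge.

```python
# Partial sums for the two area computations above.
# Archimedes' parabola: 1 + 1/4 + 1/16 + ...          -> 4/3
# Koch snowflake:       1 + (1/3) * sum_k (4/9)^k     -> 8/5

def partial_sum(a, r, n):
    return sum(a * r**k for k in range(n))

parabola_area = partial_sum(1, 1/4, 60)
snowflake_area = 1 + partial_sum(1/3, 4/9, 60)
print(parabola_area, snowflake_area)   # approximately 1.3333... and 1.6
```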
The convergence of a geometric series reveals that a sum involving an infinite number of summands can indeed be finite, and so allows one to resolve many of Zeno's paradoxes. For example, Zeno's dichotomy paradox maintains that movement is impossible, as one can divide any finite path into an infinite number of steps wherein each step is taken to be half the remaining distance. Zeno's mistake is in the assumption that the sum of an infinite number of finite steps cannot be finite. This is of course not true, as evidenced by the convergence of the geometric series with a = 1/2 and r = 1/2.
For example, suppose that a payment of $100 will be made to the owner of the annuity once per year (at the end of the year) in perpetuity. Receiving $100 a year from now is worth less than an immediate $100, because one cannot invest the money until one receives it. In particular, the present value of $100 one year in the future is $100 / (1 + I), where I is the yearly interest rate.
Similarly, a payment of $100 two years in the future has a present value of $100 / (1 + I)^2 (squared because two years' worth of interest is lost by not receiving the money right now). Therefore, the present value of receiving $100 per year in perpetuity is
the sum over n = 1, 2, 3, ... of $100/(1 + I)^n,
which is the infinite series:
$100/(1 + I) + $100/(1 + I)^2 + $100/(1 + I)^3 + $100/(1 + I)^4 + ···
This is a geometric series with common ratio 1/(1 + I). The sum is the first term divided by (one minus the common ratio):
[$100/(1 + I)] / [1 − 1/(1 + I)] = $100/I.
For example, if the yearly interest rate is 10% (I = 0.10), then the entire annuity has a present value of $100 / 0.10 = $1000.
This sort of calculation is used to compute the APR of a loan (such as a mortgage loan). It can also be used to estimate the present value of expected stock dividends, or the terminal value of a security.
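A minimal sketch of the same present-value calculation, assuming the simple end-of-year payment schedule described above (function and variable names are illustrative):

```python
# Present value of $100 per year in perpetuity at interest rate I,
# compared with a long but finite sum of discounted payments.

def perpetuity_present_value(payment, rate):
    return payment / rate                              # closed form: payment / I

def finite_present_value(payment, rate, years):
    return sum(payment / (1 + rate)**t for t in range(1, years + 1))

print(perpetuity_present_value(100, 0.10))             # 1000.0
print(finite_present_value(100, 0.10, 200))            # just under 1000
```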
Geometric power series
The formula for a geometric series,
1/(1 − x) = 1 + x + x^2 + x^3 + ···,
can be interpreted as a power series, converging where |x| < 1.
By differentiating the geometric series, one obtains the variant
1/(1 − x)^2 = 1 + 2x + 3x^2 + 4x^3 + ···   for |x| < 1.
Similarly obtained are:
2/(1 − x)^3 = 2 + 6x + 12x^2 + 20x^3 + ···   and
6/(1 − x)^4 = 6 + 24x + 60x^2 + 120x^3 + ···,   for |x| < 1,
by differentiating again.
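These identities are easy to spot-check numerically at a point inside the radius of convergence; the small Python sketch below does so at x = 0.3.

```python
# Numerical check of the differentiated geometric series at x = 0.3:
# sum_{n>=1} n * x**(n-1) should equal 1 / (1 - x)**2.
x = 0.3
lhs = sum(n * x**(n - 1) for n in range(1, 200))
rhs = 1 / (1 - x)**2
print(lhs, rhs)   # both approximately 2.0408
```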
- Divergent geometric series
- Generalized hypergeometric function
- Geometric progression
- Neumann series
- Ratio test
- Root test
- Series (mathematics)
- Tower of Hanoi
Specific geometric series
- Grandi's series: 1 − 1 + 1 − 1 + ⋯
- 1 + 2 + 4 + 8 + ⋯
- 1 − 2 + 4 − 8 + ⋯
- 1/2 + 1/4 + 1/8 + 1/16 + ⋯
- 1/2 − 1/4 + 1/8 − 1/16 + ⋯
- 1/4 + 1/16 + 1/64 + 1/256 + ⋯
These are the required objectives for 5th graders. They may be found at https://www.louisianabelieves.com/docs/default-source/teacher-toolbox-resources/louisiana-student-standards-for-k-12-math.pdf?sfvrsn=86bb8a1f_62
By the end of the year the students should be proficient in each category.
Operations and Algebraic Thinking 5.OA
A. Write and interpret numerical expressions.
1. Use parentheses or brackets in numerical expressions, and evaluate expressions with these symbols.
2. Write simple expressions that record calculations with whole numbers, fractions, and decimals, and interpret numerical expressions without evaluating them. For example, express the calculation “add 8 and 7, then multiply by 2” as 2 × (8 + 7). Recognize that 3 × (18,932 + 9.21) is three times as large as 18,932 + 9.21, without having to calculate the indicated sum or product.
B. Analyze patterns and relationships.
3. Generate two numerical patterns using two given rules. Identify apparent relationships between corresponding terms. Form ordered pairs consisting of corresponding terms from the two patterns, and graph the ordered pairs on a coordinate plane. For example, given the rule “Add 3” and the starting number 0, and given the rule “Add 6” and the starting number 0, generate terms in the resulting sequences, and observe that the terms in one sequence are twice the corresponding terms in the other sequence. Explain informally why this is so.
Number and Operations in Base Ten 5.NBT
A. Understand the place value system.
1. Recognize that in a multi-digit number, a digit in one place represents 10 times as much as it represents in the place to its right and 1/10 of what it represents in the place to its left.
2. Explain and apply patterns in the number of zeros of the product when multiplying a number by powers of 10. Explain and apply patterns in the values of the digits in the product or the quotient, when a decimal is multiplied or divided by a power of 10. Use whole-number exponents to denote powers of 10. For example, 10^0 = 1, 10^1 = 10 ... and 2.1 × 10^2 = 210.
3. Read, write, and compare decimals to thousandths.
a. Read and write decimals to thousandths using base-ten numerals, number names, and expanded form, e.g., 347.392 = 3 × 100 + 4 × 10 + 7 × 1 + 3 × (1/10) + 9 × (1/100) + 2 × (1/1000).
b. Compare two decimals to thousandths based on meanings of the digits in each place, using >, =, and < symbols to record the results of comparisons.
4. Use place value understanding to round decimals to any place.
B. Perform operations with multi-digit whole numbers and with decimals to hundredths.
5. Fluently multiply multi-digit whole numbers using the standard algorithm.
6. Find whole-number quotients of whole numbers with up to four-digit dividends and two-digit divisors, using strategies based on place value, the properties of operations, subtracting multiples of the divisor, and/or the relationship between multiplication and division. Illustrate and/or explain the calculation by using equations, rectangular arrays, area models, or other strategies based on place value.
7. Add, subtract, multiply, and divide decimals to hundredths, using concrete models or drawings and strategies based on place value, properties of operations, and/or the relationship between addition and subtraction; justify the reasoning used with a written explanation.
Number and Operations—Fractions 5.NF
A. Use equivalent fractions as a strategy to add and subtract fractions.
1. Add and subtract fractions with unlike denominators (including mixed numbers) by replacing given fractions with equivalent fractions in such a way as to produce an equivalent sum or difference of fractions with like denominators. For example, 2/3 + 5/4 = 8/12 + 15/12 = 23/12. (In general, a/b + c/d = (ad + bc)/bd.)
2. Solve word problems involving addition and subtraction of fractions.
a. Solve word problems involving addition and subtraction of fractions referring to the same whole, including cases of unlike denominators, e.g., by using visual fraction models or equations to represent the problem.
b. Use benchmark fractions and number sense of fractions to estimate mentally and justify the reasonableness of answers. For example, recognize an incorrect result 2/5 + 1/2 = 3/7, by observing that 3/7 < 1/2.
B. Apply and extend previous understandings of multiplication and division to multiply and divide fractions.
3. Interpret a fraction as division of the numerator by the denominator (a/b = a ÷ b). Solve word problems involving division of whole numbers leading to answers in the form of fractions or mixed numbers, e.g., by using visual fraction models or equations to represent the problem. For example, interpret 3/4 as the result of dividing 3 by 4, noting that 3/4 multiplied by 4 equals 3, and that when 3 wholes are shared equally among 4 people each person has a share of size 3/4. If 9 people want to share a 50-pound sack of rice equally by weight, how many pounds of rice should each person get? Between what two whole numbers does your answer lie?
4. Apply and extend previous understandings of multiplication to multiply a fraction or whole number by a fraction. a. Interpret the product (m/n) x q as m parts of a partition of q into n equal parts; equivalently, as the result of a sequence of operations, m x q ÷ n. For example, use a visual fraction model to show understanding, and create a story context for (m/n) x q.
b. Construct a model to develop understanding of the concept of multiplying two fractions and create a story context for the equation. [In general, (m/n) x (c/d) = (mc)/(nd).]
c. Find the area of a rectangle with fractional side lengths by tiling it with unit squares of the appropriate unit fraction side lengths, and show that the area is the same as would be found by multiplying the side lengths.
d. Multiply fractional side lengths to find areas of rectangles, and represent fraction products as rectangular areas.
5. Interpret multiplication as scaling (resizing), by:
a. Comparing the size of a product to the size of one factor on the basis of the size of the other factor, without performing the indicated multiplication.
b. Explaining why multiplying a given number by a fraction greater than 1 results in a product greater than the given number (recognizing multiplication by whole numbers greater than 1 as a familiar case).
c. Explaining why multiplying a given number by a fraction less than 1 results in a product smaller than the given number.
d. Relating the principle of fraction equivalence a/b = (n x a)/(n x b) to the effect of multiplying a/b by 1.
6. Solve real-world problems involving multiplication of fractions and mixed numbers, e.g., by using visual fraction models or equations to represent the problem.
7. Apply and extend previous understandings of division to divide unit fractions by whole numbers and whole numbers by unit fractions.1
a. Interpret division of a unit fraction by a non-zero whole number, and compute such quotients. For example, create a story context for (1/3) ÷ 4, and use a visual fraction model to show the quotient. Use the relationship between multiplication and division to explain that (1/3) ÷ 4 = 1/12 because (1/12) × 4 = 1/3.
b. Interpret division of a whole number by a unit fraction, and compute such quotients. For example, create a story context for 4 ÷ (1/5), and use a visual fraction model to show the quotient. Use the relationship between multiplication and division to explain that 4 ÷ (1/5) = 20 because 20 × (1/5) = 4.
c. Solve real-world problems involving division of unit fractions by non-zero whole numbers and division of whole numbers by unit fractions, e.g., by using visual fraction models and equations to represent the problem. For example, how much chocolate will each person get if 3 people share 1/2 lb of chocolate equally? How many 1/3-cup servings are in 2 cups of raisins?
Measurement and Data 5.MD
A. Convert like measurement units within a given measurement system.
1. Convert among different-sized standard measurement units within a given measurement system, and use these conversions in solving multi-step, real-world problems (e.g., convert 5 cm to 0.05 m; 9 ft to 108 in).
(Footnote 1: Students able to multiply fractions in general can develop strategies to divide fractions in general, by reasoning about the relationship between multiplication and division. But division of a fraction by a fraction is not a requirement at this grade.)
B. Represent and interpret data.
2. Make a line plot to display a data set of measurements in fractions of a unit (1/2, 1/4, 1/8). Use operations on fractions for this grade to solve problems involving information presented in line plots. For example, given different measurements of liquid in identical beakers, find the amount of liquid each beaker would contain if the total amount in all the beakers were redistributed equally.
C. Geometric measurement: understand concepts of volume and relate volume to multiplication and to addition.
3. Recognize volume as an attribute of solid figures and understand concepts of volume measurement.
a. A cube with side length 1 unit, called a “unit cube,” is said to have “one cubic unit” of volume, and can be used to measure volume.
b. A solid figure that can be packed without gaps or overlaps using n unit cubes is said to have a volume of n cubic units.
4. Measure volumes by counting unit cubes, using cubic cm, cubic in, cubic ft, and improvised units.
5. Relate volume to the operations of multiplication and addition and solve real-world and mathematical problems involving volume.
a. Find the volume of a right rectangular prism with whole-number side lengths by packing it with unit cubes, and show that the volume is the same as would be found by multiplying the edge lengths, equivalently by multiplying the height by the area of the base. Represent threefold whole-number products as volumes, e.g., to represent the associative property of multiplication.
b. Apply the formulas V = l × w × h and V = B × h for rectangular prisms to find volumes of right rectangular prisms with whole-number edge lengths in the context of solving real-world and mathematical problems.
c. Recognize volume as additive. Find volumes of solid figures composed of two non-overlapping right rectangular prisms by adding the volumes of the non-overlapping parts, applying this technique to solve real-world problems.
Geometry 5.G
A. Graph points on the coordinate plane to solve real-world and mathematical problems.
1. Use a pair of perpendicular number lines, called axes, to define a coordinate system, with the intersection of the lines (the origin) arranged to coincide with the 0 on each line and a given point in the plane located by using an ordered pair of numbers, called its coordinates. Understand that the first number in the ordered pair indicates how far to travel from the origin in the direction of one axis, and the second number in the ordered pair indicates how far to travel in the direction of the second axis, with the convention that the names of the two axes and the coordinates correspond (e.g., x-axis and x-coordinate, y-axis and y-coordinate).
2. Represent real-world and mathematical problems by graphing points in the first quadrant of the coordinate plane, and interpret coordinate values of points in the context of the situation.
B. Classify two-dimensional figures into categories based on their properties.
3. Understand that attributes belonging to a category of two-dimensional figures also belong to all subcategories of that category. For example, all rectangles have four right angles and squares are rectangles, so all squares have four right angles.
4. Classify quadrilaterals in a hierarchy based on properties. (Students will define a trapezoid as a quadrilateral with at least one pair of parallel sides.)
The new SAT requires you to know a number of special equation forms—to know which one you need to use in a given situation, and to know how to get into that form if it’s not the one you’re given by using algebraic manipulation. Some equation forms (vertex form of a parabola and the standard circle equation immediately spring to mind) contain binomial squares, e.g. (x − h)^2, as essential ingredients. To get a non-standard equation into these forms, you’ll often have to complete the square. I know, I know, you’ve done this a million times in school. Still, I often find students haven’t done this in a long time and need a little bit of a refresher. So here we are.
First, the equations in question.
Vertex form of a parabola: y = a(x − h)^2 + k, where the vertex of the parabola is at (h, k).
Standard circle equation: (x − h)^2 + (y − k)^2 = r^2, where a circle with radius r has its center at (h, k).
Say you’re given a parabola that’s not in vertex form and you need to put it in vertex form. How do you do that?
No calculator; grid-in
The parabola formed when the equation above is graphed in the xy-plane has its vertex at . What is the value of ?
Completing the square isn’t the only way to solve this question, but I’d argue it’s the fastest. All we need to do to go from the given form to the vertex form is figure out which binomial square the x^2 + bx part of the equation is the beginning of. With practice, this becomes second nature and you probably won’t need the rule, but the rule is that x^2 + bx is the beginning of (x + b/2)^2.* In this case, that means that is the beginning of .
Now, what do you get when you FOIL out ? You get . That’s not what we have above—we have instead. Luckily, we can do anything we want to the right side of the equation provided that we keep the equation balanced by doing the same thing to the left, so we can just add 10 to both sides!
From there, we’re almost done. Now we can convert the right side to the binomial square we wanted, and then get y by itself again to land in vertex form.
So, there you have it: the parabola in question has a vertex of . Since the question said the vertex was at , we know that , , and . So, 14 is the answer.
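In case a generic refresher helps before the drill (the numbers below are made up for illustration and are not from the question above), here’s the full process on y = x^2 + 6x + 1. Half of 6 is 3, so x^2 + 6x is the beginning of (x + 3)^2. FOILing (x + 3)^2 gives x^2 + 6x + 9, so add 8 to both sides:
y + 8 = x^2 + 6x + 9
y + 8 = (x + 3)^2
y = (x + 3)^2 − 8
That’s vertex form, with the vertex at (−3, −8). The same move handles circle equations: x^2 + 4x + y^2 − 6y = 12 becomes (x + 2)^2 + (y − 3)^2 = 25 after adding 4 and 9 to both sides, a circle of radius 5 centered at (−2, 3).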
Let’s practice with a few more, shall we? Try to do the following drill without a calculator. All three questions are grid-ins.
* I’m intentionally limiting this post to scenarios where the leading coefficient in the square being completed is 1. So far, I have not seen an official question of this type where that is not the case.
For the second grid-in, when the (x+3)(-x+5) is foiled, how can you tell -x * x is -x^2 instead of x^2? I know that it has something to do with the x being in parentheses but can you clarify?
I think it’s easiest to see this by looking at the same situation with real numbers. When you multiply 3 and 3, you get 9, which is 3^2. When you multiply –3 and 3, you get –9, which is –3^2. The same thing is happening when you multiply x and –x. You get –x^2. Is this helpful?
Ah, I see! This makes more sense, thanks!
The Senate of the United States shall be composed of two Senators from each State, elected by the people thereof, for six years; and each Senator shall have one vote. The electors in each State shall have the qualifications requisite for electors of the most numerous branch of the State legislatures.
When vacancies happen in the representation of any State in the Senate, the executive authority of such State shall issue writs of election to fill such vacancies: Provided, That the legislature of any State may empower the executive thereof to make temporary appointments until the people fill the vacancies by election as the legislature may direct.
This amendment shall not be so construed as to affect the election or term of any Senator chosen before it becomes valid as part of the Constitution.
The Seventeenth Amendment is the only constitutional amendment to change the fundamental structure of the government as originally drafted in the Constitution. The Seventeenth Amendment increased the American public’s ability to control the federal government, because it granted voters the opportunity to directly elect their representatives to the Senate. Before the amendment was ratified in 1913, state legislatures chose senators.
When the Constitution was written in 1787, many citizens wanted a “loose” union between the former colonies. This left the states with considerable powers to rule themselves as they wished. Under the original terms of the Constitution, the congressional houses divided the government’s power between the people and the states. Members popularly elected to the House of Representatives represented the American people, and states chose senators to represent them. The Seventeenth Amendment shifted the division of power in the government and gave the voters direct control over who represented their state.
Submitted by Congress to the states on May 13, 1912.
Ratified by the required three-fourths of states (thirty-six of forty-eight) by April 8, 1913, and by nine more states by March 9, 1922. Declared to be part of the Constitution on May 31, 1913.
Massachusetts, May 22, 1912; Arizona, June 3, 1912; Minnesota, June 10, 1912; New York, January 15, 1913; Kansas, January 17, 1913; Oregon, January 23, 1913; North Carolina, January 25, 1913; California, January 28, 1913; Michigan, January 28, 1913; Iowa, January 30, 1913; Montana, January 30, 1913; Idaho, January 31, 1913; West Virginia, February 4, 1913; Colorado, February 5, 1913; Nevada, February 6, 1913; Texas, February 7, 1913; Washington, February 7, 1913; Wyoming, February 8, 1913; Arkansas, February 11, 1913; Maine, February 11, 1913; Illinois, February 13, 1913; North Dakota, February 14, 1913; Wisconsin, February 18, 1913; Indiana, February 19, 1913; New Hampshire, February 19, 1913; Vermont, February 19, 1913; South Dakota, February 19, 1913; Oklahoma, February 24, 1913; Ohio, February 25, 1913; Missouri, March 7, 1913; New Mexico, March 13, 1913; Nebraska, March 14, 1913; New Jersey, March 17, 1913; Tennessee, April 1, 1913; Pennsylvania, April 2, 1913; Connecticut, April 8, 1913.
Origins of the Seventeenth Amendment
When statesmen gathered at Independence Hall in Philadelphia in 1787, they intended to alter the Articles of Confederation and provide the framework for the new nation’s government. The Articles of Confederation outlined a country united by a weak federal government and strong states. However, the statesmen soon realized that a much stronger central government was needed in order to keep the union stable and at peace. They defined this new government in the Constitution.
Checks and balances
Having just won freedom from the unresponsive British monarchy, the American statesmen worked to create a responsive government of and by the people. Some statesmen feared that giving the public too much authority would subject the government to popular whims and lead to chaos and instability. These statesmen favored a government that included a system of checks and balances between the governing bodies.
The checks and balances would protect the American people from themselves by allowing separate parts of the government to discuss an issue before committing the country to action. The resulting Constitution detailed such a system of checks and balances. In essays known as The Federalist, Alexander Hamilton, James Madison, and John Jay wrote that the checks and balances would protect the government and American people from “the blow meditated by the people against themselves, until reason, justice, and truth can regain their authority over the public mind.”
Protecting the people from themselves.
The Senate’s primary role in the new government was to provide a check on all legislation passed by the House of Representatives, the lower house of Congress. It could also reject any treaty or political appointment initiated by the president. The framers of the Constitution reasoned that citizens were qualified to make good decisions about their representatives in the state legislatures. However, the framers felt those same citizens would not make good choices of senators to represent their state in the federal government.
The framers feared that citizens might vote for undeserving politicians or that corrupt political interest groups who would take “advantage of the [indifference], the ignorance, and the hopes and fears of the unwary and interested.” The framers of the Constitution considered the Senate an anchor against such corruption.
In addition, the framers considered a Senate chosen by state legislatures a good balance against a popularly elected House. The framers thought it unlikely that special interest groups would gain control if both houses were elected by different means. This would let the Senate and the House form a barrier against an interest group gaining control of legislative power. During the deliberations in Independence Hall, only James Wilson of Pennsylvania argued for the direct election (election by the people) of senators. But the decision to let state legislatures vote for senators was eventually adopted unanimously.
The “Great Compromise.”
The number of senators serving in the Senate was determined during the Federal Convention of 1787 in what is known as the “Great Compromise.” Individual states differed in size, and they argued about how government representatives should be apportioned: according to each state’s population, or equally regardless of size. They compromised by creating two chambers. The House of Representatives would represent the people of the states, and the Senate would represent the individual states. Two senators would represent each state. Large and small states alike were satisfied with the arrangement and never again competed for more representation at the federal level.
Corruption in the Senate
Until the end of the Civil War (1861-65), the U.S. Senate enjoyed a reputation as the “greatest deliberative (thinking) body in the Western world.” The Senate garnered the attention and praise of influential foreign dignitaries. After the Civil War, however, the Senate turned into a place where the interests of big businesses soon carried more weight than reasoned debate.
After the Civil War, great wealth was generated for a small group of businessmen as industries combined and started to serve national markets. Railroads and other transportation companies were among those corporations that grew to serve a national market. Never before had so few companies controlled so much economic power. For example, the areas where railroads laid their tracks enjoyed more jobs, more tourists, and more access to markets than other areas.
Big business takes over
State legislatures quickly learned that they could use senatorial seats to gain favor with the influential businesses that brought wealth and jobs to their states. In turn, businessmen pursued senatorial seats when federal regulations started to impose limits on, or to provide benefits to, their businesses. In large cities such as Chicago, Kansas City, and New York City political bosses (businessmen who exerted a controlling force on political decisions) soon gained enough power to influence senatorial elections.
Political bosses and other influential politicians and businessmen bought the votes of state legislatures or strong-armed (bullied) them, to effectively control who became a U.S. senator. If a candidate was not favored, wealthy businessmen or lobbyists for certain industries gave huge sums of money to all candidates. The money did not support a certain political viewpoint. Instead, the winning candidate was obligated to whoever gave him the money.
Senator Chauncey Mitchell Depew was a prime example of this corruption. David Graham Phillips (see sidebar) wrote a muckraking article. He called Senator Depew “an ideal lieutenant for a plutocrat” and exposed Senator Depew’s connection with furthering the interests of the powerful William H. Vanderbilt family. (The Vanderbilts tried to push through legislation that would benefit their railroad between New York City and Buffalo, New York.) Phillips also linked corrupt senators and businessmen to legislation concerning beef inspection, food and drug purity standards, railroad regulations, and sugar subsidies (government granted money), among other things.
Muckrakers expose big business
The American public was not blind to the Senate’s corruption. Articles appeared in magazines and newspapers, telling tales of the corruption. The stories of the purchase of senatorial votes by political interest groups or wealthy businessmen were fantastic and sometimes exaggerated. However, they generated intense scrutiny of the Senate, and the voting records of its members. Skeptical people rejected the notion that many senators could be bribed, but they were also not blind to the number of millionaires in the Senate.
The “Millionaire’s Club.”
The Senate became known as “The Millionaire’s Club,” “The Rich Man’s Club,” and the “House of Dollars.” Among the millionaire senators were Philetus Sawyer of Wisconsin, who made his millions in lumber; California railroad magnate Leland Stanford; Arthur Gorman of Maryland, who ran the Chesapeake and Ohio Canal Company; and Nelson Aldrich of Rhode Island, who made millions in banking. The opinion that the Senate represented corporate wealth grew in popularity.
State legislatures hinder senatorial elections.
Corporate influence in the Senate was not the only problem with the legislative body. In the early 1800s, some states provided senators with written instructions for how they should vote on certain issues. Some senators resigned when their own beliefs did not coincide with their state’s wishes. Among them were John Quincy Adams in 1807 and John Tyler of Virginia in 1836. By the mid-1800s, states stopped making rules for how senators voted.
The influence of political parties in senatorial elections.
Some historians trace the influence of political parties in senatorial elections back to the first one in 1789. Early political parties represented different factions or commercial interests within the state. By the mid-1800s, political parties were more nationally organized and dominated senatorial elections.
In some states, political parties would hold state conventions to choose their party’s nominee. The party’s representatives in the state legislature then promised to vote for their party’s nominee. Sometimes politicians were unwilling to vote for someone of another political party, and debates in state legislatures dragged on for months. Occasionally, this left states without representatives during some Congresses.
George H. Haynes, a respected Senate historian, wrote that the sessions would sometimes degenerate into “riotous demonstrations more appropriate to a prize-fight than to a senatorial election.” In one particularly colorful instance, the Missouri state legislators threw fists, desks, and books at each other. The fight erupted over whether to stop the wall clock and allow deliberations over senatorial selections to continue after the scheduled hour of adjournment (closing). One member finally broke the clock by hurling ink bottles at it. “It is ridiculous to suggest that amid scenes like these the choice of a senator retains anything of the character of an exercise of cool judgment.”
These types of sessions resulted in poor decisions, or no decisions at all. The most extreme case of state legislature paralysis was in Delaware. The Delaware state legislature’s inability to elect even one senator left the state without any representation in the federal government between 1901 and 1903. Other states succeeded in electing only one senator or elected senators after the Senate had already started a session.
A Push for Direct Election of Senators
Between 1826 and 1912, more than 197 resolutions for direct election of senators were introduced in the House of Representatives. Only six received the necessary two-thirds majority vote needed to reach the Senate. Once in the Senate, all of these resolutions were ignored. In 1910, the only resolution ever debated lost by a narrow margin.
As the Senate continued to ignore the public’s will over the years, reformers of the election process were forced to unite across political parties. Though movement in the federal legislature was slow, political pressures to change the election process for senators gradually built momentum.
As early as the 1880s and 1890s, reform advocates (supporters) declared that “special interests had conspired to hold the Senate hostage,” and “the documentation they presented to the public painted a horrifying picture of a widespread network of corrupt bargains, in which wealth and power were exchanged for influence and votes.”
Political parties made direct election of senators a part of their presidential platforms. It started with the People’s Party between 1892 and 1904 and was followed by the Democratic Party in 1900 and 1904 and then the Prohibition Party in 1904. The Pennsylvania legislature proposed a second Constitutional Convention in 1900. By 1905, thirty-one states had either passed referendums proposing that Congress consider a constitutional amendment or otherwise voiced their support for direct election.
With the publication of muckraking articles such as David Graham Phillips’s “The Treason of the Senate” that appeared in Cosmopolitan Magazine in 1906, public interest in direct election mounted. Demand for reform was so great that the 57th Congress printed an additional 5,000 copies of the Senate committee report on direct election.
A report of the Senate conducted during the 58th Congress (1903-1905) was published in 1906. It revealed that “One senator out of every three owes his election to his personal wealth, to his being the candidate satisfactory to what is coming to be called the ‘System,’ or to his expertness in political manipulation—qualifications which make their usefulness as members of the dominant branch of Congress decidedly open to question.”
In 1906 alone, nine resolutions for direct election were put before Congress. The American public agreed that direct election would free the Senate from corrupting influences. In 1911, Indiana representative John A. Adair argued in Congress that direct election would fill the Senate with people with “rugged honesty, recognized ability, admitted capacity, and wide experience.”
Popular election of Senators without an amendment
Reformers looked for alternatives when they were unable to pass amendment proposals through the Senate or to gain enough support for a second Constitutional Convention. To appease popular pressure for direct election of senators, some states invented a new voting method to allow the public to select senators. The new method was called a primary election.
Oregon developed the first primary elections between 1901 and 1904. With this method, people voted for candidates in primary elections. The state legislatures would then officially elect the primary’s winner to the Senate. Candidates pledged to uphold the popular primary elections during their tenure. Primary elections quickly delivered the desired reduction in deadlock and corruption in senatorial elections. An Oregon paper reported in 1907 that “On the first ballot, in twenty minutes, we elected two Senators, without boodle, or booze, or even a cigar!” By 1910, fourteen of the thirty states used primary elections to select senators.
Muckraking and the Seventeenth Amendment
Muckraking is a form of journalism. It was used in the early 1900s and boldly attempted to reveal some essential truth about public figures, political issues, or institutions. Muckrakers published articles that exposed corruptions in American government. According to George E. Mowry and Judson A. Grenier, muckrakers followed the advice of a biblical passage from St. John: “And ye shall know the truth and the truth shall make you free.”
The articles enjoyed a great deal of popularity. Journalists such as Samuel Hopkins Adams, Ray Stannard Baker, Charles Edward Russell, Upton Sinclair, and Ida M. Tarbell became household names. Their vivid articles reached the readers of magazines such as McClure's, Collier's, the American, and Cosmopolitan.
Muckrakers were primarily concerned with exposing the privileges that money bought in political life. The Industrial Revolution had created great wealth, but only for a very few. Muckrakers worked hard to show just how much this new wealth influenced the government.
The villains in the thousands of muckraking stories were greedy businessmen or what muckrakers called “predatory wealth.” To stop these corrupt businessmen, muckrakers called for greater democracy. They considered it “the inevitable sequence of widespread intelligence.” Publishers also made greater democracy a rallying cry. As early as February 5, 1899, William Randolph Hearst’s newspapers made the direct election of U.S. senators a firm editorial quest.
The height of the Muckraking Era came in 1906 when David Graham Phillips published “The Treason of the Senate” articles in Cosmopolitan. In his articles, Phillips examined the corruption of the Senate. One of the most important issues Phillips tackled was the millionaires in the Senate.
Charles Edward Russell was a fellow muckraker who believed the Senate was made up of senators who used their seats to make millions. Russell called the Senate a house made up of “butlers for industrialists and financiers.”
Phillips reported that as many as twenty-five senators were millionaires at the time of his writing. (Some scholars estimate the number was actually closer to ten.) Powerful businessmen with great interest in creating laws to protect their companies often sought election. Phillips wondered whether it was better to elect these millionaires or to have them bribe senators.
Phillips profiled the careers of twenty-one senators and declared that senatorial selection was based on private wealth and power in party organizations. Phillips called senators “grafters,” “bribers,” and “perjurers,” and exposed decisions in which senators favored the interests of their corporate backers over the interests of the public.
President Roosevelt grew more frustrated with each monthly muckraking installment. He told the Post editor:
“Phillips takes certain facts that are true in themselves, and by ignoring utterly a very much larger mass of facts that are just as true and just as important, and by downright perversion of truth both in the way of misstatement and of omission, succeeds in giving a totally false picture … [The articles] give no accurate guide for those who are really anxious to war against corruption, and they do excite a hysterical and ignorant feeling against everything existing, good or bad.”
But Phillips responded to critics by stating that “these articles have been attacked, but their facts—the facts of the treason of the Senate, taken from the records—have not been attacked. Abuse is not refutation (a proof of being wrong); it is confession.” He added, “The exposed cry out that these exposures endanger the Republic. What a ludicrous inversion—the burglar shouting that the house is falling because he is being ejected from it? The Republic is not in danger; it is its enemies that are in danger.”
In the end, “The Treason of the Senate” and other muckraking articles stirred what President Roosevelt labeled “a revolutionary feeling in the country.” The muckraking articles were read by hundreds of thousands of people and helped reformers generate popular consensus for the eventual ratification of the Seventeenth Amendment.
Race and Ratification
Another issue complicated the political power struggle over senatorial elections between the American public and corporate interests: the power of African American votes. Black votes could influence senatorial elections, and this was why some southern senators were reluctant about direct election proposals.
The South had a long history of denying blacks the right to vote. The Fourteenth Amendment established equal rights for all citizens. The Fifteenth Amendment granted people of all races and colors the right to vote. Even after these amendments were ratified, southern states implemented poll taxes and literacy tests to keep as many blacks as possible from voting. To ensure that whites retained control at the polls, southern Democrats insisted that every proposal for direct election gave states the authority to regulate elections.
The race rider
A proposal for the first vote on direct election in the full Senate emerged from the Senate’s Judiciary Committee in 1910. The proposal had a provision that proved so controversial that it doomed the proposal from a passing vote. The provision was nicknamed the “race rider.”
The race rider ensured that states would control election regulations. It had been added in the Judiciary Committee as a compromise to win Democratic votes. But when senators in the full Senate debated the proposal, many balked at the race rider. They declared it would effectively reverse the Fifteenth Amendment by allowing states to racially discriminate against some voters. Chauncey Depew of New York opposed it, saying that passing the proposal would be “deliberately voting to undo the results of the Civil War.”
The direct election amendment failed to pass in the Senate in 1910, but several senators encouraged continued discussion of the issue. Over the next year, the House and the Senate continued to debate the race rider. The rider was the only real obstacle to the direct election proposal passing in both houses. After two months of intense debate, the Senate almost passed the proposal without the race rider on February 28, 1911. It was a fifty-four to thirty-three margin (four not voting)—just four votes short of the needed two-thirds majority.
Congress went back and forth over the addition or removal of the race rider. The House voted on the amendment issue again in April 1911. The amendment passed with a race rider by a margin of 296 to 16 (70 not voting). The Senate discarded the race rider and passed the revised proposal with the required two-thirds supermajority by a margin of 64 to 24 (three not voting). Instead of quickly passing the revised version of the proposal, the House entered into yet further debate. Representative Walter Rucker proposed to abandon the race rider on May 13, 1912. The House voted again, and this time passed the revised amendment proposal by a margin of 238 to 39 (five voted “present,” and 110 not voting).
Unlike the congressional houses, the states did not wrangle over the direct-election amendment. The amendment swiftly passed through state legislatures in less than a year and became the Seventeenth Amendment on April 8, 1913. Although the amendment decreased the power of the states, the state legislatures seemed more than happy to pass it. With the use of primary elections, many of the U.S. senators had already been selected by popular vote. To many, the ratification of the Seventeenth Amendment simply formalized a practice of direct election that was already widely used.
Although the Seventeenth Amendment was ratified without a race rider, many states found ways to limit black voters from participating in senatorial elections. To keep minorities from the polls, southern states instituted poll taxes and literacy tests. These taxes made voting too expensive for poorer blacks. The literacy tests tested people’s knowledge of the meaning of the state constitution. By the 1920s, the “white primary” was the main way southern states blocked blacks and other minorities from voting for senators. The white primary was named as such after southern political parties made exclusive rules to forbid minority membership.
In the South, the Democratic Party did not allow black members. To make matters even more unfair, state legislatures only allowed party members to vote in primary elections. Up until the 1960s, the Democratic Party was so strong in the South that Democratic nominees nearly always won election. Sometimes the Republican Party did not even bother to select a party candidate. Because the Democratic Party dominated southern politics, whites were the only people allowed to vote in most primary elections.
It took several decades and several court cases to stop discriminatory voting practices in the South. One of the most influential cases was Chapman v. King in 1946. In Chapman, the Fifth Circuit Court of Appeals determined that the Democratic primary in Georgia violated black people’s constitutional right to vote.
The court ruled that since the Georgia state legislature accepted the Democratic primary candidate, the state also endorsed the Democratic Party’s discriminatory rules against black participation. By doing this, the Georgia state legislature denied African Americans their constitutional right to vote for their own representatives. Though the court’s decision was favorable, southern blacks endured many more decades of discrimination before they could fully exercise their right to vote for U.S. senators.
The white primary effectively stopped black voters from voting until the Voting Rights Act was passed in 1965. The Voting Rights Act came at the crest of the civil rights movement in the 1960s. The Twenty-fourth Amendment, ratified in 1964, abolished the poll tax in federal elections, but it did not end all obstacles for minority voters. The amendment helped rally support for further guarantees for minorities. President Lyndon B. Johnson supported both the Civil Rights Act of 1964 and the Voting Rights Act. The Civil Rights Act made discriminatory practices in employment, education, and public places illegal. The Voting Rights Act similarly rendered illegal the remaining deterrents that stopped minorities from voting.
The Progressive Amendments
The Seventeenth Amendment is tied to a time when Americans united and gave themselves more control over their public and private lives. The time is called the Progressive Era, and it lasted from about 1900 to 1920. The “Progressive” amendments are: the Sixteenth Amendment, the Seventeenth Amendment, and the Nineteenth Amendment. (The Sixteenth Amendment established a federal income tax, and the Nineteenth Amendment gave women the right to vote.)
The Seventeenth Amendment made the federal government more democratic. It was the greatest alteration to the workings of the state and federal governments since the Civil War amendments—the Thirteenth, Fourteenth, and Fifteenth. (Together, they abolished slavery, established equal rights for all citizens, and granted all races the right to vote.)
New ways to use amendments
The passage of the Sixteenth and the Seventeenth Amendments in 1913 ushered in a wave of new thinking about the purpose of constitutional amendments. In the four decades after the Fifteenth Amendment was ratified in 1870, no constitutional amendments were ratified. Hundreds of amendments were proposed during those forty years. However, none could win the two-thirds vote in Congress and ratification by three-fourths of the states needed to become part of the Constitution. Politicians and social advocates questioned the usefulness of the Constitution, thinking it was inflexible.
But when two constitutional amendments were ratified in the same year, constitutional amendments seemed like realistic solutions to a variety of social and political problems. The American public recognized the power constitutional amendments had to redirect the activities of government. Perhaps more important, Americans also realized their own ability to effectively change the Constitution to create a government that was responsive to current needs and values.
The ratification of the Sixteenth and Seventeenth Amendments in the same year was “a political reaction to the great concentration of wealth and its alleged corrupting influence on the political system.” In reaction to the alterations in the country’s economic structure brought about by the Civil War, the American public demanded a more equal tax structure. The Sixteenth Amendment provided that tax structure and laid the groundwork for weakening the concentration of wealth in the country.
But the public grew frustrated as it heard about the Senate debates over the Sixteenth Amendment. Senators who owned or were influenced by the large corporations tried to block the amendment. This situation drew the public’s attention to the great number of millionaires in the Senate. The public was angry that the wealthy senators would not pass a more equitable tax system, and they wondered how to make the senators more responsive to their views. Public outcry to the senators’ attempts to block the Sixteenth Amendment helped to push the Seventeenth Amendment through the Senate.
The Sixteenth and Seventeenth Amendments paved the way for Progressive advocates to succeed in passing the other Progressive amendments. The other amendments were designed to take control of government from large corporations and give it back to the people. Constitutional amendments fixed more than just governmental procedural problems. They were ways to restrict alcohol use, grant voting rights for women, and regulate child labor.
Effects of the Seventeenth Amendment
The Seventeenth Amendment succeeded in some ways, but failed in others. The Seventeenth Amendment made the Senate more responsive to the American public. The American public gained greater access to senators and influence in senators’ decisions on many issues of national and international importance. Senators, who must run for election and reelection, discovered that the best policy is to aim for openness and responsiveness to public opinion.
One area in which the Senate has been accused of being too responsive to the public is its handling of presidential appointments to the Supreme Court. Public opinion heavily influenced Senate reactions during two appointment processes: first in 1987, when President Ronald Reagan (1911–2004; served 1981–89) nominated Robert H. Bork, and again in 1991, when President George H. W. Bush (1924–; served 1989–93) successfully appointed Clarence Thomas to the Supreme Court.
The Senate’s Judiciary Committee, which considers Supreme Court candidates, can also generate strong public reactions. News stories triggered heated public opposition to the Bork and Thomas nominations. The network news reported on Judge Bork’s legal opinions and theories on constitutional law, which Democrats considered controversial, and there were televised hearings on the sexual harassment charges brought against Clarence Thomas by law professor Anita Hill, a former employee of Thomas at the Equal Employment Opportunity Commission (EEOC).
Although many anticipated that direct election would make U.S. senators prone to popular whims, the Seventeenth Amendment also changed the Senate in unexpected ways. Rather than ridding the Senate of millionaires, the amendment heightened the importance of money in the Senate.
Compared to the ten millionaires in the Senate in 1900, there were more than twenty-five by the mid-1990s. Senators must constantly campaign in order to raise the funds needed for reelection. It is estimated that senators must raise approximately $15,000 for each week of their six-year terms. Candidates in California have spent more than $10 million to secure a Senate seat. By the 1990s, the average expenditure per seat in other states had reached $5 million.
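As a rough arithmetic check (an illustrative calculation using only the figures quoted above, with a 52-week fundraising year assumed), the weekly estimate is broadly consistent with the per-seat spending averages:

```python
# Rough consistency check of the fundraising figures quoted above (illustrative only).
weekly_target = 15_000        # estimated dollars a senator must raise per week
weeks_per_year = 52           # assumption: a full calendar year of fundraising
term_years = 6

total_per_term = weekly_target * weeks_per_year * term_years
print(f"Approximate fundraising over one six-year term: ${total_per_term:,}")
# -> Approximate fundraising over one six-year term: $4,680,000
# roughly in line with the ~$5 million average expenditure per seat cited above.
```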
Criticism in the early 2000s of the Seventeenth Amendment
In the early 2000s, many groups and commentators called for the repeal of the Seventeenth Amendment, believing that modern elections had become too costly and had allowed special-interest groups to exert too much influence over the process. In the view of these critics, the Seventeenth Amendment reduced the influence of states and expanded the role of special-interest groups. Others believe that the Seventeenth Amendment has elevated the power of the federal government too far above that of the states. The reasoning is that before the Seventeenth Amendment, state legislatures had the power to decide who served in the Senate, and such senators naturally would look more to state interests. The Seventeenth Amendment removed the direct pipeline from state legislatures to the U.S. Senate. Some have argued that another impact of the Seventeenth Amendment is that it makes federal courts more likely to strike down state laws. Professor Donald J. Kochan wrote in 2003: “there is substantial empirical evidence that suggests that the Seventeenth Amendment may have altered the relationship between state legislatures and federal courts.”
In April 2004, Senator Zell Miller (Democrat, Georgia) introduced Senate Joint Resolution 135, which called for the repeal of the Seventeenth Amendment. In introducing the measure to his colleagues on the Senate floor, Miller said:
Perhaps, then, the answer is a return to the original thinking of those wisest of all men, and how they intended for this government to function. Federalism, for all practical purposes, has become to this generation of leaders, some vague philosophy of the past that is dead, dead, dead. It isn’t even on life support. The line on that monitor went flat some time ago. You see, the reformers of the early 1900s killed it dead and cremated the body when they allowed for the direct election of U.S. senators. Up until then, senators were chosen by State legislatures, as James Madison and Alexander Hamilton had so carefully crafted. Direct elections of senators, as great and as good as that sounds, allowed Washington’s special interests to call the shots, whether it is filling judicial vacancies, passing laws, or issuing regulations. The State governments aided in their own collective suicide by going along with that popular fad at the time. … The Seventeenth Amendment was the death of the careful balance between State and Federal Government. As designed by that brilliant and very practical group of Founding Fathers, the two governments would be in competition with each other and neither could abuse or threaten the other. The election of senators by the state legislatures was the lynchpin that guaranteed the interests of the states would be protected.
Efforts have also been made in state legislatures, including Montana, to repeal the Seventeenth Amendment.
FOR MORE INFORMATION
Chopra, Pran. Supreme Court versus the Constitution: A Challenge to Federalism. New Delhi: Sage Publications India, 2006.
Haskell, John. Direct Democracy or Representative Government?: Dispelling the Populist Myth. Boulder, CO: Westview Press, 2001.
Holzer, Henry Mark. Supreme Court Opinions of Clarence Thomas (1991–2006): A Conservative’s Perspective. Jefferson, NC: McFarland, 2007.
Merida, Kevin, and Michael Fletcher. Supreme Discomfort: The Divided Soul of Clarence Thomas. New York: Doubleday, 2007.
Rossum, Ralph A. Federalism, the Supreme Court, and the Seventeenth Amendment: The Irony of Constitutional Democracy. Lexington, MA: Lexington Press, 2001.
Kochan, Donald J. “State Laws and the Independent Judiciary: An Analysis of the Effects of the Seventeenth Amendment on the Number of Supreme Court Cases Holding State Laws Unconstitutional.” Albany Law Review, vol. 66, (2003).
Rossum, Ralph A. “The Irony of Constitutional Democracy: Federalism, the Supreme Court, and the Seventeenth Amendment.” San Diego Law Review, vol. 36, (1999).
The Center for Constitutional Studies (accessed August 22, 2007).
Dean, John W. “The Seventeenth Amendment: Should It Be Repealed?” Findlaw’s Writ (accessed August 22, 2007).
“Repeal the Seventeenth Amendment.” (accessed August 22, 2007).
“Things that Are Not in the U.S. Constitution.” The U.S. Constitution Online. http://www.usconstitution.net/constnot.html (accessed August 22, 2007).
Sexual orientation is an enduring pattern of romantic or sexual attraction (or a combination of these) to persons of the opposite sex or gender, the same sex or gender, or to both sexes or more than one gender. These attractions are generally subsumed under heterosexuality, homosexuality, and bisexuality, while asexuality (the lack of sexual attraction to others) is sometimes identified as the fourth category.
These categories are aspects of the more nuanced nature of sexual identity and terminology. For example, people may use other labels, such as pansexual or polysexual, or none at all. According to the American Psychological Association, sexual orientation "also refers to a person's sense of identity based on those attractions, related behaviors, and membership in a community of others who share those attractions". Androphilia and gynephilia are terms used in behavioral science to describe sexual orientation as an alternative to a gender binary conceptualization. Androphilia describes sexual attraction to masculinity; gynephilia describes the sexual attraction to femininity. The term sexual preference largely overlaps with sexual orientation, but is generally distinguished in psychological research. A person who identifies as bisexual, for example, may sexually prefer one sex over the other. Sexual preference may also suggest a degree of voluntary choice, whereas the scientific consensus is that sexual orientation is not a choice.
Scientists do not know the exact causes of sexual orientation, but they believe that it is caused by a complex interplay of genetic, hormonal, and environmental influences. They favor biologically-based theories, which point to genetic factors, the early uterine environment, both, or the inclusion of genetic and social factors. There is no substantive evidence which suggests parenting or early childhood experiences play a role when it comes to sexual orientation. Research over several decades has demonstrated that sexual orientation ranges along a continuum, from exclusive attraction to the opposite sex to exclusive attraction to the same sex.
Sexual orientation is studied primarily within biology and psychology (including sexology), but it is also a subject area in anthropology, history (including social constructionism), and law, and there are other explanations that relate to sexual orientation and culture.
Definitions and distinguishing from sexual identity and behavior
Sexual orientation is traditionally defined as including heterosexuality, bisexuality, and homosexuality, while asexuality is considered the fourth category of sexual orientation by some researchers and has been defined as the absence of a traditional sexual orientation. An asexual has little to no sexual attraction to people. It may be considered a lack of a sexual orientation, and there is significant debate over whether or not it is a sexual orientation.
Most definitions of sexual orientation include a psychological component, such as the direction of an individual's erotic desires, or a behavioral component, which focuses on the sex of the individual's sexual partner/s. Some people prefer simply to follow an individual's self-definition or identity. Scientific and professional understanding is that "the core attractions that form the basis for adult sexual orientation typically emerge between middle childhood and early adolescence". Sexual orientation differs from sexual identity in that it encompasses relationships with others, while sexual identity is a concept of self.
The American Psychological Association states that "[s]exual orientation refers to an enduring pattern of emotional, romantic, and/or sexual attractions to men, women, or both sexes" and that "[t]his range of behaviors and attractions has been described in various cultures and nations throughout the world. Many cultures use identity labels to describe people who express these attractions. In the United States, the most frequent labels are lesbians (women attracted to women), gay men (men attracted to men), and bisexual people (men or women attracted to both sexes). However, some people may use different labels or none at all". They additionally state that sexual orientation "is distinct from other components of sex and gender, including biological sex (the anatomical, physiological, and genetic characteristics associated with being male or female), gender identity (the psychological sense of being male or female), and social gender role (the cultural norms that define feminine and masculine behavior)". According to psychologists, sexual orientation also refers to a person’s choice of sexual partners, who may be homosexual, heterosexual, or bisexual.
Sexual identity and sexual behavior are closely related to sexual orientation, but they are distinguished, with sexual identity referring to an individual's conception of themselves, behavior referring to actual sexual acts performed by the individual, and orientation referring to "fantasies, attachments and longings." Individuals may or may not express their sexual orientation in their behaviors. People who have a homosexual sexual orientation that does not align with their sexual identity are sometimes referred to as 'closeted'. The term may, however, reflect a certain cultural context and particular stage of transition in societies which are gradually dealing with integrating sexual minorities. In studies related to sexual orientation, when dealing with the degree to which a person's sexual attractions, behaviors and identity match, scientists usually use the terms concordance or discordance. Thus, a woman who is attracted to other women, but calls herself heterosexual and only has sexual relations with men, can be said to experience discordance between her sexual orientation (homosexual or lesbian) and her sexual identity and behaviors (heterosexual).
Sexual identity may also be used to describe a person's perception of his or her own sex, rather than sexual orientation. The term sexual preference has a similar meaning to sexual orientation, and the two terms are often used interchangeably, but sexual preference suggests a degree of voluntary choice. The term has been listed by the American Psychological Association's Committee on Gay and Lesbian Concerns as a wording that advances a "heterosexual bias".
Androphilia, gynephilia and other terms
Androphilia and gynephilia (or gynecophilia) are terms used in behavioral science to describe sexual attraction, as an alternative to a homosexual and heterosexual conceptualization. They are used for identifying a subject's object of attraction without attributing a sex assignment or gender identity to the subject. Related terms such as pansexual and polysexual do not make any such assignations to the subject. People may also use terms such as queer, pansensual, polyfidelitous, ambisexual, or personalized identities such as byke or biphilic.
Same gender loving (SGL) is considered to be more than a different term for gay; it introduces the concept of love into the discussion. SGL also acknowledges relationships between people of like identities; for example, third gender individuals who may be oriented toward each other, and expands the discussion of sexuality beyond the original man/woman gender duality. The complexity of transgender orientation is also more completely understood within this perspective.
Using androphilia and gynephilia can avoid confusion and offense when describing people in non-western cultures, as well as when describing intersex and transgender people. Psychiatrist Anil Aggrawal explains that androphilia, along with gynephilia, "is needed to overcome immense difficulties in characterizing the sexual orientation of trans men and trans women. For instance, it is difficult to decide whether a trans man erotically attracted to males is a heterosexual female or a homosexual male; or a trans woman erotically attracted to females is a heterosexual male or a lesbian female. Any attempt to classify them may not only cause confusion but arouse offense among the affected subjects. In such cases, while defining sexual attraction, it is best to focus on the object of their attraction rather than on the sex or gender of the subject." Sexologist Milton Diamond writes, "The terms heterosexual, homosexual, and bisexual are better used as adjectives, not nouns, and are better applied to behaviors, not people. This usage is particularly advantageous when discussing the partners of transsexual or intersexed individuals. These newer terms also do not carry the social weight of the former ones."
Some researchers advocate use of the terminology to avoid bias inherent in Western conceptualizations of human sexuality. Writing about the Samoan fa'afafine demographic, sociologist Johanna Schmidt writes that in cultures where a third gender is recognized, a term like "homosexual transsexual" does not align with cultural categories.
Some researchers, such as Bruce Bagemihl, have criticized the labels "heterosexual" and "homosexual" as confusing and degrading. Bagemihl writes, "...the point of reference for 'heterosexual' or 'homosexual' orientation in this nomenclature is solely the individual's genetic sex prior to reassignment (see for example, Blanchard et al. 1987, Coleman and Bockting, 1988, Blanchard, 1989). These labels thereby ignore the individual's personal sense of gender identity taking precedence over biological sex, rather than the other way around." Bagemihl goes on to take issue with the way this terminology makes it easy to claim transsexuals are really homosexual males seeking to escape from stigma.
Gender, transgender, cisgender, and conformance
The earliest writers on sexual orientation usually understood it to be intrinsically linked to the subject's own sex. For example, it was thought that a typical female-bodied person who is attracted to female-bodied persons would have masculine attributes, and vice versa. This understanding was shared by most of the significant theorists of sexual orientation from the mid nineteenth to early twentieth century, such as Karl Heinrich Ulrichs, Richard von Krafft-Ebing, Magnus Hirschfeld, Havelock Ellis, Carl Jung, and Sigmund Freud, as well as many gender-variant homosexual people themselves. However, this understanding of homosexuality as sexual inversion was disputed at the time, and, through the second half of the twentieth century, gender identity came to be increasingly seen as a phenomenon distinct from sexual orientation. Transgender and cisgender people may be attracted to men, women, or both, although the prevalence of different sexual orientations is quite different in these two populations. An individual homosexual, heterosexual or bisexual person may be masculine, feminine, or androgynous, and in addition, many members and supporters of lesbian and gay communities now see the "gender-conforming heterosexual" and the "gender-nonconforming homosexual" as negative stereotypes. Nevertheless, studies by J. Michael Bailey and Kenneth Zucker found a majority of the gay men and lesbians sampled reporting various degrees of gender-nonconformity during their childhood years.
Transgender people today identify with the sexual orientation that corresponds with their gender; meaning that a trans woman who is solely attracted to women would often identify as a lesbian. A trans man solely attracted to women would be a straight man.
Sexual orientation becomes more intricate when non-binary understandings of both sex (male, female, or intersex) and gender (man, woman, transgender, third gender, etc.) are considered. Sociologist Paula Rodriguez Rust (2000) argues for a more multifaceted definition of sexual orientation:
...Most alternative models of sexuality... define sexual orientation in terms of dichotomous biological sex or gender... Most theorists would not eliminate the reference to sex or gender, but instead advocate incorporating more complex nonbinary concepts of sex or gender, more complex relationships between sex, gender, and sexuality, and/or additional nongendered dimensions into models of sexuality.— Paula C. Rodriguez Rust
Relationships outside of orientation
Gay and lesbian people can have sexual relationships with someone of the opposite sex for a variety of reasons, including the desire for a perceived traditional family and concerns of discrimination and religious ostracism. While some LGBT people hide their respective orientations from their spouses, others develop positive gay and lesbian identities while maintaining successful heterosexual marriages. Coming out of the closet to oneself, a spouse of the opposite sex, and children can present challenges that are not faced by gay and lesbian people who are not married to people of the opposite sex or do not have children.
Often, sexual orientation and sexual orientation identity are not distinguished, which can affect accurate assessment of sexual identity and of whether or not sexual orientation is able to change; sexual orientation identity can change throughout an individual's life, and may or may not align with biological sex, sexual behavior or actual sexual orientation. While the Centre for Addiction and Mental Health and the American Psychiatric Association state that sexual orientation is innate, continuous, or fixed throughout life for some people but fluid or changing over time for others, the American Psychological Association distinguishes between sexual orientation (an innate attraction) and sexual orientation identity (which may change at any point in a person's life).
Some research suggests that "[f]or some [people] the focus of sexual interest will shift at various points through the life span..." "There... [was, as of 1995,] essentially no research on the longitudinal stability of sexual orientation over the adult life span... It [was]... still an unanswered question whether... [the] measure [of 'the complex components of sexual orientation as differentiated from other aspects of sexual identity at one point in time'] will predict future behavior or orientation. Certainly, it is... not a good predictor of past behavior and self-identity, given the developmental process common to most gay men and lesbians (i.e., denial of homosexual interests and heterosexual experimentation prior to the coming-out process)." Some studies report that "[a number of] lesbian women, and some heterosexual women as well, perceive choice as an important element in their sexual orientations."
Born bisexual, then monosexualizing
Innate bisexuality is an idea introduced by Sigmund Freud. According to this theory, all humans are born bisexual in a very broad sense of the term, that of incorporating general aspects of both sexes. In Freud's view, this was true anatomically and therefore also psychologically, with sexual attraction to both sexes being one part of this psychological bisexuality. Freud believed that in the course of sexual development the masculine side would normally become dominant in men and the feminine side in women, but that as adults everyone still has desires derived from both the masculine and the feminine sides of their natures. Freud did not claim that everyone is bisexual in the sense of feeling the same level of sexual attraction to both genders.
Causes
The exact causes for the development of a particular sexual orientation have yet to be established. To date, much research has been conducted to determine the influence of genetics, hormonal action, developmental dynamics, and social and cultural influences, which has led many to think that biological and environmental factors play a complex role in forming it. It was once thought that homosexuality was the result of faulty psychological development, resulting from childhood experiences and troubled relationships, including childhood sexual abuse. It has since been found that this view was based on prejudice and misinformation.
Research has identified several biological factors which may be related to the development of sexual orientation, including genes, prenatal hormones, and brain structure. No single controlling cause has been identified, and research is continuing in this area.
Though researchers generally believe that sexual orientation is not determined by any one factor but by a combination of genetic, hormonal, and environmental influences, with biological factors involving a complex interplay of genetic factors and the early uterine environment, they favor biological models for the cause. They believe that sexual orientation is not a choice, and some of them believe that it is established at conception. That is, individuals do not choose to be homosexual, heterosexual, bisexual, or asexual. While current scientific investigation usually seeks to find biological explanations for the adoption of a particular sexual orientation, there are yet no replicated scientific studies supporting any specific biological etiology for sexual orientation. However, scientific studies have found a number of statistical biological differences between gay people and heterosexuals, which may result from the same underlying cause as sexual orientation itself.
Genes may be related to the development of sexual orientation. At one time, studies of twins appeared to point to a major genetic component, but problems in experimental design of the available studies have made their interpretation difficult, and one recent study appears to exclude genes as a major factor.
The hormonal theory of sexuality holds that, just as exposure to certain hormones plays a role in fetal sex differentiation, such exposure also influences the sexual orientation that emerges later in the adult. Fetal hormones may be seen as either the primary influence upon adult sexual orientation or as a co-factor interacting with genes or environmental and social conditions.
As female fetuses have two X chromosomes and male fetuses an XY pair, the Y chromosome is responsible for producing male differentiation from the default female developmental pathway. The differentiation process is driven by androgen hormones, mainly testosterone and dihydrotestosterone (DHT). The newly formed testicles in the fetus are responsible for the secretion of androgens, which help drive the sexual differentiation of the developing fetus, including its brain. This results in sexual differences between males and females. This has led some scientists to test, in various ways, the result of modifying androgen exposure levels in mammals during fetal and early life.
Recent studies found an increased chance of homosexuality in men whose mothers previously carried to term many male children. This effect is nullified if the man is left-handed.
Known as the fraternal birth order (FBO) effect, this theory has been backed up by strong evidence of its prenatal origin, although no evidence thus far has linked it to an exact prenatal mechanism. However, research suggests that this may be of immunological origin, caused by a maternal immune reaction against a substance crucial to male fetal development during pregnancy, which becomes increasingly likely after every male gestation. As a result of this immune effect, alterations in later-born males' prenatal development have been thought to occur. This process, known as the maternal immunization hypothesis (MIH), would begin when cells from a male fetus enter the mother's circulation during pregnancy or while giving birth. These Y-linked proteins would not be recognized by the mother's immune system because she is female, causing her to develop antibodies which would travel through the placental barrier into the fetal compartment. From there, the anti-male antibodies would cross the blood–brain barrier (BBB) of the developing fetal brain, altering sex-dimorphic brain structures relative to sexual orientation, causing the exposed son to be more attracted to men than to women.
There is no substantive evidence to support the suggestion that early childhood experiences, parenting, sexual abuse, or other adverse life events influence sexual orientation; however, studies do find that aspects of sexuality expression have an experiential basis and that parental attitudes towards a particular sexual orientation may affect how children of the parents experiment with behaviors related to a certain sexual orientation.
Influences: professional organizations' statements
The mechanisms for the development of a particular sexual orientation remain unclear, but the current literature and most scholars in the field state that one's sexual orientation is not a choice; that is, individuals do not choose to be homosexual or heterosexual. A variety of theories about the influences on sexual orientation have been proposed. Sexual orientation probably is not determined by any one factor but by a combination of genetic, hormonal, and environmental influences. In recent decades, biologically based theories have been favored by experts. Although there continues to be controversy and uncertainty as to the genesis of the variety of human sexual orientations, there is no scientific evidence that abnormal parenting, sexual abuse, or other adverse life events influence sexual orientation. Current knowledge suggests that sexual orientation is usually established during early childhood.
Currently, there is no scientific consensus about the specific factors that cause an individual to become heterosexual, homosexual, or bisexual – including possible biological, psychological, or social effects of the parents' sexual orientation. However, the available evidence indicates that the vast majority of lesbian and gay adults were raised by heterosexual parents and the vast majority of children raised by lesbian and gay parents eventually grow up to be heterosexual.
Despite almost a century of psychoanalytic and psychological speculation, there is no substantive evidence to support the suggestion that the nature of parenting or early childhood experiences play any role in the formation of a person's fundamental heterosexual or homosexual orientation. It would appear that sexual orientation is biological in nature, determined by a complex interplay of genetic factors and the early uterine environment. Sexual orientation is therefore not a choice, though sexual behaviour clearly is.
The American Psychiatric Association stated:
No one knows what causes heterosexuality, homosexuality, or bisexuality. Homosexuality was once thought to be the result of troubled family dynamics or faulty psychological development. Those assumptions are now understood to have been based on misinformation and prejudice.
A legal brief dated September 26, 2007, and presented on behalf of the American Psychological Association, California Psychological Association, American Psychiatric Association, National Association of Social Workers, and National Association of Social Workers, California Chapter, stated:
Although much research has examined the possible genetic, hormonal, developmental, social, and cultural influences on sexual orientation, no findings have emerged that permit scientists to conclude that sexual orientation – heterosexuality, homosexuality, or bisexuality – is determined by any particular factor or factors. The evaluation of amici is that, although some of this research may be promising in facilitating greater understanding of the development of sexual orientation, it does not permit a conclusion based in sound science at the present time as to the cause or causes of sexual orientation, whether homosexual, bisexual, or heterosexual.
Efforts to change sexual orientation
Sexual orientation change efforts are methods that aim to change a same-sex sexual orientation. They may include behavioral techniques, cognitive behavioral techniques, "reparative therapy", psychoanalytic techniques, medical approaches, and religious and spiritual approaches.
No major mental health professional organization has sanctioned efforts to change sexual orientation and virtually all of them have adopted policy statements cautioning the profession and the public about treatments that purport to change sexual orientation. These include the American Psychiatric Association, American Psychological Association, American Counseling Association, National Association of Social Workers in the USA, the Royal College of Psychiatrists, and the Australian Psychological Society.
In 2009, the American Psychological Association Task Force on Appropriate Therapeutic Responses to Sexual Orientation conducted a systematic review of the peer-reviewed journal literature on sexual orientation change efforts (SOCE) and concluded:
Efforts to change sexual orientation are unlikely to be successful and involve some risk of harm, contrary to the claims of SOCE practitioners and advocates. Even though the research and clinical literature demonstrate that same-sex sexual and romantic attractions, feelings, and behaviors are normal and positive variations of human sexuality, regardless of sexual orientation identity, the task force concluded that the population that undergoes SOCE tends to have strongly conservative religious views that lead them to seek to change their sexual orientation. Thus, the appropriate application of affirmative therapeutic interventions for those who seek SOCE involves therapist acceptance, support, and understanding of clients and the facilitation of clients' active coping, social support, and identity exploration and development, without imposing a specific sexual orientation identity outcome.
In 2012, the Pan American Health Organization (the North and South American branch of the World Health Organization) released a statement cautioning against services that purport to "cure" people with non-heterosexual sexual orientations as they lack medical justification and represent a serious threat to the health and well-being of affected people, and noted that the global scientific and professional consensus is that homosexuality is a normal and natural variation of human sexuality and cannot be regarded as a pathological condition. The Pan American Health Organization further called on governments, academic institutions, professional associations and the media to expose these practices and to promote respect for diversity. The World Health Organization affiliate further noted that gay minors have sometimes been forced to attend these "therapies" involuntarily, being deprived of their liberty and sometimes kept in isolation for several months, and that these findings were reported by several United Nations bodies. Additionally, the Pan American Health Organization recommended that such malpractices be denounced and subject to sanctions and penalties under national legislation, as they constitute a violation of the ethical principles of health care and violate human rights that are protected by international and regional agreements.
The National Association for Research & Therapy of Homosexuality (NARTH), which describes itself as a "professional, scientific organization that offers hope to those who struggle with unwanted homosexuality," disagrees with the mainstream mental health community's position on conversion therapy, both on its effectiveness and by describing sexual orientation not as a binary immutable quality, or as a disease, but as a continuum of intensities of sexual attractions and emotional affect. The American Psychological Association and the Royal College of Psychiatrists expressed concerns that the positions espoused by NARTH are not supported by the science and create an environment in which prejudice and discrimination can flourish.
Assessment and measurement
Varying definitions and strong social norms about sexuality can make sexual orientation difficult to quantify.
Early classification schemes
One of the earliest sexual orientation classification schemes was proposed in the 1860s by Karl Heinrich Ulrichs in a series of pamphlets he published privately. The classification scheme, which was meant only to describe males, separated them into three basic categories: dionings, urnings and uranodionings. An urning can be further categorized by degree of effeminacy. These categories directly correspond with the categories of sexual orientation used today: heterosexual, homosexual, and bisexual. In the series of pamphlets, Ulrichs outlined a set of questions to determine if a man was an urning. The definitions of each category of Ulrichs' classification scheme are as follows:
- Dioning - Comparable to the modern term "heterosexual"
- Urning - Comparable to the modern term "homosexual"
- Mannling - A manly urning
- Weibling - An effeminate urning
- Zwischen - A somewhat manly and somewhat effeminate urning
- Virilised - An urning that sexually behaves like a dioning
- Urano-Dioning - Comparable to the modern term "bisexual"
From at least the late nineteenth century in Europe, there was speculation that the range of human sexual response looked more like a continuum than two or three discrete categories. Berlin sexologist Magnus Hirschfeld published a scheme in 1896 that measured the strength of an individual's sexual desire on two independent 10-point scales, A (homosexual) and B (heterosexual). A heterosexual individual may be A0, B5; a homosexual individual may be A5, B0; an asexual would be A0, B0; and someone with an intense attraction to both sexes would be A9, B9.
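As a small illustrative sketch (not Hirschfeld's own notation or terminology; the class and labels below are invented for illustration), the two independent scales can be represented so that the example profiles above fall out naturally:

```python
# Illustrative sketch of two independent desire scales, A (homosexual) and B (heterosexual).
# Field names and the describe() labels are invented for illustration only.
from dataclasses import dataclass

@dataclass
class TwoScaleProfile:
    a: int  # strength of homosexual desire on the 10-point A scale
    b: int  # strength of heterosexual desire on the 10-point B scale

    def describe(self) -> str:
        if self.a == 0 and self.b == 0:
            return "asexual"
        if self.a > 0 and self.b > 0:
            return "attracted to both sexes"
        return "homosexual" if self.a > 0 else "heterosexual"

# The example profiles given in the text:
for profile in [TwoScaleProfile(0, 5), TwoScaleProfile(5, 0),
                TwoScaleProfile(0, 0), TwoScaleProfile(9, 9)]:
    print(profile, "->", profile.describe())
```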
The Kinsey scale, also called the Heterosexual-Homosexual Rating Scale, was first published in Sexual Behavior in the Human Male (1948) by Alfred Kinsey, Wardell Pomeroy, and Clyde Martin and also featured in Sexual Behavior in the Human Female (1953). The scale was developed to combat the assumption at the time that people are either heterosexual or homosexual and that these two types represent antitheses in the sexual world. Recognizing that a large portion of the population is not completely heterosexual or homosexual and that people can experience both heterosexual and homosexual behavior and psychic responses, Kinsey et al. stated:
Males do not represent two discrete populations, heterosexual and homosexual. The world is not to be divided into sheep and goats. Not all things are black nor all things white... The living world is a continuum in each and every one of its aspects. The sooner we learn this concerning human sexual behavior, the sooner we shall reach a sound understanding of the realities of sex.— Kinsey et al. (1948) pp. 639.
The Kinsey scale provides a classification of sexual orientation based on the relative amounts of heterosexual and homosexual experience or psychic response in one's history at a given time. The classification scheme works such that individuals in the same category show the same balance between the heterosexual and homosexual elements in their histories. The position on the scale is based on the relation of heterosexuality to homosexuality in one's history, rather than the actual amount of overt experience or psychic response. An individual can be assigned a position on the scale in accordance with the following definitions of the points of the scale:
- 0: Exclusively heterosexual. Individuals make no physical contact which results in erotic arousal or orgasm and make no psychic responses to individuals of their own sex.
- 1: Predominantly heterosexual, incidentally homosexual. Individuals have only incidental homosexual contacts which have involved physical or psychic response, or incidental psychic response without physical contact.
- 2: Predominantly heterosexual but more than incidentally homosexual. Individuals have more than incidental homosexual experience and/or respond rather definitely to homosexual stimuli.
- 3: Equally heterosexual and homosexual. Individuals are about equally homosexual and heterosexual in their experiences and/or psychic reactions.
- 4: Predominantly homosexual but more than incidentally heterosexual. Individuals have more overt activity and/or psychic reactions in the homosexual, while still maintaining a fair amount of heterosexual activity and/or responding rather definitively to heterosexual contact.
- 5: Predominantly homosexual, only incidentally heterosexual. Individuals are almost entirely homosexual in their activities and/or reactions.
- 6: Exclusively homosexual. Individuals who are exclusively homosexual, both in regard to their overt experience and in regard to their psychic reactions.
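For illustration only, the seven ratings above can be represented as a simple lookup; this is a sketch of the scale's structure, not an assessment instrument, and the short labels abbreviate the fuller definitions given above:

```python
# Minimal lookup of the seven Kinsey ratings summarized above (descriptions abbreviated).
KINSEY_SCALE = {
    0: "Exclusively heterosexual",
    1: "Predominantly heterosexual, incidentally homosexual",
    2: "Predominantly heterosexual, but more than incidentally homosexual",
    3: "Equally heterosexual and homosexual",
    4: "Predominantly homosexual, but more than incidentally heterosexual",
    5: "Predominantly homosexual, only incidentally heterosexual",
    6: "Exclusively homosexual",
}

def describe_kinsey(rating: int) -> str:
    """Return the short description for a rating, or raise if it is out of range."""
    if rating not in KINSEY_SCALE:
        raise ValueError("Kinsey ratings run from 0 to 6")
    return KINSEY_SCALE[rating]

print(describe_kinsey(3))  # -> Equally heterosexual and homosexual
```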
The Kinsey scale has been praised for dismissing the dichotomous classification of sexual orientation and allowing for a new perspective on human sexuality. However, the scale has been criticized because it is still not a true continuum. Despite seven categories being able to provide a more accurate description of sexual orientation than a dichotomous scale, it is still difficult to determine which category individuals should be assigned to. In a major study comparing sexual response in homosexual males and females, Masters and Johnson discuss the difficulty of assigning Kinsey ratings to participants. In particular, they found it difficult to determine the relative amount of heterosexual and homosexual experience and response in a person's history when using the scale. They report finding it difficult to assign ratings 2–4 for individuals with a large number of heterosexual and homosexual experiences. When there is a lot of heterosexual and homosexual experience in one's history, it becomes difficult for that individual to be fully objective in assessing the relative amount of each.
Weinrich et al. (1993) and Weinberg et al. (1994) criticized the scale for lumping individuals who differ on different dimensions of sexuality into the same categories. When applying the scale, Kinsey considered two dimensions of sexual orientation: overt sexual experience and psychosexual reactions. Valuable information was lost by collapsing the two values into one final score. A person who has predominantly same-sex reactions is different from someone with relatively little reaction but a lot of same-sex experience. It would have been quite simple for Kinsey to have measured the two dimensions separately and to have reported scores independently to avoid loss of information. Furthermore, there are more than two dimensions of sexuality to be considered. Beyond behavior and reactions, one could also assess attraction, identification, lifestyle, etc. This is addressed by the Klein Sexual Orientation Grid.
A third concern with the Kinsey scale is that it inappropriately measures heterosexuality and homosexuality on the same scale, making one a tradeoff of the other. Research in the 1970s on masculinity and femininity found that these concepts are more appropriately measured as independent concepts on separate scales rather than as a single continuum with each end representing an opposite extreme. When compared on the same scale, they act as tradeoffs, whereby to be more feminine one has to be less masculine, and vice versa. However, if they are considered as separate dimensions, one can be simultaneously very masculine and very feminine. Similarly, considering heterosexuality and homosexuality on separate scales would allow one to be both very heterosexual and very homosexual, or not very much of either. When they are measured independently, the degrees of heterosexuality and homosexuality can each be determined, rather than the balance between them as determined using the Kinsey Scale.
Klein Sexual Orientation Grid
In response to the criticism of the Kinsey scale only measuring two dimensions of sexual orientation, Fritz Klein developed the Klein sexual orientation grid (KSOG), a multidimensional scale for describing sexual orientation. Introduced in Klein's book The Bisexual Option (1978), the KSOG uses a 7-point scale to assess seven different dimensions of sexuality at three different points in an individual's life: past (from early adolescence up to one year ago), present (within the last 12 months), and ideal (what would you choose if it were completely your choice).
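A minimal sketch of the grid as a data structure follows. The seven dimension names listed in the code are an assumption based on common descriptions of the KSOG; the text above only specifies that there are seven dimensions, a 7-point rating, and three time frames.

```python
# Illustrative sketch of the Klein Sexual Orientation Grid as a data structure:
# 7 dimensions x 3 time frames, each cell rated on a 7-point scale.
# Dimension names below are an assumption based on common descriptions of the grid.
DIMENSIONS = [
    "sexual attraction", "sexual behavior", "sexual fantasies",
    "emotional preference", "social preference", "lifestyle", "self-identification",
]
TIME_FRAMES = ["past", "present", "ideal"]

def blank_ksog() -> dict:
    """Return an empty grid keyed by (dimension, time frame)."""
    return {(dim, frame): None for dim in DIMENSIONS for frame in TIME_FRAMES}

grid = blank_ksog()
grid[("sexual attraction", "present")] = 2   # a rating on the 7-point scale
print(len(grid))  # -> 21 cells in total
```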
The Sell Assessment of Sexual Orientation
The Sell Assessment of Sexual Orientation (SASO) was developed to address the major concerns with the Kinsey Scale and Klein Sexual Orientation Grid and as such, measures sexual orientation on a continuum, considers various dimensions of sexual orientation, and considers homosexuality and heterosexuality separately. Rather than providing a final solution to the question of how to best measure sexual orientation, the SASO is meant to provoke discussion and debate about measurements of sexual orientation.
The SASO consists of 12 questions. Six of these questions assess sexual attraction, four assess sexual behavior, and two assess sexual orientation identity. For each question on the scale that measures homosexuality there is a corresponding question that measures heterosexuality giving six matching pairs of questions. Taken all together, the six pairs of questions and responses provide a profile of an individual's sexual orientation. However, results can be further simplified into four summaries that look specifically at responses that correspond to either homosexuality, heterosexuality, bisexuality or asexuality.
Of all the questions on the scale, Sell considered those assessing sexual attraction to be the most important, as sexual attraction is a better reflection of the concept of sexual orientation, which he defined as the "extent of sexual attractions toward members of the other, same, both sexes or neither", than either sexual identity or sexual behavior. Identity and behavior are measured as supplemental information because they are both closely tied to sexual attraction and sexual orientation. Major criticisms of the SASO have not been established, but a concern is that its reliability and validity remain largely unexamined.
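The sketch below is a toy illustration of the SASO's structure only: twelve items arranged in six matched homosexuality/heterosexuality pairs, collapsed into simplified summaries. The collapsing rule shown is invented for illustration and is not Sell's actual scoring procedure.

```python
# Toy sketch of the SASO's structure (illustrative; not Sell's actual scoring rules).
from statistics import mean

def toy_summary(homosexuality_items, heterosexuality_items, threshold=1.0):
    """Collapse the two matched sets of item scores into one of four summaries."""
    homo = mean(homosexuality_items)
    hetero = mean(heterosexuality_items)
    if homo < threshold and hetero < threshold:
        return "asexuality"
    if homo >= threshold and hetero >= threshold:
        return "bisexuality"
    return "homosexuality" if homo > hetero else "heterosexuality"

# Hypothetical responses on the six homosexuality items and six heterosexuality items:
print(toy_summary([0, 1, 0, 0, 1, 0], [4, 5, 4, 3, 5, 4]))  # -> heterosexuality
```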
Difficulties with assessment
Research focusing on sexual orientation uses scales of assessment to identify who belongs in which sexual population group. It is assumed that these scales will be able to reliably identify and categorize people by their sexual orientation. However, it is difficult to determine an individual's sexual orientation through scales of assessment, due to ambiguity regarding the definition of sexual orientation. Generally, there are three components of sexual orientation used in assessment. Their definitions and examples of how they may be assessed are as follows:
- Sexual attraction: Attraction toward one sex or the desire to have sexual relations or to be in a primary loving, sexual relationship with one or both sexes. Example assessment question: "Have you ever had a romantic attraction to a male? Have you ever had a romantic attraction to a female?"
- Sexual behavior: "Any mutually voluntary activity with another person that involves genital contact and sexual excitement or arousal, that is, feeling really turned on, even if intercourse or orgasm did not occur." Example assessment question: "Have you ever had a relationship with someone of your own sex which resulted in sexual orgasm?"
- Sexual identity: Personally selected, socially and historically bound labels attached to the perceptions and meanings individuals have about their sexual identity. Example assessment question: "Pick from these six options: gay or lesbian; bisexual, but mostly gay or lesbian; bisexual, equally gay/lesbian and heterosexual; bisexual, but mostly heterosexual; heterosexual; and uncertain, don't know for sure."
Though sexual attraction, behavior, and identity are all components of sexual orientation, if the people identified by one of these dimensions matched those identified by another, it would not matter which dimension was used in assessing orientation; however, this is not the case. There is "little coherent relationship between the amount and mix of homosexual and heterosexual behavior in a person's biography and that person's choice to label himself or herself as bisexual, homosexual, or heterosexual". Individuals typically experience diverse attractions and behaviors that may reflect curiosity, experimentation, or social pressure and are not necessarily indicative of an underlying sexual orientation. For example, a woman may have fantasies or thoughts about sex with other women but never act on these thoughts and only have sex with opposite-gender partners. If sexual orientation were assessed based on sexual attraction, this individual would be considered homosexual, but her behavior indicates heterosexuality.
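As a toy illustration of the discordance just described (the labels and the binary logic are simplifications invented for illustration), classifying the same person by each component separately gives conflicting answers:

```python
# Toy illustration of how the three components can give conflicting classifications.
def classify_by_component(same_sex_attraction, same_sex_behavior, self_identity):
    """Return a (deliberately simplistic) classification for each component."""
    return {
        "by attraction": "homosexual" if same_sex_attraction else "heterosexual",
        "by behavior": "homosexual" if same_sex_behavior else "heterosexual",
        "by identity": self_identity,
    }

# The woman described above: attracted to women, sex only with men, identifies as heterosexual.
print(classify_by_component(same_sex_attraction=True,
                            same_sex_behavior=False,
                            self_identity="heterosexual"))
# {'by attraction': 'homosexual', 'by behavior': 'heterosexual', 'by identity': 'heterosexual'}
```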
As there is no research indicating which of the three components is essential in defining sexual orientation, all three are used independently and provide different conclusions regarding sexual orientation. Savin-Williams (2006) discusses this issue and notes that by basing findings regarding sexual orientation on a single component, researchers may not actually capture the intended population. For example, if homosexuality is defined by same-sex behavior, gay virgins are omitted, heterosexuals engaging in same-sex behavior for reasons other than preferred sexual arousal are miscounted, and those with same-sex attraction who only have opposite-sex relations are excluded. Because of the limited populations that each component captures, consumers of research should be cautious in generalizing these findings.
One of the uses for scales that assess sexual orientation is determining the prevalence of different sexual orientations within a population. Depending on subjects' age, culture, and sex, the prevalence rates of homosexuality vary depending on which component of sexual orientation is being assessed: sexual attraction, sexual behavior, or sexual identity. Assessing sexual attraction will yield the greatest prevalence of homosexuality in a population, whereby the proportion of individuals indicating they are attracted to the same sex is two to three times greater than the proportion reporting same-sex behavior or identifying as gay, lesbian, or bisexual. Furthermore, reports of same-sex behavior usually exceed those of gay, lesbian, or bisexual identification. The following chart demonstrates how widely the prevalence of homosexuality can vary depending on the age, location, and component of sexual orientation being assessed:
- Turkey, young adults: sexual attraction 7% (female) / 6% (male); sexual behavior 4% (female) / 5% (male); sexual identity 2% (female) / 2% (male)
The variance in prevalence rates is reflected in people's inconsistent responses to the different components of sexual orientation within a study and in the instability of their responses over time. Laumann et al. (1994) found that among U.S. adults, 20% of those who would be considered homosexual on one component of orientation were homosexual on the other two dimensions, and 70% responded in a way that was consistent with homosexuality on only one of the three dimensions. Furthermore, sexuality is fluid, such that one's sexual orientation is not necessarily stable or consistent over time but is subject to change throughout life. Diamond (2003) found that over a seven-year period, two-thirds of the women studied changed their sexual identity at least once, with many reporting that the label was not adequate to capture the diversity of their sexual or romantic feelings. Furthermore, women who relinquished bisexual and lesbian identification did not relinquish same-sex sexuality, and acknowledged the possibility of future same-sex attractions and/or behavior. One woman stated, "I'm mainly straight but I'm one of those people who, if the right circumstance came along, would change my viewpoint". Therefore, individuals classified as homosexual in one study might not be identified the same way in another, depending on which components are assessed and when the assessment is made, making it difficult to pinpoint who is homosexual and who is not and what the overall prevalence within a population may be.
Depending on which component of sexual orientation is being assessed and referenced, different conclusions can be drawn about the prevalence rate of homosexuality, which has real-world consequences. Knowing how much of the population is made up of homosexual individuals influences how this population may be seen or treated by the public and by government bodies. For example, if homosexual individuals constitute only 1% of the general population, they are politically easier to ignore than if they are known to be a constituency that surpasses most ethnic and other minority groups. If the number is relatively minor, then it is difficult to argue for community-based same-sex programs and services, mass media inclusion of gay role models, or Gay/Straight Alliances in schools. For this reason, in the 1970s Bruce Voeller, the chair of the National Gay and Lesbian Task Force, perpetuated a common myth that the prevalence of homosexuality is 10% for the whole population by averaging a 13% figure for men and a 7% figure for women. Voeller generalized this finding and used it as part of the modern gay rights movement to convince politicians and the public that "we [gays and lesbians] are everywhere".
In the paper "Who's Gay? Does It Matter?", Ritch Savin-Williams proposes two different approaches to assessing sexual orientation until well-positioned, psychometrically sound, and tested definitions are developed that would allow research to reliably identify the prevalence, causes, and consequences of homosexuality. He first suggests that greater priority should be given to sexual arousal and attraction over behaviour and identity, because arousal and attraction are less prone to self- and other-deception, social conditions, and variable meanings. To measure attraction and arousal, he proposes that biological measures should be developed and used. Numerous biological and physiological measures exist that can assess sexual orientation, such as sexual arousal, brain scans, eye tracking, body odour preference, and anatomical variations such as digit-length ratio and right- or left-handedness. Secondly, Savin-Williams suggests that researchers should forsake the general notion of sexual orientation altogether and assess only those components that are relevant for the research question being investigated. For example:
- To assess STDs or HIV transmission, measure sexual behaviour
- To assess interpersonal attachments, measure sexual/romantic attraction
- To assess political ideology, measure sexual identity
Means of assessment
Means typically used include surveys, interviews, cross-cultural studies, physical arousal measurements, sexual behavior, sexual fantasy, or a pattern of erotic arousal. The most common is verbal self-reporting or self-labeling, which depends on respondents being accurate about themselves.
Studying human sexual arousal has proved a fruitful way of understanding how men and women differ as genders and in terms of sexual orientation. A clinical measurement may use penile or vaginal photoplethysmography, where genital engorgement with blood is measured in response to exposure to different erotic material.
Some researchers who study sexual orientation argue that the concept may not apply similarly to men and women. A study of sexual arousal patterns found that women, when viewing erotic films showing female-female, male-male, and male-female sexual activity (oral sex or penetration), have patterns of arousal which do not match their declared sexual orientations as well as men's do. That is, heterosexual and lesbian women's sexual arousal to erotic films does not differ significantly by the gender of the participants (male or female) or by the type of sexual activity (heterosexual or homosexual). On the contrary, men's sexual arousal patterns tend to be more in line with their stated orientations, with heterosexual men showing more penile arousal to female-female sexual activity and less arousal to female-male and male-male sexual stimuli, and homosexual and bisexual men being more aroused by films depicting male-male intercourse and less aroused by other stimuli.
Another study on men and women's patterns of sexual arousal confirmed that men and women have different patterns of arousal, independent of their sexual orientations. The study found that women's genitals become aroused to both human and nonhuman stimuli from movies showing humans of both genders having sex (heterosexual and homosexual) and from videos showing non-human primates (bonobos) having sex. Men did not show any sexual arousal to non-human visual stimuli, their arousal patterns being in line with their specific sexual interest (women for heterosexual men and men for homosexual men).
These studies suggest that men and women are different in terms of sexual arousal patterns and that this is also reflected in how their genitals react to sexual stimuli of both genders or even to non-human stimuli. Sexual orientation has many dimensions (attractions, behavior, identity), of which sexual arousal is the only product of sexual attractions which can be measured at present with some degree of physical precision. Thus, the fact that women are aroused by seeing non-human primates having sex does not mean that women's sexual orientation includes this type of sexual interest. Some researchers argue that women's sexual orientation depends less on their patterns of sexual arousal than men's and that other components of sexual orientation (like emotional attachment) must be taken into account when describing women's sexual orientations. In contrast, men's sexual orientations tend to be primarily focused on the physical component of attractions and, thus, their sexual feelings are more exclusively oriented according to sex.
More recently, scientists have started to focus on measuring changes in brain activity related to sexual arousal, by using brain-scanning techniques. A study on how heterosexual and homosexual men's brains react to seeing pictures of naked men and women has found that both hetero- and homosexual men react positively to seeing their preferred sex, using the same brain regions. The only significant group difference between these orientations was found in the amygdala, a brain region known to be involved in regulating fear.
Although these findings have contributed to understanding how sexual arousal can differentiate between genders and sexual orientations, it is still a matter of debate whether these results reflect differences which are the result of social learning or genetic or biological factors. Further studies are needed to clarify how much of people's reactions to sexual stimuli of their preferred gender are due to learned or innate factors.
Research suggests that sexual orientation is independent of cultural and other social influences, but that open identification of one's sexual orientation may be hindered by homophobic or heterosexist settings. Social systems such as religion, language and ethnic traditions can have a powerful impact on the realization of sexual orientation. Influences of culture may complicate the process of measuring sexual orientation. The majority of empirical and clinical research on LGBT populations is done with largely white, middle-class, well-educated samples; however, there are pockets of research that document various other cultural groups, although these are frequently limited in the diversity of gender and sexual orientation of the subjects. Integration of sexual orientation with sociocultural identity may be a challenge for LGBT individuals. Individuals may or may not consider their sexual orientation to define their sexual identity, as they may experience various degrees of fluidity of sexuality, or may simply identify more strongly with another aspect of their identity such as family role. American culture puts a great emphasis on individual attributes, and views the self as unchangeable and constant. In contrast, East Asian cultures put a great emphasis on a person's social role within social hierarchies, and view the self as fluid and malleable. These differing cultural perspectives have many implications for cognitions of the self, including the perception of sexual orientation.
Translation is a major obstacle when comparing different cultures. Many English terms lack equivalents in other languages, while concepts and words from other languages fail to be reflected in the English language. Translation and vocabulary obstacles are not limited to the English language. Language can force individuals to identify with a label that may or may not accurately reflect their true sexual orientation. Language can also be used to signal sexual orientation to others. The meanings of words referencing categories of sexual orientation are negotiated in the mass media in relation to social organization. New words may be brought into use to describe new terms or better describe complex interpretations of sexual orientation. Other words may pick up new layers of meaning. For example, the heterosexual Spanish terms marido and mujer for "husband" and "wife", respectively, have recently been replaced in Spain by the gender-neutral terms cónyuges or consortes, meaning "spouses".
One person may presume knowledge of another person's sexual orientation based upon perceived characteristics, such as appearance, clothing, tone of voice, and accompaniment by and behavior with other people. The attempt to detect sexual orientation in social situations is known as gaydar; some studies have found that guesses based on face photos perform better than chance. Research from 2015 suggests that "gaydar" is simply an alternate label for using LGBT stereotypes to infer orientation, and that face shape is not an accurate indicator of orientation.
Perceived sexual orientation may affect how a person is treated. For instance, in the United States, the FBI reported that 15.6% of hate crimes reported to police in 2004 were "because of a sexual-orientation bias". Under the UK Employment Equality (Sexual Orientation) Regulations 2003, as explained by Advisory, Conciliation and Arbitration Service, "workers or job applicants must not be treated less favourably because of their sexual orientation, their perceived sexual orientation or because they associate with someone of a particular sexual orientation".
In Euro-American cultures, sexual orientation is defined by the gender(s) of the people a person is romantically or sexually attracted to. Euro-American culture generally assumes heterosexuality, unless otherwise specified. Cultural norms, values, traditions and laws facilitate heterosexuality, including constructs of marriage and family. Efforts are being made to change these attitudes, and legislation is being passed to promote equality.
Some other cultures do not recognize a homosexual/heterosexual/bisexual distinction. It is common to distinguish a person's sexuality according to their sexual role (active/passive; insertive/penetrated). In this distinction, the passive role is typically associated with femininity and/or inferiority, while the active role is typically associated with masculinity and/or superiority. For example, an investigation of a small Brazilian fishing village revealed three sexual categories for men: men who have sex only with men (consistently in a passive role), men who have sex only with women, and men who have sex with women and men (consistently in an active role). While men who consistently occupied the passive role were recognized as a distinct group by locals, men who have sex with only women, and men who have sex with women and men, were not differentiated. Little is known about same-sex attracted females, or sexual behavior between females in these cultures.
Racism and ethnically relevant support
In the United States, non-Caucasian LGBT individuals may find themselves in a double minority, where they are neither fully accepted nor understood by mainly Caucasian LGBT communities, nor accepted by their own ethnic group. Many people experience racism in the dominant LGBT community, where racial stereotypes merge with gender stereotypes, such that Asian-American LGBT people are viewed as more passive and feminine, while African-American LGBT people are viewed as more masculine and aggressive. There are a number of culturally specific support networks for LGBT individuals active in the United States; one example is "Ô-Môi", a network for Vietnamese American queer females.
Sexuality in the context of religion is often a controversial subject, especially that of sexual orientation. In the past, various sects have viewed homosexuality from a negative point of view and had punishments for same-sex relationships. In modern times, an increasing number of religions and religious denominations accept homosexuality. It is possible to integrate sexual identity and religious identity, depending on the interpretation of religious texts.
Some religious organizations object to the concept of sexual orientation entirely. In the 2014 revision of the code of ethics of the American Association of Christian Counselors, members are forbidden to "describe or reduce human identity and nature to sexual orientation or reference," even while counselors must acknowledge the client’s fundamental right to self-determination.
Internet and media
The internet has influenced sexual orientation in two ways: it is a common mode of discourse on the subject of sexual orientation and sexual identity, and therefore shapes popular conceptions; and it allows anonymous attainment of sexual partners, as well as facilitates communication and connection between greater numbers of people.
The multiple aspects of sexual orientation and the boundary-drawing problems already described create methodological challenges for the study of the demographics of sexual orientation. Determining the frequency of various sexual orientations in real-world populations is difficult and controversial.
Most modern scientific surveys find that the majority of people report a mostly heterosexual orientation. However, the relative percentage of the population that reports a homosexual orientation varies with differing methodologies and selection criteria. Most of these statistical findings are in the range of 2.8 to 9% of males, and 1 to 5% of females for the United States – this figure can be as high as 12% for some large cities and as low as 1% for rural areas.
Estimates for the percentage of the population that are bisexual vary widely, at least in part due to differing definitions of bisexuality. Some studies only consider a person bisexual if they are nearly equally attracted to both sexes, and others consider a person bisexual if they are at all attracted to the same sex (for otherwise mostly heterosexual persons) or to the opposite sex (for otherwise mostly homosexual persons). A small percentage of people are not sexually attracted to anyone (asexuality). A study in 2004 placed the prevalence of asexuality at 1%.
In the oft-cited and oft-criticized Sexual Behavior in the Human Male (1948) and Sexual Behavior in the Human Female (1953), by Alfred C. Kinsey et al., people were asked to rate themselves on a scale from completely heterosexual to completely homosexual. Kinsey reported that when the individuals' behavior as well as their identity are analyzed, most people appeared to be at least somewhat bisexual — i.e., most people have some attraction to either sex, although usually one sex is preferred. According to Kinsey, only a minority (5–10%) can be considered fully heterosexual or homosexual. Conversely, only an even smaller minority can be considered fully bisexual (with an equal attraction to both sexes). Kinsey's methods have been criticized as flawed, particularly with regard to the randomness of his sample population, which included prison inmates, male prostitutes and those who willingly participated in discussion of previously taboo sexual topics. Nevertheless, Paul Gebhard, subsequent director of the Kinsey Institute for Sex Research, reexamined the data in the Kinsey Reports and concluded that removing the prison inmates and prostitutes barely affected the results.
Social constructionism and Western societies
Because sexual orientation is complex and multi-dimensional, some academics and researchers, especially in queer studies, have argued that it is a historical and social construction. In 1976, philosopher and historian Michel Foucault argued in The History of Sexuality that homosexuality as an identity did not exist in the eighteenth century; that people instead spoke of "sodomy," which referred to sexual acts. Sodomy was a crime that was often ignored, but sometimes punished severely (see sodomy law). He wrote, "'Sexuality' is an invention of the modern state, the industrial revolution, and capitalism."
Sexual orientation is argued to be a concept that evolved in the industrialized West, and there is a controversy as to the universality of its application in other societies or cultures. Non-westernized concepts of male sexuality differ essentially from the way sexuality is seen and classified under the Western system of sexual orientation. The validity of the notion of sexual orientation as defined in the West, as a biological phenomenon rather than a social construction specific to a region and period, has also been questioned within industrialized Western societies.
Heterosexuality and homosexuality are terms often used in European and American cultures to encompass a person's entire social identity, which includes self and personality. In Western cultures, some people speak meaningfully of gay, lesbian, and bisexual identities and communities. In other cultures, homosexuality and heterosexual labels do not emphasize an entire social identity or indicate community affiliation based on sexual orientation.
Some historians and researchers argue that the emotional and affectionate activities associated with sexual-orientation terms such as "gay" and "heterosexual" change significantly over time and across cultural boundaries. For example, in many English-speaking nations, it is assumed that same-sex kissing, particularly between men, is a sign of homosexuality, whereas various types of same-sex kissing are common expressions of friendship in other nations. Also, many modern and historic cultures have formal ceremonies expressing long-term commitment between same-sex friends, even though homosexuality itself is taboo within the cultures.
Law, politics and theology
Professor Michael King stated, "The conclusion reached by scientists who have investigated the origins and stability of sexual orientation is that it is a human characteristic that is formed early in life, and is resistant to change. Scientific evidence on the origins of homosexuality is considered relevant to theological and social debate because it undermines suggestions that sexual orientation is a choice."
Legally as well, a person's sexual orientation is hard to establish as either an intrinsic or a binary quality. In 1999, law professor David Cruz wrote that "sexual orientation (and the related concept homosexuality) might plausibly refer to a variety of different attributes, singly or in combination. What is not immediately clear is whether one conception is most suited to all social, legal, and constitutional purposes."
- Romantic orientation
- Ascribed characteristics
- Bisexuality in the United States
- Hate crime and Homophobia
- History of gay men in the United States
- History of lesbianism in the United States
- LGBT (Lesbian, Gay, Bisexual, and Transgender)
- List of anti-discrimination acts
- LGBT rights by country or territory
- Fundamental Rights Agency
- Human male sexuality, including non-western perspectives on sexual orientation
- Marriage and Same-sex marriage
- Sexual orientation and military service
- Sexual orientation hypothesis
- Terminology of homosexuality
- Sociosexual orientation
- Sexual orientation and gender identity at the United Nations
- "Sexual orientation, homosexuality and bisexuality". American Psychological Association. Archived from the original on August 8, 2013. Retrieved August 10, 2013.
- "Sexual Orientation". American Psychiatric Association. Archived from the original on July 22, 2011. Retrieved January 1, 2013.
- Melby, Todd (November 2005). "Asexuality gets more attention, but is it a sexual orientation?". Contemporary Sexuality. 39 (11): 1, 4–5.
- Marshall Cavendish Corporation, ed. (2009). "Asexuality". Sex and Society. 2. Marshall Cavendish. pp. 82–83. ISBN 978-0-7614-7905-5. Retrieved February 2, 2013.
- Firestein, Beth A. (2007). Becoming Visible: Counseling Bisexuals Across the Lifespan. Columbia University Press. p. 9. ISBN 0231137249. Retrieved October 3, 2012.
- "Case No. S147999 in the Supreme Court of the State of California, In re Marriage Cases Judicial Council Coordination Proceeding No. 4365(...) - APA California Amicus Brief — As Filed" (PDF). Page 33 n. 60 (p. 55 per Adobe Acrobat Reader);citation per id., Brief, p. 6 n. 4 (p. 28 per Adobe Acrobat Reader). p. 30. Retrieved March 13, 2013.
- Schmidt J (2010). Migrating Genders: Westernisation, Migration, and Samoan Fa'afafine, p. 45 Ashgate Publishing, Ltd., ISBN 978-1-4094-0273-2
- "Avoiding Heterosexual Bias in Language" (PDF). American Psychological Association. Retrieved July 19, 2011.
- Rosario, M.; Schrimshaw, E.; Hunter, J.; Braun, L. (2006). "Sexual identity development among lesbian, gay, and bisexual youths: Consistency and change over time". Journal of Sex Research. 43 (1): 46–58. doi:10.1080/00224490609552298.
- Friedman, Lawrence Meir (1990). The republic of choice: law, authority, and culture. Harvard University Press. p. 92. ISBN 978-0-674-76260-2. Retrieved 8 January 2012.
- Heuer, Gottfried (2011). Sexual revolutions: psychoanalysis, history and the father. Taylor & Francis. p. 49. ISBN 978-0-415-57043-5. Retrieved 8 January 2011.
- Frankowski BL; American Academy of Pediatrics Committee on Adolescence (June 2004). "Sexual orientation and adolescents". Pediatrics. 113 (6): 1827–32. doi:10.1542/peds.113.6.1827. PMID 15173519.
- Gloria Kersey-Matusiak (2012). Delivering Culturally Competent Nursing Care. Springer Publishing Company. p. 169. ISBN 0826193811. Retrieved February 10, 2016.
Most health and mental health organizations do not view sexual orientation as a 'choice.'
- Mary Ann Lamanna, Agnes Riedmann, Susan D Stewart (2014). Marriages, Families, and Relationships: Making Choices in a Diverse Society. Cengage Learning. p. 82. ISBN 1305176898. Retrieved February 11, 2016.
The reason some individuals develop a gay sexual identity has not been definitively established – nor do we yet understand the development of heterosexuality. The American Psychological Association (APA) takes the position that a variety of factors impact a person's sexuality. The most recent literature from the APA says that sexual orientation is not a choice that can be changed at will, and that sexual orientation is most likely the result of a complex interaction of environmental, cognitive and biological factors...is shaped at an early age...[and evidence suggests] biological, including genetic or inborn hormonal factors, play a significant role in a person's sexuality (American Psychological Association 2010).
- Gail Wiscarz Stuart (2014). Principles and Practice of Psychiatric Nursing. Elsevier Health Sciences. p. 502. ISBN 032329412X. Retrieved February 11, 2016.
No conclusive evidence supports any one specific cause of homosexuality; however, most researchers agree that biological and social factors influence the development of sexual orientation.
- "Submission to the Church of England's Listening Exercise on Human Sexuality". The Royal College of Psychiatrists. Retrieved 13 June 2013.
- Långström, N.; Rahman, Q.; Carlström, E.; Lichtenstein, P. (2008). "Genetic and Environmental Effects on Same-sex Sexual Behavior: A Population Study of Twins in Sweden". Archives of Sexual Behavior. 39 (1): 75–80. doi:10.1007/s10508-008-9386-1. PMID 18536986.
- Cruz, David B. (1999). "Controlling Desires: Sexual Orientation Conversion and the Limits of Knowledge and Law" (PDF). Southern California Law Review. 72: 1297. Retrieved May 2015.
- Bogaert, Anthony F (2006). "Toward a conceptual understanding of asexuality". Review of General Psychology. 10 (3): 241–250. doi:10.1037/1089-2622.214.171.124.
- Bogaert, Anthony F (2004). "Asexuality: prevalence and associated factors in a national probability sample". Journal of Sex Research. 41 (3): 281. doi:10.1080/00224490409552235. PMID 15497056.
- Greene, B., & Herek, G. M. (eds.). (1994). Lesbian and Gay Psychology: Theory, Research, and Clinical Applications. Thousand Oaks, CA: Sage. p. 162. Quotation: "A second aspect of sexual identity, sexual orientation, refers to a person's choice of sexual partners: heterosexual, homosexual, or bisexual".
- American Psychological Association (2005). "Lesbian & Gay Parenting".
- Tasker, F.; Patterson, C. J. (2007). "Research on Lesbian and Gay Parenting: Retrospect and Prospect" (PDF). Journal of GLBT Family Studies. 3 (2–3): 9–34. doi:10.1300/J461v03n02_02.
- Reiter L. (1989). "Sexual orientation, sexual identity, and the question of choice". Clinical Social Work Journal. 17: 138–50. doi:10.1007/bf00756141.
- Ross, Michael W.; Essien, E. James; Williams, Mark L.; Fernandez-Esquer, Maria Eugenia. (2003). "Concordance Between Sexual Behavior and Sexual Identity in Street Outreach Samples of Four Racial/Ethnic Groups". Sexually Transmitted Diseases. American Sexually Transmitted Diseases Association. 30 (2): 110–3. doi:10.1097/00007435-200302000-00003. PMID 12567166.
- Rice, Kim (2009). "Pansexuality". In Marshall Cavendish Corporation. Sex and Society. 2. Marshall Cavendish. p. 593. ISBN 978-0-7614-7905-5. Retrieved October 3, 2012.
- Sinclair, Karen, About Whoever: The Social Imprint on Identity and Orientation, NY, 2013
- Aggrawal, Anil (2008). Forensic and medico-legal aspects of sexual crimes and unusual sexual practices. CRC Press, ISBN 978-1-4200-4308-2
- Diamond M (2010). Sexual orientation and gender identity. In Weiner IB, Craighead EW eds. The Corsini Encyclopedia of Psychology, Volume 4. p. 1578. John Wiley and Sons, ISBN 978-0-470-17023-6
- Schmidt J (2001). Redefining fa’afafine: Western discourses and the construction of transgenderism in Samoa. Intersections: Gender, history and culture in the Asian context
- Bagemihl B. Surrogate phonology and transsexual faggotry: A linguistic analogy for uncoupling sexual orientation from gender identity. In Queerly Phrased: Language, Gender, and Sexuality. Anna Livia, Kira Hall (eds.) pp. 380 ff. Oxford University Press ISBN 0-19-510471-4
- Minton HL (1986). "Femininity in men and masculinity in women: American psychiatry and psychology portray homosexuality in the 1930s". Journal of Homosexuality. 13 (1): 1–21. doi:10.1300/J082v13n01_01. PMID 3534080.
Terry, J. (1999). An American obsession: Science, medicine, and homosexuality in modern society. Chicago: University of Chicago Press
- Bailey JM, Zucker KJ (1995). "Childhood sex-typed behavior and sexual orientation: a conceptual analysis and quantitative review". Developmental Psychology. 31 (1): 43–55. doi:10.1037/0012-16126.96.36.199.
- Rodriguez Rust, Paula C. Bisexuality: A contemporary paradox for women, Journal of Social Issues, vol. 56(2), Summer 2000, pp. 205–221. Special Issue: Women's sexualities: New perspectives on sexual orientation and gender. Article online.
Also published in: Rodriguez Rust, Paula C. Bisexuality in the United States: A Social Science Reader. Columbia University Press, 2000. ISBN 0-231-10227-5.
- Butler, Katy (March 7, 2006). "Many Couples Must Negotiate Terms of 'Brokeback' Marriages". New York Times.
- Hentges, Rochelle (October 4, 2006). "How to tell if your husband is gay". Pittsburgh Tribune-Review.
- Gay Men from Heterosexual Marriages: Attitudes, Behaviors, Childhood Experiences, and Reasons for Marriage
- Stack, Peggy Fletcher (August 5, 2006), "Gay, Mormon, married", The Salt Lake Tribune
- "Gay No More". psychologytoday.com.
- Hays D; Samuels A (1989). "Heterosexual women's perceptions of their marriages to bisexual or homosexual men". J Homosex. 18 (1–2): 81–100. doi:10.1300/J082v18n01_04. PMID 2794500.
- Coleman E (1981). "Bisexual and gay men in heterosexual marriage: conflicts and resolutions in therapy". J Homosex. 7 (2–3): 93–103. doi:10.1300/J082v07n02_11. PMID 7346553.
- Matteson DR (1985). "Bisexual men in marriage: is a positive homosexual identity and stable marriage possible?". J Homosex. 11 (1–2): 149–71. doi:10.1300/J082v11n01_12. PMID 4056386.
- Sinclair, Karen, About Whoever: The Social Imprint on Identity and Orientation, NY, 2013 ISBN 9780981450513
- "Question A2: Sexual orientation". Centre for Addiction and Mental Health. Retrieved 3 February 2015.
- "LGBT-Sexual Orientation: What is Sexual Orientation?", the official web pages of APA. Accessed April 9, 2015
- "Appropriate Therapeutic Responses to Sexual Orientation" (PDF). American Psychological Association. 2009: 63, 86. Retrieved February 3, 2015. Cite error: Invalid
<ref>tag; name "apa2009" defined multiple times with different content (see the help page).
- Susan Moore and Doreen Rosenthal, Sexuality in Adolescence: Current Trends (E. Sussex/London: Routledge (Adolescence and Society ser.), 2d ed., pbk., 2006), p. 48 (authors respectively developmental social psychologist, Swinburne Univ. Melbourne, & developmental psychologist, Univ. of Melbourne, id., cover IV).
- Gonsiorek, John C., Randall L Sell, & James D. Weinrich, Definition and Measurement of Sexual Orientation, op. cit., citing Golden, C., Our Politics and Choices: The Feminist Movement and Sexual Orientation, in B. Greene & G. Herek, eds., Lesbian. and Gay Psychology: Theory, Research and Clinical Applications (Thousand Oaks, Calif.: Sage 1994), vol. 1, pp. 54–70 (sic: period so in title).
- Ruse, Michael (1988). Homosexuality: A Philosophical Inquiry. Oxford: Basil Blackwell. pp. 22, 25, 45. ISBN 0 631 15275 X.
- Bearman, P. S.; Bruckner, H. (2002). "Opposite-sex twins and adolescent same-sex attraction" (PDF). American Journal of Sociology. 107: 1179–1205. doi:10.1086/341906.
- Vare, Jonatha W., and Terry L. Norton. "Understanding Gay and Lesbian Youth: Sticks, Stones and Silence." The Clearing House 71.6 (1998): 327-331: Education Full Text (H.W. Wilson). Web. 19 Apr. 2012.
- Wilson, G., & Q. Rahman, Born Gay: The Psychobiology of Human Sex Orientation, op. cit.
- Siiteri, PK; Wilson, JD (Jan 1974). "Testosterone formation and metabolism during male sexual differentiation in the human embryo.". The Journal of Clinical Endocrinology and Metabolism. 38 (1): 113–25. doi:10.1210/jcem-38-1-113. PMID 4809636.
- LeVay, Simon (2011). Gay, Straight, and the reason why. The science of sexual orientation. Oxford University Press. pp. 45–71; 129–156. ISBN 978-0-19-993158-3.
- Blanchard, R.; Cantor, J. M.; Bogaert, A. F.; Breedlove, S. M.; Ellis, L. (2006). "Interaction of fraternal birth order and handedness in the development of male homosexuality" (PDF). Hormones and Behavior. 49 (3): 405–414. doi:10.1016/j.yhbeh.2005.09.002. PMID 16246335.
- Anthony F. Bogaert; Malvina Skorska (April 2011). "Sexual orientation, fraternal birth order, and the maternal immune hypothesis: a review". Frontiers in neuroendocrinology. 32 (2): 247–254. doi:10.1016/j.yfrne.2011.02.004. PMID 21315103.
- "Different aspects of sexual orientation may be influenced to a greater or lesser degree [p. 303:] by experiential factors such that sexual experimentation with same-gender partners may be more dependent on a conducive family environment than the development of a gay or lesbian identity." Susan E. Golombok & Fiona L. Tasker, Do Parents Influence the Sexual Orientation of Their Children?, in J. Kenneth Davidson, Sr., & Nelwyn B. Moore, Speaking of Sexuality: Interdisciplinary Readings (Los Angeles, Calif.: Roxbury Publishing, 2001) (ISBN 1-891487-33-7), pp. 302–303 (adapted from same authors, Do Parents Influence the Sexual Orientation of Their Children? Findings From a Longitudinal Study of Lesbian Families, in Developmental Psychology (American Psychological Association), vol. 32, 1996, 3–11) (author Susan Golombok prof. psychology, City Univ., London, id., p. xx, & author Fiona Tasker sr. lecturer, Birkbeck Coll., Univ. of London, id., p. xxiii).
- "Whereas there is no evidence from the present investigation to suggest that parents have a determining influence on the sexual orientation of their children, the findings do indicate that by creating a climate of acceptance or rejection of homosexuality within the family, parents may have some impact on their children's sexual experimentation as heterosexual, lesbian, or gay." Do Parents Influence the Sexual Orientation of Their Children?, ibid., in Speaking of Sexuality, id., p. 303 (adapted per id., p. 303).
- Appropriate Therapeutic Responses to Sexual Orientation
- Expert affidavit of Gregory M. Herek, Ph.D.
- Statement from the Royal College of Psychiatrists' Gay and Lesbian Mental Health Special Interest Group.
- Australian Psychological Society: Sexual orientation and homosexuality.
- Appropriate Therapeutic Responses to Sexual Orientation.
- ""Therapies" to change sexual orientation lack medical justification and threaten health". Pan American Health Organization. Retrieved May 26, 2012. archived here.
- NARTH Mission Statement
- Spitzer R. L. (1981). "The diagnostic status of homosexuality in DSM-III: a reformulation of the issues". American Journal of Psychiatry. 138 (2): 210–15. doi:10.1176/ajp.138.2.210. PMID 7457641.
- "An Instant Cure", Time; April 1, 1974.
- The A.P.A. Normalization of Homosexuality, and the Research Study of Irving Bieber
- Statement of the American Psychological Association.
- Ulrichs, Karl Heinrich (1994). The Riddle of Man-Manly Love. Prometheus Books. ISBN 0-87975-866-X.
- Hirschfeld, Magnus, 1896. Sappho und Socrates, Wie erklärt sich die Liebe der Männer und Frauen zu Personen des eigenen Geschlechts? (Sappho and Socrates, How Can One Explain the Love of Men and Women for Individuals of Their Own Sex?).
- Kinsey; et al. (1953). Sexual Behavior in the Human Female. Indiana University Press. p. 499. ISBN 4-87187-704-3.
- Kinsey; et al. (1948). Sexual Behavior in the Human Male. Indiana University Press. ISBN 0-253-33412-8.
- Kinsey; et al. (1948). Sexual Behavior in the Human Male. p. 639.
- Kinsey; et al. (1948). Sexual Behavior in the Human Male. pp. 639–641.
- Masters and Johnson (1979). Homosexuality in Perspective. ISBN 0-316-54984-3.
- Weinrich, J.; et al. (1993). "A factor analysis of the Klein Sexual Orientation Grid in two disparate samples.". Archives of Sexual Behavior. 22: 157–168. doi:10.1007/bf01542364.
- Weinberg; et al. (1994). Dual Attraction. Oxford University Press. ISBN 0-19-508482-9.
- Sell, R.L. (1997). "Defining and measuring sexual orientation: A review". Archives of Sexual Behavior. 26: 643–58. doi:10.1023/A:1024528427013. PMID 9415799.
- Bem, S.L. (1981). Bem sex-role inventory professional manual. Palo Alto, CA: Consulting Psychologists Press.
- Shively, M.G.; DeCecco, J.P. (1977). "Components of sexual identity". Journal of Homosexuality. 3: 41–48. doi:10.1300/j082v03n01_04.
- Sell, R.L. (1996). "The Sell assessment of sexual orientation: Background and scoring". Journal of Lesbian, Gay, and Bisexual Identity. 1: 295–310.
- Udry, J.; Chantala, K. (2005). "Risk factors differ according to same sex and opposite-sex interest.". Journal of Biological Sciences. 37: 481–497. doi:10.1017/s0021932004006765.
- Laumann; et al. (1994). The Social Organization of Sexuality. The University of Chicago Press. p. 67. ISBN 0-226-46957-3.
- Eskin, M.; et al. (2005). "Same-sex sexual orientation, childhood sexual abuse, and suicidal behaviour in university students in Turkey". Archives of Sexual Behavior. 34 (2): 185–195. doi:10.1007/s10508-005-1796-8.
- D'Augelli; Hershberger, SL; Pilkington, NW; et al. (2001). "Suicidality patterns and sexual orientation-related factors among lesbian, gay, and bisexual youths". Suicide and Life-Threatening Behavior. 31 (3): 250–264. doi:10.1521/suli.188.8.131.5246. PMID 11577911.
- Rust, Paula (2000). Bisexuality in the United States: A Social Science Reader. Columbia University Press. p. 167. ISBN 0-231-10226-7.
- Savin-Williams, R. (2006). "Who's Gay? Does it Matter?". Current Directions in Psychological Sciences. 15: 40–44. doi:10.1111/j.0963-7214.2006.00403.x.
- Laumann; et al. (1994). The Social Organization of Sexuality. The University of Chicago Press. p. 301. ISBN 0-226-46957-3.
- Mosher, W; Chandra, A.; Jones, J. "Sexual behaviour and selected health measures: Men and women 15-44 years of age, United States, 2002". Advance Data from Vital and Health Statistics. National Center for Health Statistics. 362.
- Savin-Williams, R.; Ream, G.L. (2003). "Suicide attempts among sexual-minority male youth". Journal of Clinical Child and Adolescent Psychology. 32 (4): 509–522. doi:10.1207/S15374424JCCP3204_3. PMID 14710459.
- Laumann; et al. (1994). The Social Organization of Sexuality. The University of Chicago Press. ISBN 0-226-46957-3.
- Dunne, M.; Bailey, J.; Kirk, K.; Martin, N. (2000). "The subtlety of sex -atypicality". Archives of Sexual Behavior. 29 (6): 549–565. doi:10.1023/A:1002002420159. PMID 11100262.
- Eskin, Mehmet; Kaynak-Demir, Hadiye; Demir, Sinem (2005). "Same-sex sexual orientation, childhood sexual abuse, and suicidal behaviour in university students in Turkey". Archives of Sexual Behavior. 34 (2): 185–195. doi:10.1007/s10508-005-1796-8.
- Wichstrom, L.; Hegna, K. (2003). "Sexual orientation and suicide attempt: A longitudinal study of the general Norwegian adolescent population". Journal of Abnormal Psychology. 112 (1): 144–151. doi:10.1037/0021-843X.112.1.144. PMID 12653422.
- Laumann; et al. (1994). The Social Organization of Sexuality. The University of Chicago Press. p. 303. ISBN 0-226-46957-3.
- Diamond, L.M. (2003). "Was it a phase? Young women's relinquishment of lesbian/bisexual identities over a 5-year period". Journal of Personality and Social Psychology. 84 (2): 352–364. doi:10.1037/0022-35184.108.40.2062. PMID 12585809.
- Laumann; et al. (1994). The Social Organization of Sexuality. The University of Chicago Press. p. 289. ISBN 0-226-46957-3.
- Savin-Williams, R. (2006). "Who's Gay? Does it Matter?". Current Directions in Psychological Science. 15: 40–44. doi:10.1111/j.0963-7214.2006.00403.x.
- Wilson, G., & Q. Rahman, Born Gay: The Psychobiology of Human Sex Orientation (London: Peter Owen Publishers, 2005), p. 21.
- Chivers, Meredith L.; Gerulf Rieger; Elizabeth Latty; J. Michael Bailey (2004). "A Sex Difference in the Specificity of Sexual Arousal". Psychological Science. Blackwell Publishing. 15 (11): 736–44. doi:10.1111/j.0956-7976.2004.00750.x. PMID 15482445.
- Chivers, Meredith L.; J. Michael Bailey. (2005). "A sex difference in features that elicit genital response". Biological Psychology. Elsevier B.V. 70 (2): 115–20. doi:10.1016/j.biopsycho.2004.12.002. PMID 16168255.
- Safron, Adam; Bennett, Barch; Bailey, J. Michael; Gitelman, Darren R.; Parrish, Todd B.; Reber, Paul J. (2007). "Neural Correlates of Sexual Arousal in Homosexual and Heterosexual Men". Behavioral Neuroscience. American Psychological Association. 121 (2): 237–48. doi:10.1037/0735-7044.121.2.237. PMID 17469913.
- LeDoux JE, The Emotional Brain (N.Y.: Simon & Schuster, 1996).
- Garnets, L. & Kimmel, D. C. (Eds.). (2003). Psychological perspectives on lesbian, gay and bisexual experiences. New York: Columbia University Press
- Mock, S. E.; Eibach, R. P. (2011). "Stability and change in sexual orientation identity over a 10-year period in adulthood". Archives of Sexual Behavior. 41: 641–648. doi:10.1007/s10508-011-9761-1.
- Markus H. R.; Kitayama S. (1991). "Culture and the self: Implications for cognition, emotion, and motivation". Psychological Review. 98: 224–253. doi:10.1037/0033-295X.98.2.224.
- Minwalla O.; Rosser B. R. S.; Feldman J.; Varga C. (2005). "Identity experience among progressive gay Muslims in North America: A qualitative study within Al-Fatiha". Culture, Health & Sexuality. 7 (2): 113–128. doi:10.1080/13691050412331321294.
- Sechrest, L.; Fay, T. L.; Zaidi, M. H. (1972). "Problems of Translation in Cross-Cultural Research". Journal of Cross-Cultural Research. 3 (1): 41–56. doi:10.1177/002202217200300103.
- Santaemilia, J. (2008). 'War of words' on New (Legal) Sexual Identities: Spain's Recent Gender-Related Legislation and Discursive Conflict. In J. Santaemilia & P. Bou (Eds.). Gender and sexual identities in transition: international perspectives, pp.181-198. Newcastle: Cambridge Scholars Publishing.
- Leap, W. L. (1996). Word's Out: Gay Men's English. Minneapolis:University of Minnesota Press.
- Rule, NO (2011). "The influence of target and perceiver race in the categorisation of male sexual orientation". Perception. 40 (7): 830–9. doi:10.1068/p7001. PMID 22128555.
- Johnson, KL; Ghavami, N (2011). Gilbert, Sam, ed. "At the crossroads of conspicuous and concealable: What race categories communicate about sexual orientation". PLoS ONE. 6 (3): e18025. doi:10.1371/journal.pone.0018025. PMID 21483863.
- Rule, NO; Ishii, K; Ambady, N; Rosen, KS; Hallett, KC (2011). "Found in translation: Cross-cultural consensus in the accurate categorization of male sexual orientation". Personality and Social Psychology Bulletin. 37 (11): 1499–507. doi:10.1177/0146167211415630. PMID 21807952.
- Cox, William T. L.; Devine, Patricia G.; Bischmann, Alyssa A.; Hyde, Janet S. (2015). "Inferences About Sexual Orientation: The Roles of Stereotypes, Faces, and The Gaydar Myth". The Journal of Sex Research. 52 (8): 1–15. doi:10.1080/00224499.2015.1015714.
- "Crime in the United States 2004: Hate Crime". FBI. Archived from the original on April 11, 2007. Retrieved 2007-05-04.
- ACAS (About Us), as accessed Apr. 19, 2010.
- Sexual orientation and the workplace: Putting the Employment Equality (Sexual Orientation) Regulations 2003 into practice
- Rust, P. C. (2003). Finding a Sexual Identity and Community: Therapeutic Implications and Cultural Assumptions in Scientific Models of Coming Out. In L. Garnets & D. C. Kimmel (Eds.). Psychological perspectives on lesbian, gay and bisexual experiences (pp. 227-269). New York: Columbia University Press
- Carballo-Diéguez A.; Dolezal C.; Nieves L.; Díaz F.; Decena C.; Balan I. (2004). "Looking for a tall, dark, macho man… sexual-role behaviour variations in Latino gay and bisexual men". Culture, Health & Sexuality. 6 (2): 159–171. doi:10.1080/13691050310001619662.
- Cardoso F. L. (2005). "Cultural Universals and Differences in Male Homosexuality: The Case of a Brazilian Fishing Village". Archives of Sexual Behavior. 34 (1): 103–109. doi:10.1007/s105080051004x.
- Cheng, P (2011). "Gay Asian Masculinities and Christian Theologies". Cross currents. 61 (4): 540–548. doi:10.1111/j.1939-3881.2011.00202.x.
- Masequesmay, G (2003). "Emergence of queer Vietnamese America". Amerasia Journal. 29 (1): 117–134. doi:10.17953/amer.29.1.l15512728mj65738.
- "Code of Ethics of the American Association of Christian Counselors" (PDF). www.aacc.net. American Association of Christian Counselors. Retrieved May 2015. Check date values in:
- Davis, M.; Hart, G.; Bolding, G.; Sherr, L.; Elford, J. (2006). "Sex and the Internet: Gay men, risk reduction and serostatus". Culture, Health & Sexuality. 8 (2): 161–174. doi:10.1080/13691050500526126.
- James Alm, M. V. Lee Badgett, Leslie A. Whittington, Wedding Bell Blues: The Income Tax Consequences of Legalizing Same-Sex Marriage, p. 24 (1998) PDF link.
- "Study: One in 100 adults asexual". CNN. 15 October 2004. Archived from the original on 27 October 2007. Retrieved 11 November 2007.
- "The Kinsey Institute - [Publications]". kinseyinstitute.org.
- Chinese Femininities, Chinese Masculinities: A Reader, by Susan Brownell & Jeffrey N. Wasserstrom (Univ. of Calif. Press, 2002 (ISBN 0520221168, ISBN 978-0-520-22116-1)). Quote: "The problem with sexuality: Some scholars have argued that maleness and femaleness were not closely linked to sexuality in China. Michel Foucault's The History of Sexuality (which deals primarily with Western civilization and western Europe) began to influence some China scholars in the 1980s. Foucault's insight was to demonstrate that sexuality has a history; it is not fixed psycho-biological drive that is the same for all humans according to their sex, but rather it is a cultural construct inseparable from gender constructs. After unmooring sexuality from biology, he anchored it in history, arguing that this thing we now call sexuality came into existence in the eighteenth-century West and did not exist previously in this form. "Sexuality" is an invention of the modern state, the industrial revolution, and capitalism. Taking this insight as a starting point, scholars have slowly been compiling the history of sexuality in China. The works by Tani Barlow, discussed above, were also foundational in this trend. Barlow observes that, in the West, heterosexuality is the primary site for the production of gender: a woman truly becomes a woman only in relation to a man's heterosexual desire. By contrast, in China before the 1920s the "jia" (linage unit, family) was the primary site for the production of gender: marriage and sexuality were to serve the lineage by producing the next generation of lineage members; personal love and pleasure were secondary to this goal. Barlow argues that this has two theoretical implications: (1) it is not possible to write a Chinese history of heterosexuality, sexuality as an institution, and sexual identities in the European metaphysical sense, and (2) it is not appropriate to ground discussions of Chinese gender processes in the sexed body so central in "Western" gender processes. Here she echoes Furth's argument that, before the early twentieth century, sex-identity grounded on anatomical difference did not hold a central place in Chinese constructions of gender. And she echoes the point illustrated in detail in Sommer's chapter on male homosexuality in the Qing legal code: a man could engage in homosexual behavior without calling into question his manhood so long as his behavior did not threaten the patriarchal Confucian family structure."
- The Psychology of Sexual Orientation, Behavior, and identity, by Louis Diamant & Richard D. McAnulty (Greenwood Publishing Group, 1995 (ISBN 0313285012, ISBN 978-0-313-28501-1) (522 pages). Quote from page 81: Although sexual orientation is a loaded Western concept, the term is still a useful one, if we avoid imposing Western thoughts and meanings associated with our language on non-Western, non contemporary cultures.
- The Handbook of Social Work Direct Practice, by Paula Allen-Meares & Charles D. Garvin & Contributors Paula Allen-Meares & Charles D. Garvin (SAGE, 2001 (ISBN 0761914994, ISBN 978-0-7619-1499-0) (733 pages). Quote from page 478: The concept of sexual orientation is a product of contemporary Western thought.
- Sexual behavior and the non-construction of sexual identity: Implications for the analysis of men who have sex with men and women who have sex with women., [by?] Michael W. Ross & Ann K. Brooks. Quote from Page 9: Chou (2000) notes in his analysis of the lack of applicability of western concepts of sexual identity in China, just because a person has a particular taste for a specific food doesn't mean that we label them in terms of the food that they prefer. A similar approach to sexual appetite as not conferring identity may be operating in this sample. McIntosh (1968) has previously noted that people who do not identify with the classic western, white gay/lesbian role may not necessarily identify their behavior as homosexual;
- Transnational Transgender: Reading Sexual Diversity in Cross-Cultural Contexts Through Film and Video [by?] Ryan, Joelle Ruby (American Studies Association). Quote: Many of the projects which have historically investigated sex/gender variance in non-Western contexts have been ethnographies and anthropological studies. Due to strong and lingering problems with ethnocentrism, many of these research studies have attempted to transpose a Western understanding of sex, gender and sexuality onto cultures in Asia, Latin America and Africa. Terms such as "homosexual," "transvestite," and "transsexual" all arose out of Western concepts of identity based on science, sexology and medicine and often bear little resemblance to sex/gender/sexuality paradigms in the varied cultures of the developing world.
- "Sexual Orientation, Human Rights and Global Politics" (PDF).
- Waits, Matthew. "Matthew Waits of Dept. Sociology, Anthropology & Applied Social Sciences, University of Glasgow, United Kingdom". Quote from the Abstract: The paper problematises utilisation of the concept of 'sexual orientation' in moves to revise human rights conventions and discourses in the light of social constructionist and queer theory addressing sexuality, which has convincingly suggested that 'sexual orientation' is a culturally specific concept, misrepresenting many diverse forms of sexuality apparent in comparative sociological and anthropological research conducted worldwide. I will argue in particular that 'orientation' is a concept incompatible with bisexuality when interpreted within the context of dominant dualistic assumptions about sex, gender and desire in western culture (suggested by Judith Butler's concept of the 'heterosexual matrix'). I will discuss the implications of the this for interpreting contemporary struggles among competing social movements, NGO and governmental actors involved in contesting the relationship of sexuality to human rights as defined by the United Nations.
- "Resisting Orientation" (PDF). McIntosh argues that the labeling process should be the focus of inquiry and that homosexuality should be seen as a social role rather than a condition. Role is more useful than condition, she argues, because roles (of heterosexual and homosexual) can be dichotomised in a way that behavior cannot. She draws upon cross-cultural data to demonstrate that in many societies 'there may be much homosexual behavior, but there are no "homosexuals"' (p71).
- Zachary Green & Michael J. Stiers, Multiculturalism and Group Therapy in the United States: A Social Constructionist Perspective (Springer Netherlands, 2002), pp. 233–246.
- Robert Brain, Friends and Lovers (Granada Publishing Ltd. 1976), chs. 3, 4.
- Church Times: How much is known about the origins of homosexuality?
- Anders Agmo, Functional and Dysfunctional Sexual Behavior (Elsevier, 2007).
- Brum, Gil, Larry McKane, & Gerry Karp, Biology – Exploring Life (John Wiley & Sons, Inc., 2d ed. 1994), p. 663. (About INAH-3.)
- De La Torre, Miguel A., Out of the Shadows, Into the Light: Christianity and Homosexuality (Chalice Press, 2009).
- Dynes, Wayne, ed., Encyclopedia of Homosexuality. (New York & London: Garland Publishing, 1990).
- Sell, Randall L., Defining and measuring sexual orientation: a review, in Archives of Sexual Behavior, 26 (6) (December 1997), 643–658. (excerpt).
- Wunsch, Serge, PhD thesis about sexual behavior (Paris: Sorbonne, 2007).
- Isidro A. T. Savillo's "New Statements: Breakthrough in Human Sexual Behavioral Phenotypes", June 2013.
- Sexual Orientation FAQ
- A law lecture (mp3) on sexual orientation and U.S. constitutional law
- American Psychological Association: Answers to Your Questions About Sexual Orientation and Homosexuality
- Aspirin changes sexual behaviour of rats
- Brain gender: prostaglandins have their say
- Loraine JA; Ismail AA; Adamopoulos DA; Dove GA (November 1970). "Endocrine Function in Male and Female Homosexuals". Br Med J. 4 (5732): 406–9. doi:10.1136/bmj.4.5732.406. PMID 5481520.
- Etiology on glbtq.com
- Magnus Hirschfeld Archive of Sexology at the Humboldt University in Berlin
- Is sexual orientation determined at birth?
- Sanders BK (2007). "Sex, drugs and sports: prostaglandins, epitestosterone and sexual development". Med. Hypotheses. 69 (4): 829–35. doi:10.1016/j.mehy.2006.12.058. PMID 17382481.
- Survivor bashing – bias motivated hate crimes
- The Science Of Sexual Orientation
- The SexEdLibrary
- BORN FREE AND EQUAL - Sexual orientation and gender identity in international human rights law
Electron diffraction refers to the wave nature of electrons. However, from a technical or practical point of view, it may be regarded as a technique used to study matter by firing electrons at a sample and observing the resulting interference pattern. This phenomenon is commonly known as wave–particle duality, which states that a particle of matter (in this case the incident electron) can be described as a wave. For this reason, an electron can be regarded as a wave much like sound or water waves. This technique is similar to X-ray and neutron diffraction.
Electron diffraction is most frequently used in solid state physics and chemistry to study the crystal structure of solids. Experiments are usually performed in a transmission electron microscope (TEM), or a scanning electron microscope (SEM) as electron backscatter diffraction. In these instruments, electrons are accelerated by an electrostatic potential in order to gain the desired energy and determine their wavelength before they interact with the sample to be studied.
The periodic structure of a crystalline solid acts as a diffraction grating, scattering the electrons in a predictable manner. Working back from the observed diffraction pattern, it may be possible to deduce the structure of the crystal producing the diffraction pattern. However, the technique is limited by the phase problem.
Apart from the study of crystals i.e. electron crystallography, electron diffraction is also a useful technique to study the short range order of amorphous solids, and the geometry of gaseous molecules.
The de Broglie hypothesis, formulated in 1924, predicts that particles should also behave as waves. De Broglie's formula was confirmed three years later for electrons (which have a rest-mass) with the observation of electron diffraction in two independent experiments. At the University of Aberdeen, George Paget Thomson passed a beam of electrons through a thin metal film and observed the predicted interference patterns. Around the same time at Bell Labs, Clinton Joseph Davisson and Lester Halbert Germer guided their beam through a crystalline grid. In 1937, Thomson and Davisson shared the Nobel Prize for Physics for their (independent) discovery.
Electron interaction with matter
Unlike other types of radiation used in diffraction studies of materials, such as X-rays and neutrons, electrons are charged particles and interact with matter through the Coulomb forces. This means that the incident electrons feel the influence of both the positively charged atomic nuclei and the surrounding electrons. In comparison, X-rays interact with the spatial distribution of the valence electrons, while neutrons are scattered by the atomic nuclei through the strong nuclear forces. In addition, the magnetic moment of neutrons is non-zero, and they are therefore also scattered by magnetic fields. Because of these different forms of interaction, the three types of radiation are suitable for different studies.
Intensity of diffracted beams
In the kinematical approximation for electron diffraction, the intensity of a diffracted beam is given by:

I_g = |\psi_g|^2 \propto |F_g|^2

Here \psi_g is the wavefunction of the diffracted beam and F_g is the so-called structure factor, which is given by:

F_g = \sum_i f_i \exp(2\pi i \, \mathbf{g} \cdot \mathbf{r}_i)

where \mathbf{g} is the scattering vector of the diffracted beam, \mathbf{r}_i is the position of atom i in the unit cell, and f_i is the scattering power of the atom, also called the atomic form factor. The sum is over all atoms in the unit cell.

The structure factor describes the way in which an incident beam of electrons is scattered by the atoms of a crystal unit cell, taking into account the different scattering power of the elements through the factor f_i. Since the atoms are spatially distributed in the unit cell, there will be a difference in phase when considering the scattered amplitude from two atoms. This phase shift is taken into account by the exponential term in the equation.
The atomic form factor, or scattering power, of an element depends on the type of radiation considered. Because electrons interact with matter through different processes than, for example, X-rays, the atomic form factors for the two cases are not the same.
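To make the phase term above concrete, the following short sketch (our own illustration, not from the article) evaluates the kinematical structure factor for a two-atom unit cell using made-up constant form factors; the function name and numerical values are assumptions for illustration only.

import numpy as np

def structure_factor(hkl, positions, form_factors):
    # Kinematical structure factor F_g = sum_i f_i * exp(2*pi*i * g . r_i)
    # hkl          : scattering vector in reciprocal-lattice (h, k, l) units
    # positions    : fractional atomic coordinates in the unit cell
    # form_factors : one (here constant) scattering power per atom
    g = np.asarray(hkl, dtype=float)
    r = np.asarray(positions, dtype=float)
    phases = np.exp(2j * np.pi * r.dot(g))
    return np.sum(np.asarray(form_factors) * phases)

# CsCl-like cell: atom A at (0,0,0), atom B at (1/2,1/2,1/2)
positions = [(0.0, 0.0, 0.0), (0.5, 0.5, 0.5)]
form_factors = [1.0, 0.7]  # illustrative constants, not tabulated values

for hkl in [(1, 0, 0), (1, 1, 0), (1, 1, 1)]:
    F = structure_factor(hkl, positions, form_factors)
    print(hkl, "|F|^2 =", round(abs(F) ** 2, 3))

For the (1,0,0) and (1,1,1) reflections the two atoms scatter in antiphase, so the intensity falls to |1.0 − 0.7|², while for (1,1,0) they scatter in phase.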
Wavelength of electrons
The wavelength of an electron is given by the de Broglie equation:

\lambda = \frac{h}{p}

Here h is Planck's constant and p the relativistic momentum of the electron; \lambda is called the de Broglie wavelength. The electrons are accelerated in an electric potential U to the desired velocity:

e U = \tfrac{1}{2} m_0 v^2

where m_0 is the mass of the electron and e is the elementary charge. The electron wavelength is then given by:

\lambda = \frac{h}{p} = \frac{h}{m_0 v} = \frac{h}{\sqrt{2 m_0 e U}}

However, in an electron microscope, the accelerating potential is usually several thousand volts, causing the electron to travel at an appreciable fraction of the speed of light. An SEM may typically operate at an accelerating potential of 10,000 volts (10 kV), giving an electron velocity of approximately 20% of the speed of light, while a typical TEM can operate at 200 kV, raising the electron velocity to 70% of the speed of light. We therefore need to take relativistic effects into account. The relativistic relation between energy and momentum is E^2 = p^2 c^2 + m_0^2 c^4, and it can be shown that

p = \frac{1}{c} \sqrt{\Delta E \, (\Delta E + 2 E_0)}

where \Delta E = E - E_0 = eU and E_0 = m_0 c^2 is the rest energy of the electron. The relativistic formula for the wavelength is then modified to become

\lambda = \frac{h}{\sqrt{2 m_0 e U}} \cdot \frac{1}{\sqrt{1 + \dfrac{eU}{2 m_0 c^2}}}

where c is the speed of light. We recognize the first factor in this final expression as the non-relativistic expression derived above, while the last factor is a relativistic correction. The wavelength of the electrons in a 10 kV SEM is then 12.2 × 10^−12 m (12.2 pm), while in a 200 kV TEM the wavelength is 2.5 pm. In comparison, the wavelength of X-rays usually used in X-ray diffraction is of the order of 100 pm (Cu Kα: λ = 154 pm).
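The expressions above are easy to evaluate numerically. The sketch below (our own, with rounded physical constants) reproduces the quoted wavelengths at 10 kV and 200 kV; the function name is an assumption.

import math

H = 6.626e-34          # Planck constant, J*s
M0 = 9.109e-31         # electron rest mass, kg
E_CHARGE = 1.602e-19   # elementary charge, C
C = 2.998e8            # speed of light, m/s

def electron_wavelength_pm(voltage):
    # Relativistic de Broglie wavelength (in picometres) of an electron
    # accelerated through `voltage` volts.
    non_relativistic = H / math.sqrt(2 * M0 * E_CHARGE * voltage)
    correction = 1.0 / math.sqrt(1 + E_CHARGE * voltage / (2 * M0 * C ** 2))
    return non_relativistic * correction * 1e12

print(electron_wavelength_pm(10_000))   # ~12.2 pm (typical SEM)
print(electron_wavelength_pm(200_000))  # ~2.5 pm (typical TEM)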
Electron diffraction in a TEM
Electron diffraction of solids is usually performed in a Transmission Electron Microscope (TEM) where the electrons pass through a thin film of the material to be studied. The resulting diffraction pattern is then observed on a fluorescent screen, recorded on photographic film, on imaging plates or using a CCD camera.
As mentioned above, the wavelength of an electron accelerated in a TEM is much smaller than that of the radiation usually used for X-ray diffraction experiments. A consequence of this is that the radius of the Ewald sphere is much larger in electron diffraction experiments than in X-ray diffraction. This allows the diffraction experiment to reveal more of the two-dimensional distribution of reciprocal lattice points.
Furthermore, electron lenses allow the geometry of the diffraction experiment to be varied. The conceptually simplest geometry, referred to as selected area electron diffraction (SAED), is that of a parallel beam of electrons incident on the specimen, with the specimen area selected using a sub-specimen image-plane aperture. However, by converging the electrons in a cone onto the specimen, one can in effect perform a diffraction experiment over several incident angles simultaneously. This technique is called Convergent Beam Electron Diffraction (CBED) and can reveal the full three-dimensional symmetry of the crystal.
In a TEM, a single crystal grain or particle may be selected for the diffraction experiments. This means that the diffraction experiments can be performed on single crystals of nanometer size, whereas other diffraction techniques would be limited to studying the diffraction from a multicrystalline or powder sample. Furthermore, electron diffraction in TEM can be combined with direct imaging of the sample, including high resolution imaging of the crystal lattice, and a range of other techniques. These include solving and refining crystal structures by electron crystallography, chemical analysis of the sample composition through energy-dispersive X-ray spectroscopy, investigations of electronic structure and bonding through electron energy loss spectroscopy, and studies of the mean inner potential through electron holography.
Figure 1 to the right is a simple sketch of the path of a parallel beam of electrons in a TEM from just above the sample and down the column to the fluorescent screen. As the electrons pass through the sample, they are scattered by the electrostatic potential set up by the constituent elements. After the electrons have left the sample they pass through the electromagnetic objective lens. This lens acts to collect all electrons scattered from one point of the sample in one point on the fluorescent screen, causing an image of the sample to be formed. We note that at the dashed line in the figure, electrons scattered in the same direction by the sample are collected into a single point. This is the back focal plane of the microscope, and is where the diffraction pattern is formed. By manipulating the magnetic lenses of the microscope, the diffraction pattern may be observed by projecting it onto the screen instead of the image. An example of what a diffraction pattern obtained in this way may look like is shown in figure 2.
If the sample is tilted with respect to the incident electron beam, one can obtain diffraction patterns from several crystal orientations. In this way, the reciprocal lattice of the crystal can be mapped in three dimensions. By studying the systematic absence of diffraction spots the Bravais lattice and any screw axes and glide planes present in the crystal structure may be determined.
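As a small illustration of systematic absences (our own sketch, using unit form factors rather than real scattering powers), the structure factor of a body-centred cell vanishes whenever h + k + l is odd, so those diffraction spots are absent:

import numpy as np
from itertools import product

def structure_factor(hkl, positions):
    # Structure factor with unit form factors for all atoms.
    r = np.asarray(positions, dtype=float)
    g = np.asarray(hkl, dtype=float)
    return np.sum(np.exp(2j * np.pi * r.dot(g)))

# Body-centred cell: identical atoms at (0,0,0) and (1/2,1/2,1/2)
bcc = [(0.0, 0.0, 0.0), (0.5, 0.5, 0.5)]

for hkl in product(range(3), repeat=3):
    if hkl == (0, 0, 0):
        continue
    allowed = abs(structure_factor(hkl, bcc)) > 1e-6
    print(hkl, "allowed" if allowed else "absent")  # absent when h+k+l is odd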
Electron diffraction in TEM is subject to several important limitations. First, the sample to be studied must be electron transparent, meaning the sample thickness must be of the order of 100 nm or less. Careful and time-consuming sample preparation may therefore be needed. Furthermore, many samples are vulnerable to radiation damage caused by the incident electrons.
The study of magnetic materials is complicated by the fact that electrons are deflected in magnetic fields by the Lorentz force. Although this phenomenon may be exploited to study the magnetic domains of materials by Lorentz force microscopy, it may make crystal structure determination virtually impossible.
Furthermore, electron diffraction is often regarded as a qualitative technique suitable for symmetry determination, but too inaccurate for the determination of lattice parameters and atomic positions. There are, however, several examples where unknown crystal structures (inorganic, organic and biological) have been solved by electron crystallography. Lattice parameters of high accuracy can in fact be obtained from electron diffraction; relative errors of less than 0.1% have been demonstrated. However, the right experimental conditions may be difficult to obtain, and these procedures are often viewed as too time-consuming and the data as too difficult to interpret. X-ray or neutron diffraction are therefore often the preferred methods for determining lattice parameters and atomic positions.
However, the main limitation of electron diffraction in TEM remains the comparatively high level of user interaction needed. Whereas both the execution of powder X-ray (and neutron) diffraction experiments and the data analysis are highly automated and routinely performed, electron diffraction requires a much higher level of user input.
In computer science, a linked list is a linear data structure in which the order of the elements is determined by a pointer stored in each element.
In this tutorial, we’ll show how to check if a linked list is a circular linked list.
2. Circular Linked List
Each element of a linked list contains a data field to store the list data and a pointer field to point to the next element in the sequence. We can use a pointer to point to the start element of a linked list:
In a regular linked list, the last element has no next element; therefore, its pointer field is null. However, in a circular linked list, the last element has a reference to one of the elements in the list:
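For concreteness, here is a minimal Python sketch of such a node (the class name ListNode and its fields are our own illustration, not taken from any particular library), together with a regular list and a circular list built from it:

```python
class ListNode:
    """A minimal singly linked list node: one data field and one next pointer."""
    def __init__(self, data):
        self.data = data
        self.next = None  # None plays the role of the null pointer

# A regular (acyclic) list: 1 -> 2 -> 3 -> null
a, b, c = ListNode(1), ListNode(2), ListNode(3)
a.next, b.next = b, c

# A circular list: 1 -> 2 -> 3 -> back to node 2
x, y, z = ListNode(1), ListNode(2), ListNode(3)
x.next, y.next, z.next = y, z, y  # the last node points back into the list
```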
3. Hash Table Solution
It is easy to see that if a linked list contains a cycle, we’ll eventually visit the same node twice when we traverse the linked list. Therefore, we can use a hash table to solve this problem:
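A straightforward Python sketch of this approach, reusing the illustrative ListNode class above and using a set as the hash table, could look like this:

```python
def is_circular_hash(head):
    """Return True if the list starting at head contains a cycle, False otherwise."""
    seen = set()              # hash table of node references visited so far
    node = head
    while node is not None:
        if node in seen:      # the same reference appears twice -> cycle found
            return True
        seen.add(node)
        node = node.next
    return False              # we reached the null pointer -> no cycle
```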
This algorithm traverses the linked list and records each node's reference in a hash table. We use a linked list node's reference as its unique identifier. If we see the same reference again in the hash table, we can return true to indicate a circular linked list. Otherwise, we'll eventually reach the null pointer at the end of the list and return false.
Let’s assume each hash table operation, such as insertion and searching, takes time. Then, this algorithm’s overall time complexity is as we traverse the whole linked list only once. The space complexity is also , as we need to store all the nodes into the hash table.
4. Solution With Two Pointers
To detect whether a linked list is a circular linked list, we can use two pointers with different speeds: a slow pointer and a fast pointer. We use these two pointers to traverse the linked list. The slow pointer moves one step at a time, and the fast pointer moves two steps.
If there is no cycle in the list, the fast pointer will eventually reach the last element, whose next pointer is null, and stop there.
For a circular linked list, let’s imagine the and pointers are two runners racing around a circular track. At the beginning, the runner passes the runner. However, it’ll eventually meet the runner again from behind. We can use the same idea to detect if there is a cycle in the linked list:
In this algorithm, we first have a sanity check on the input pointer. Then, we start two pointers, slow and fast, at different locations. In the loop, we advance the two pointers at different speeds. If the fast pointer reaches the null pointer at the end of the list, we can conclude that the linked list does not have a cycle. Otherwise, the two pointers will eventually meet. Then, we finish the loop and return true to indicate a circular linked list.
5. Complexity Analysis of the Two-pointer Solution
If the linked list doesn’t have a cycle, the loop will finish when the pointer reaches the end. Therefore, the time complexity is in this case.
For a circular linked list, we need to calculate the number of steps it takes for the fast pointer to catch the slow pointer. Let's first break down the movement of the slow pointer into two stages.
In the first stage, the slow pointer takes s steps to enter the cycle, where s is the number of elements before the cycle. At this point, the fast pointer is already in the cycle and is d elements apart from the slow pointer in the linked list direction. In the following example, the distance between the two pointers is 3 elements, since we need to advance the fast pointer 3 elements to catch the slow pointer in the cycle:
In the second stage, both pointers are in the cycle. The fast pointer moves 2 steps in each iteration, and the slow pointer moves 1 step. Therefore, the fast pointer can catch up 1 element in each iteration. Since the distance between these two pointers at the beginning of this stage is d, we need d iterations to make these two pointers meet.
Therefore, the total running time is O(s + d), where s is the number of elements between the head and the start element of the cycle, and d is the distance between the two pointers when the slow pointer reaches the cycle. Since d is at most the cycle length, the overall time complexity is also O(n).
The space complexity is O(1), as we only use two pointers (slow and fast) in the algorithm.
In this tutorial, we showed a sample circular linked list. Also, we discussed two linear-time algorithms that can check if a linked list is a circular linked list.
Arrays are a fantastic way to introduce multiplication to your students. They serve as a visual representation of the meaning behind multiplication. In this resource, students will work with arrays in different ways to develop their understanding of multiplication.
This set of task cards includes 24 cards. Students will practice working with arrays in a variety of different ways including:
- draw an array to solve a problem
- draw an array to represent a multiplication equation
- draw an array to represent a repeated addition equation
- relate repeated addition to multiplication
- write the equation represented by the array
- write a story problem and use an array to solve it
- think about arrays in real life
Recording sheets and answer keys are included. I have also included an array reference poster for classroom display.
What Teachers Are Saying
⭐️⭐️⭐️⭐️⭐️ "A great resource to introduce my students to the concept of arrays - thank you!" Kristin I.
⭐️⭐️⭐️⭐️⭐️ "This was exactly what I need. Great activity for students to review and practice in a center." Karen A.
⭐️⭐️⭐️⭐️⭐️ "Great addition to my multiplication unit. Thanks!" Courtney S. |
High School: Algebra
Reasoning with Equations and Inequalities HSA-REI.A.2
2. Solve simple rational and radical equations in one variable, and give examples showing how extraneous solutions may arise.
Rational equations might sometimes be a bit more radical than students would like. Those square root signs can instill terror in the heart of any student unprepared for them. If students overcame their fear of monsters under the bed (and hopefully they did), they'll get over their fear of radicals too.
Rational equations mean that fractions are involved. Radical equations mean that square roots are involved. Students should know how to deal with both separately and together.
A radical equation is one in which the variable is under the radical sign. When solving radical equations, it's usually best to leave the radicals for last unless there's a quick and easy way to get rid of all of them. If only there were a quick and easy way to get rid of monsters.
For example, √x – 4 = 12 is a radical equation. To solve, students should add 4 to both sides and get √x = 16. All that's left is squaring both sides to get x = 256. Not too scary, right?
Of course, solving radical equations means students have to understand how to combine radicals. For example, radicals with the same radicand can be combined: 2√3 + 5√3 = 7√3, since both terms have the same thing under the radical. We cannot combine √2 and √3 this same way. We could, however, rewrite √8 as 2√2, in which case √2 + √8 = 3√2. Not as pretty, but nonetheless doable.
Students should know how to combine, manipulate, and rewrite radical expressions. This usually takes practice and repetition. When all else fails, tell students to treat radicals like they'd treat variables. This is the one time the Golden Rule doesn't apply.
Students should already know how to solve rational equations. They should be able to find x faster than a pirate with a treasure map. And if he has an eye patch then it shouldn't even be a contest.
A rational equation might look like 3/(x – 4) = (x + 2)/(x – 4).
In this case, let's say it does. First, cross-multiply to get 3(x – 4) = (x – 4)(x + 2). Since we have (x – 4) on both sides, we can reduce the equation to 3 = x + 2. Our final answer is x = 1.
Students should know that sometimes, algebraic manipulation produces extraneous solutions. For example, multiplying the fairly simple equation x + 5 = 0 by x will give x² + 5x = 0. Now, both x = 0 and x = -5 will satisfy that quadratic. However, looking at x + 5 = 0, we can see that x = 0 won't work for the original equation (because 0 + 5 ≠ 0). That means x = 0 is an extraneous solution.
To check for extraneous solutions, students should plug in their final answers back into the original equation. If the equation produces an incorrect statement (like 5 = 0), then they'll know that solution didn't really exist. Much like the monsters under the bed…we hope.
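For readers who like a quick computational check, here's a small Python sketch (our own illustration, not part of the standard) that plugs candidates back into the original equation and keeps only the ones that really work:

```python
def keep_real_solutions(f, candidates, tol=1e-9):
    """Keep only the candidates x for which the original equation f(x) = 0 holds."""
    real = []
    for x in candidates:
        try:
            if abs(f(x)) < tol:
                real.append(x)
        except ZeroDivisionError:   # e.g. a rational equation's excluded value
            pass
    return real

# x + 5 = 0, after multiplying both sides by x, yields candidates 0 and -5.
print(keep_real_solutions(lambda x: x + 5, [0, -5]))   # [-5]; x = 0 is extraneous
```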
- Solving Radical Equations
- Solving Rational Equations with One Variable
- Solving Two-Step Equations Using Addition and Subtraction
- ACT Math 3.3 Elementary Algebra
- ACT Math 3.4 Elementary Algebra
- ACT Math 4.5 Intermediate Algebra
- ACT Math 5.4 Intermediate Algebra
- ACT Math 7.1 Pre-Algebra
- ACT Math 2.2 Pre-Algebra
- Paso dos o Segundo paso Ecuaciones Usando la Adición y la Substracción
- CAHSEE Math 2.2 Mathematical Reasoning
- CAHSEE Math 2.2 Number Sense
- CAHSEE Math 3.3 Algebra and Functions
- CAHSEE Math 3.3 Algebra I
- CAHSEE Math 3.3 Measurement and Geometry
- CAHSEE Math 3.3 Statistics, Data, and Probability I
- CAHSEE Math 3.3 Statistics, Data, and Probability II
- CAHSEE Math 3.4 Algebra and Functions
- CAHSEE Math 3.4 Algebra I
- CAHSEE Math 3.4 Measurement and Geometry
- CAHSEE Math 3.4 Statistics, Data, and Probability I
- Solving Rational Equations - Math Shack
- Evaluating Expressions with Negative Exponents - Math Shack
- Solving Basic Rational Equations - Math Shack
- Solving Radical Equations - Math Shack
- Solving Radical Equations Involving Fractions - Math Shack
- Solving Complicated Equations - Math Shack
- Solving Rational Inequalities - Math Shack
As you probably already know, there are various ways in which you can represent a straight line in math. You can express it in terms of the intercept, the slope, or the independent variable. However, the basic equation of a straight line is y = mx + c. Here y is the dependent variable and x is the independent variable, so the value of x directly determines the value of the y-coordinate. Note that x is not the x-intercept of the straight line, nor is y its y-intercept. Instead, c is the y-intercept of the straight line and m is the value of the slope. Both m and c can be positive or negative. There are various ways to write the equation so that it suits the given data at hand, and one of them is the point-slope form, which is what a point-slope form calculator works with.
The equation of the point-slope form is y – y1 = m(x – x1). Here, (x1, y1) is the given point through which the straight line passes, (x, y) is any other point on the line, and m is the slope of the line. Therefore, you can also use this form to find out the slope when you have two points on the straight line. The formula that a point-slope form calculator uses comes from the definition of slope.
So, the slope is the ratio of the difference of the y-coordinates to the difference of the corresponding x-coordinates: slope (m) = (y – y1) / (x – x1). Therefore, if we rearrange the terms and multiply m by (x – x1), we get (y – y1). This is how a point-slope form calculator functions.
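So, if you want to see the same arithmetic spelled out, here is a minimal Python sketch (the function names are just illustrative; this is not any calculator's actual code):

```python
def slope(x1, y1, x2, y2):
    """Slope of the straight line through (x1, y1) and (x2, y2)."""
    return (y2 - y1) / (x2 - x1)

def point_slope_form(x1, y1, m):
    """Return the point-slope equation y - y1 = m(x - x1) as a string."""
    return f"y - {y1} = {m}(x - {x1})"

m = slope(3, 4, 5, 6)              # two points on the line
print(m)                           # 1.0
print(point_slope_form(3, 4, m))   # y - 4 = 1.0(x - 3)
```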
Point-slope form calculator Emath
Calculating the slope of a straight line can often be a little tricky. This is because there are so many options and you never know what kind of data you will have. So, you may have two points or the equation of the straight line to find out the slope. On the other hand, you may have the slope and need to find a point. So, it is only natural that we will be overwhelmed. This is why we should use a point-slope form calculator.
It is lucky that in times such as these, everything is available on the web that makes calculations way easier. So, various online point-slope form calculators can help you out. You just have to enter the data and wait for the results to show up. Moreover, you will also get a full length of steps that are involved in the calculation. So, this is because finding these values is not a one-liner step. Hence, a point-slope form calculator makes sure that you do not just copy a value but understand the process behind the solution. However, different point-slope form calculators are available online. In this case, we will be using the point-slope form calculator of Emath. Click on “Emath” and it will guide you to a link that will land you on the main page of the calculator.
Point-slope form calculator with steps
So, as you can well imagine, there are largely two ways by which you can feed data into a point-slope form calculator. Therefore, you can either enter two points or the equation of a straight line. So, in either case, the point-slope form calculator will give you the value of the slope of the line. Therefore, let us see the steps for both cases.
Step 1 to use a point-slope form calculator
So, when you open the page, you find the calculator right in the center. There are two panels in the point-slope form calculator. In the first panel, there are two boxes that look like search bars; here you have to feed in the data that you have. The panel underneath is where you see the solutions.
So, in the first panel, the first box reads "choose type". There are two in-built types. So, you have to click on either line or two points according to your needs. Let us consider two cases: entering a line is case I and entering two points is case II.
Step 2 to use a point-slope form calculator
So, now that you have chosen the kind of data you have, there can be two scenarios. In either case, you have to fill in the boxes completely to get the correct answer.
In case I, you have the data that is the equation of a line. So, right under the choose the type box, you will find that there is another box. Therefore, it reads “enter a line”. So, here you have to enter the equation of the line you have. Let us consider an example. So, you enter y = x – 5.
Now, in case II, you have chosen to enter the two points. So, now you have a set of four boxes where you have to enter the values of the two points. Each point gets a set of boxes. So, you write the x-coordinate in the first box and the y-coordinate in the second. Let us say the two points we consider are (3, 4) and (5, 6). So, in the first set of boxes, you enter 3 and 4 individually and then 5 and 6- also individually. So, now you have fed all the information you had to the point-slope form calculator.
However, there is an interesting feature here. When you choose to enter the two points, there is a tab on the panel labeled "generate values". So, if you click on this, the point-slope form calculator automatically generates two points, and you can get the solution for them too. It is a great option even if you are not solving a particular problem, since you can simply use it to understand the concept.
Step 3 to use a point-slope form calculator
Now, the third step is the same for both cases. So, you have already entered all the information that you had into the calculator. Therefore, at the bottom of this panel, you have a blue tab that reads “calculate”. So, click it to get the final answer as well as the full solution process for either case.
Step 4 to use a point-slope form calculator
So now, your job is done. Scroll down to the solution panel below. It shows the data that you entered, the entire process, how the calculator has solved it and the final answer as well, separately.
So, in case I, the calculator simply matches the equation with the standard form of a straight line. When y = x – 5 is compared with y = mx + c, m, or the slope, is 1. So, the answer is 1.
Now, in case II, the calculator first combines the 4 values into two points. So, it considers P to be (3, 4) and Q to be (5, 6). The point-slope form calculator now applies the formula to find the slope: m = (y2 – y1) / (x2 – x1), where (x1, y1) and (x2, y2) are (3, 4) and (5, 6) respectively. Hence, the slope is (6 – 4) / (5 – 3) = 2/2 = 1. So, the final answer is 1.
Step 5 to use a point-slope form calculator
The point-slope form calculator is also open to comments and suggestions. It may so happen that you wanted to verify an answer but you are quite sure that the calculator is giving a wrong answer. So, in that case, you have the option to give feedback regarding your issues and the calculator will get back to you.
Point-slope form calculator with two points
So, the most common way we use a point-slope calculator is when we have two points through which a straight line passes and we need to find out the slope of the line. The point-slope form calculator from Emath allows this. We have already seen how this works while going through the steps to operate the calculator. However, we will take another example to show how the calculation is carried out so that you can obtain the slope's value.
Therefore, you can either have two points, or you might want the calculator to generate them for you if you are only interested in knowing the process.
So, you have to enter all the coordinates separately. Then, the calculator groups them into points accordingly. Now, it uses the usual two-point formula to find the slope of the straight line. However, when you are entering the two points, insert the values carefully, because the calculator uses them exactly as you enter them. If you mix up which coordinate belongs to which point, you can get an unwanted change in the sign of the slope, since the direction in which you measure the slope matters.
Point-slope form calculator with fraction
Now, when you are entering the values of the points into the point-slope form calculator, there are no restrictions as long as the point exists on the graph. This means your points can be fractions or decimals as well; they do not always have to be integers, whether positive or negative. So, let us take an example. Let us say you entered the points (3/5, 4/3) and (5/2, 6/7).
So, the solution that you will get will consider the two points as P and Q respectively.
Therefore, the points are x1 = 3/5 and y1 = 4/3.
On the other hand, x2 = 5/2 and y2 = 6/7.
Now the slope of a line that passes through 2 points P ( x1, y1) and Q (x2, y2) is m = (y2 – y1)/ (x2- x1)
So, if we put the values properly, what we find is ( 6/7 – 4/3) / (5/2 – 3/5). Therefore, this gives (-10/21) / (19/10). So, this yields -100/ 399. Therefore, this is approximately m = −0.25062656641604. Moreover, the slope, in this case, is negative.
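So, if you want to verify such a fractional result exactly on your own, Python's built-in fractions module does the job (again, only an illustrative sketch, not how the Emath calculator is implemented):

```python
from fractions import Fraction

x1, y1 = Fraction(3, 5), Fraction(4, 3)
x2, y2 = Fraction(5, 2), Fraction(6, 7)

m = (y2 - y1) / (x2 - x1)   # exact arithmetic, no rounding
print(m)                    # -100/399
print(float(m))             # approximately -0.2506265664160401
```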
Point-slope form calculator with x-intercept
So, as per definition, the x-intercept is where a straight line crosses the x-axis. Therefore, technically, at this point y must be equal to 0 because the point lies on the x-axis. Hence, you should not be worried if you have to enter the x-intercept to the point-slope calculator for finding out the slope. It is quite easy. So, when you are entering this particular point, you put the value to the x- coordinate. On the other hand, the y-coordinate must be 0. Therefore, let us take an example.
So, let us say one point is (3, 4). The other point that you are entering into the point-slope form calculator is the x-intercept at 5. Therefore, the point is (5, 0). So, from now on, the calculator follows the normal process to find the slope of the line through the x-intercept and the point (3, 4). Therefore, it considers (3, 4) to be P (x1, y1) and (5, 0) to be Q (x2, y2). So, the slope is m = (y2 – y1) / (x2 – x1). This is equal to (0 – 4) / (5 – 3) = -4/2. Therefore, the final answer is -2.
Point-slope form calculator perpendicular
So, there is also a perpendicular and parallel calculator. Therefore, here, you first have the option to choose whether the line you are finding is perpendicular or parallel to the reference line. After this, you have to enter the equation of the reference line. So, finally, you have to enter the coordinates of the point on the reference line through which the parallel or the perpendicular line passes. This is how it works. However, if you need to find the line equation you can use a line calculator. Similarly, if you need to find the slope of the line, you can go for the point-slope form calculator. It is just one click away.
Moreover, the point-slope form calculator might be important in this case. This is because two parallel lines will always have the same slope, while the slopes of two perpendicular lines are negative reciprocals of each other.
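So, these relationships are simple enough to check on your own; the short Python sketch below (purely illustrative) captures both:

```python
def parallel_slope(m):
    """A line parallel to one with slope m has the same slope."""
    return m

def perpendicular_slope(m):
    """A line perpendicular to one with slope m has the negative reciprocal slope."""
    return -1 / m

print(parallel_slope(2))        # 2
print(perpendicular_slope(2))   # -0.5
```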
Point-slope form calculator FAQs.
What are the rules for point-slope form?
Ans. So, there are no such rules in the point-slope form calculator. However, there are a few requirements. It works following the principle m(x – x1) = y – y1. Therefore, you need to know two pieces of information: you can either have two points, from which the calculator finds the slope, or you might have a point and the slope, from which you can find the coordinates of another point.
Where does point-slope form come from?
Ans. So, this form comes from the basic concept of slope. The slope is the ratio of the difference between two y-coordinates to the difference between the corresponding two x-coordinates. If you rearrange that ratio, m = (y – y1) / (x – x1), you get the point-slope form of the straight line, m(x – x1) = y – y1, which is consistent with the equation of a straight line, y = mx + c.
How do you graph a point-slope equation?
Ans. So, from the point-slope equation, you can easily find the slope of the straight line. Now, plot the given point on the graph and follow the slope that you obtained. Then, use the same equation to find 3 or 4 other points: take arbitrary values of x and find the corresponding values of y. Plot all of them, and you will get your required straight line on the graph.
What is the difference between the point-slope form and the slope-intercept form?
Ans. So, the point-slope form uses a point through which the line passes and the slope of the line. On the other hand, the slope-intercept form uses the y-intercept of the straight line and its slope.
What does point-slope form look like?
Ans. So, the point-slope form has a very simple equation that the point-slope form calculator uses. Therefore, it is m (x – x1 ) = y – y1. Here, m is the slope of the line and the rest are the two coordinates (x1, y1) and (x, y).
What are the three slope forms?
Ans. So, the three basic slope forms of every linear equation are point-slope form, standard form, and slope-intercept form.
How do you change point-slope form into standard form?
Ans. To convert the point-slope form into the standard form of a straight line, the extra information that you need is the y-intercept. You can easily find it by putting x = 0 into the equation, since the line crosses the y-axis where x = 0. Now, you already have the slope, so just put the values into the equation y = mx + c. This is the standard form of the straight line.
Is the point-slope form the same as the standard form?
Ans. No, the point-slope form and the standard form of a straight line are not the same. However, they are quite similar, and the point-slope form is derived from the standard form of the straight line.
How do you simplify point-slope form?
Ans. So, if you know the coordinates of two points, you can simplify the point-slope form by rearranging the equation and finding the slope from it. If not, then you just match your straight line equation with the standard form of the straight line.
When can the slope of a line be 0?
Ans. The slope of a line is 0 when it is horizontal, that is, when it is parallel to the x-axis or coincides with it. On such a line, every point has the same y-coordinate, so the difference of the y-coordinates between any two points is 0, which makes the slope 0.
In physiology, an action potential (AP) occurs when the membrane potential of a specific cell location rapidly rises and falls: this depolarization then causes adjacent locations to similarly depolarize. Action potentials occur in several types of animal cells, called excitable cells, which include neurons, muscle cells and endocrine cells, as well as in some plant cells.
In neurons, action potentials play a central role in cell-to-cell communication by providing for—or with regard to saltatory conduction, assisting—the propagation of signals along the neuron's axon toward synaptic boutons situated at the ends of an axon; these signals can then connect with other neurons at synapses, or to motor cells or glands. In other types of cells, their main function is to activate intracellular processes. In muscle cells, for example, an action potential is the first step in the chain of events leading to contraction. In beta cells of the pancreas, they provoke release of insulin.[a] Action potentials in neurons are also known as "nerve impulses" or "spikes", and the temporal sequence of action potentials generated by a neuron is called its "spike train". A neuron that emits an action potential, or nerve impulse, is often said to "fire".
Action potentials are generated by special types of voltage-gated ion channels embedded in a cell's plasma membrane.[b] These channels are shut when the membrane potential is near the (negative) resting potential of the cell, but they rapidly begin to open if the membrane potential increases to a precisely defined threshold voltage, depolarising the transmembrane potential.[b] When the channels open, they allow an inward flow of sodium ions, which changes the electrochemical gradient, which in turn produces a further rise in the membrane potential towards zero. This then causes more channels to open, producing a greater electric current across the cell membrane and so on. The process proceeds explosively until all of the available ion channels are open, resulting in a large upswing in the membrane potential. The rapid influx of sodium ions causes the polarity of the plasma membrane to reverse, and the ion channels then rapidly inactivate. As the sodium channels close, sodium ions can no longer enter the neuron, and they are then actively transported back out of the plasma membrane. Potassium channels are then activated, and there is an outward current of potassium ions, returning the electrochemical gradient to the resting state. After an action potential has occurred, there is a transient negative shift, called the afterhyperpolarization.
In animal cells, there are two primary types of action potentials. One type is generated by voltage-gated sodium channels, the other by voltage-gated calcium channels. Sodium-based action potentials usually last for under one millisecond, but calcium-based action potentials may last for 100 milliseconds or longer. In some types of neurons, slow calcium spikes provide the driving force for a long burst of rapidly emitted sodium spikes. In cardiac muscle cells, on the other hand, an initial fast sodium spike provides a "primer" to provoke the rapid onset of a calcium spike, which then produces muscle contraction.
Nearly all cell membranes in animals, plants and fungi maintain a voltage difference between the exterior and interior of the cell, called the membrane potential. A typical voltage across an animal cell membrane is −70 mV. This means that the interior of the cell has a negative voltage relative to the exterior. In most types of cells, the membrane potential usually stays fairly constant. Some types of cells, however, are electrically active in the sense that their voltages fluctuate over time. In some types of electrically active cells, including neurons and muscle cells, the voltage fluctuations frequently take the form of a rapid upward spike followed by a rapid fall. These up-and-down cycles are known as action potentials. In some types of neurons, the entire up-and-down cycle takes place in a few thousandths of a second. In muscle cells, a typical action potential lasts about a fifth of a second. In some other types of cells and plants, an action potential may last three seconds or more.
The electrical properties of a cell are determined by the structure of the membrane that surrounds it. A cell membrane consists of a lipid bilayer of molecules in which larger protein molecules are embedded. The lipid bilayer is highly resistant to movement of electrically charged ions, so it functions as an insulator. The large membrane-embedded proteins, in contrast, provide channels through which ions can pass across the membrane. Action potentials are driven by channel proteins whose configuration switches between closed and open states as a function of the voltage difference between the interior and exterior of the cell. These voltage-sensitive proteins are known as voltage-gated ion channels.
All cells in animal body tissues are electrically polarized – in other words, they maintain a voltage difference across the cell's plasma membrane, known as the membrane potential. This electrical polarization results from a complex interplay between protein structures embedded in the membrane called ion pumps and ion channels. In neurons, the types of ion channels in the membrane usually vary across different parts of the cell, giving the dendrites, axon, and cell body different electrical properties. As a result, some parts of the membrane of a neuron may be excitable (capable of generating action potentials), whereas others are not. Recent studies have shown that the most excitable part of a neuron is the part after the axon hillock (the point where the axon leaves the cell body), which is called the initial segment, but the axon and cell body are also excitable in most cases.
Each excitable patch of membrane has two important levels of membrane potential: the resting potential, which is the value the membrane potential maintains as long as nothing perturbs the cell, and a higher value called the threshold potential. At the axon hillock of a typical neuron, the resting potential is around –70 millivolts (mV) and the threshold potential is around –55 mV. Synaptic inputs to a neuron cause the membrane to depolarize or hyperpolarize; that is, they cause the membrane potential to rise or fall. Action potentials are triggered when enough depolarization accumulates to bring the membrane potential up to threshold. When an action potential is triggered, the membrane potential abruptly shoots upward and then equally abruptly shoots back downward, often ending below the resting level, where it remains for some period of time. The shape of the action potential is stereotyped; this means that the rise and fall usually have approximately the same amplitude and time course for all action potentials in a given cell. (Exceptions are discussed later in the article). In most neurons, the entire process takes place in about a thousandth of a second. Many types of neurons emit action potentials constantly at rates of up to 10–100 per second. However, some types are much quieter, and may go for minutes or longer without emitting any action potentials.
Action potentials result from the presence in a cell's membrane of special types of voltage-gated ion channels. A voltage-gated ion channel is a transmembrane protein that has three key properties: it can assume more than one conformation; at least one of those conformations creates a channel through the membrane that is permeable to specific types of ions; and the transition between conformations is influenced by the membrane potential.
Thus, a voltage-gated ion channel tends to be open for some values of the membrane potential, and closed for others. In most cases, however, the relationship between membrane potential and channel state is probabilistic and involves a time delay. Ion channels switch between conformations at unpredictable times: The membrane potential determines the rate of transitions and the probability per unit time of each type of transition.
Voltage-gated ion channels are capable of producing action potentials because they can give rise to positive feedback loops: The membrane potential controls the state of the ion channels, but the state of the ion channels controls the membrane potential. Thus, in some situations, a rise in the membrane potential can cause ion channels to open, thereby causing a further rise in the membrane potential. An action potential occurs when this positive feedback cycle (Hodgkin cycle) proceeds explosively. The time and amplitude trajectory of the action potential are determined by the biophysical properties of the voltage-gated ion channels that produce it. Several types of channels capable of producing the positive feedback necessary to generate an action potential do exist. Voltage-gated sodium channels are responsible for the fast action potentials involved in nerve conduction. Slower action potentials in muscle cells and some types of neurons are generated by voltage-gated calcium channels. Each of these types comes in multiple variants, with different voltage sensitivity and different temporal dynamics.
The most intensively studied type of voltage-dependent ion channels comprises the sodium channels involved in fast nerve conduction. These are sometimes known as Hodgkin-Huxley sodium channels because they were first characterized by Alan Hodgkin and Andrew Huxley in their Nobel Prize-winning studies of the biophysics of the action potential, but can more conveniently be referred to as NaV channels. (The "V" stands for "voltage".) An NaV channel has three possible states, known as deactivated, activated, and inactivated. The channel is permeable only to sodium ions when it is in the activated state. When the membrane potential is low, the channel spends most of its time in the deactivated (closed) state. If the membrane potential is raised above a certain level, the channel shows increased probability of transitioning to the activated (open) state. The higher the membrane potential the greater the probability of activation. Once a channel has activated, it will eventually transition to the inactivated (closed) state. It tends then to stay inactivated for some time, but, if the membrane potential becomes low again, the channel will eventually transition back to the deactivated state. During an action potential, most channels of this type go through a cycle deactivated→activated→inactivated→deactivated. This is only the population average behavior, however – an individual channel can in principle make any transition at any time. However, the likelihood of a channel's transitioning from the inactivated state directly to the activated state is very low: A channel in the inactivated state is refractory until it has transitioned back to the deactivated state.
The outcome of all this is that the kinetics of the NaV channels are governed by a transition matrix whose rates are voltage-dependent in a complicated way. Since these channels themselves play a major role in determining the voltage, the global dynamics of the system can be quite difficult to work out. Hodgkin and Huxley approached the problem by developing a set of differential equations for the parameters that govern the ion channel states, known as the Hodgkin-Huxley equations. These equations have been extensively modified by later research, but form the starting point for most theoretical studies of action potential biophysics.
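In one common textbook formulation (a sketch only; the empirically fitted rate functions α and β differ between presentations), the membrane equation and a representative gating equation read

$$ C_m \frac{dV_m}{dt} = I_{\text{ext}} - \bar{g}_{\text{Na}}\, m^3 h\,(V_m - E_{\text{Na}}) - \bar{g}_{\text{K}}\, n^4 (V_m - E_{\text{K}}) - \bar{g}_{\text{L}}\,(V_m - E_{\text{L}}), \qquad \frac{dm}{dt} = \alpha_m(V_m)\,(1 - m) - \beta_m(V_m)\, m, $$

with analogous first-order equations for the gating variables h and n; the ḡ terms are maximal conductances and the E terms the corresponding equilibrium potentials.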
As the membrane potential is increased, sodium ion channels open, allowing the entry of sodium ions into the cell. This is followed by the opening of potassium ion channels that permit the exit of potassium ions from the cell. The inward flow of sodium ions increases the concentration of positively charged cations in the cell and causes depolarization, where the potential of the cell is higher than the cell's resting potential. The sodium channels close at the peak of the action potential, while potassium continues to leave the cell. The efflux of potassium ions decreases the membrane potential or hyperpolarizes the cell. For small voltage increases from rest, the potassium current exceeds the sodium current and the voltage returns to its normal resting value, typically −70 mV. However, if the voltage increases past a critical threshold, typically 15 mV higher than the resting value, the sodium current dominates. This results in a runaway condition whereby the positive feedback from the sodium current activates even more sodium channels. Thus, the cell fires, producing an action potential.[note 1] The frequency at which a neuron elicits action potentials is often referred to as a firing rate or neural firing rate.
Currents produced by the opening of voltage-gated channels in the course of an action potential are typically significantly larger than the initial stimulating current. Thus, the amplitude, duration, and shape of the action potential are determined largely by the properties of the excitable membrane and not the amplitude or duration of the stimulus. This all-or-nothing property of the action potential sets it apart from graded potentials such as receptor potentials, electrotonic potentials, subthreshold membrane potential oscillations, and synaptic potentials, which scale with the magnitude of the stimulus. A variety of action potential types exist in many cell types and cell compartments as determined by the types of voltage-gated channels, leak channels, channel distributions, ionic concentrations, membrane capacitance, temperature, and other factors.
The principal ions involved in an action potential are sodium and potassium cations; sodium ions enter the cell, and potassium ions leave, restoring equilibrium. Relatively few ions need to cross the membrane for the membrane voltage to change drastically. The ions exchanged during an action potential, therefore, make a negligible change in the interior and exterior ionic concentrations. The few ions that do cross are pumped out again by the continuous action of the sodium–potassium pump, which, with other ion transporters, maintains the normal ratio of ion concentrations across the membrane. Calcium cations and chloride anions are involved in a few types of action potentials, such as the cardiac action potential and the action potential in the single-cell alga Acetabularia, respectively.
Although action potentials are generated locally on patches of excitable membrane, the resulting currents can trigger action potentials on neighboring stretches of membrane, precipitating a domino-like propagation. In contrast to passive spread of electric potentials (electrotonic potential), action potentials are generated anew along excitable stretches of membrane and propagate without decay. Myelinated sections of axons are not excitable and do not produce action potentials and the signal is propagated passively as electrotonic potential. Regularly spaced unmyelinated patches, called the nodes of Ranvier, generate action potentials to boost the signal. Known as saltatory conduction, this type of signal propagation provides a favorable tradeoff of signal velocity and axon diameter. Depolarization of axon terminals, in general, triggers the release of neurotransmitter into the synaptic cleft. In addition, backpropagating action potentials have been recorded in the dendrites of pyramidal neurons, which are ubiquitous in the neocortex.[c] These are thought to have a role in spike-timing-dependent plasticity.
In the Hodgkin–Huxley membrane capacitance model, the speed of transmission of an action potential was undefined and it was assumed that adjacent areas became depolarized due to released ion interference with neighbouring channels. Measurements of ion diffusion and radii have since shown this not to be possible. Moreover, contradictory measurements of entropy changes and timing disputed the capacitance model as acting alone. Alternatively, Gilbert Ling's adsorption hypothesis posits that the membrane potential and action potential of a living cell are due to the adsorption of mobile ions onto adsorption sites of cells.
A neuron's ability to generate and propagate an action potential changes during development. How much the membrane potential of a neuron changes as the result of a current impulse is a function of the membrane input resistance. As a cell grows, more channels are added to the membrane, causing a decrease in input resistance. A mature neuron also undergoes shorter changes in membrane potential in response to synaptic currents. Neurons from a ferret lateral geniculate nucleus have a longer time constant and larger voltage deflection at P0 than they do at P30. One consequence of the decreasing action potential duration is that the fidelity of the signal can be preserved in response to high frequency stimulation. Immature neurons are more prone to synaptic depression than potentiation after high frequency stimulation.
In the early development of many organisms, the action potential is actually initially carried by calcium current rather than sodium current. The opening and closing kinetics of calcium channels during development are slower than those of the voltage-gated sodium channels that will carry the action potential in the mature neurons. The longer opening times for the calcium channels can lead to action potentials that are considerably slower than those of mature neurons. Xenopus neurons initially have action potentials that take 60–90 ms. During development, this time decreases to 1 ms. There are two reasons for this drastic decrease. First, the inward current becomes primarily carried by sodium channels. Second, the delayed rectifier, a potassium channel current, increases to 3.5 times its initial strength.
In order for the transition from a calcium-dependent action potential to a sodium-dependent action potential to proceed, new channels must be added to the membrane. If Xenopus neurons are grown in an environment with RNA synthesis or protein synthesis inhibitors, that transition is prevented. Even the electrical activity of the cell itself may play a role in channel expression. If action potentials in Xenopus myocytes are blocked, the typical increase in sodium and potassium current density is prevented or delayed.
This maturation of electrical properties is seen across species. Xenopus sodium and potassium currents increase drastically after a neuron goes through its final phase of mitosis. The sodium current density of rat cortical neurons increases by 600% within the first two postnatal weeks.
Main article: Neurotransmission
Several types of cells support an action potential, such as plant cells, muscle cells, and the specialized cells of the heart (in which the cardiac action potential occurs). However, the main excitable cell is the neuron, which also has the simplest mechanism for the action potential.
Neurons are electrically excitable cells composed, in general, of one or more dendrites, a single soma, a single axon and one or more axon terminals. Dendrites are cellular projections whose primary function is to receive synaptic signals. Their protrusions, known as dendritic spines, are designed to capture the neurotransmitters released by the presynaptic neuron. They have a high concentration of ligand-gated ion channels. These spines have a thin neck connecting a bulbous protrusion to the dendrite. This ensures that changes occurring inside the spine are less likely to affect the neighboring spines. The dendritic spine can, with rare exception (see LTP), act as an independent unit. The dendrites extend from the soma, which houses the nucleus, and many of the "normal" eukaryotic organelles. Unlike the spines, the surface of the soma is populated by voltage activated ion channels. These channels help transmit the signals generated by the dendrites. Emerging out from the soma is the axon hillock. This region is characterized by having a very high concentration of voltage-activated sodium channels. In general, it is considered to be the spike initiation zone for action potentials, i.e. the trigger zone. Multiple signals generated at the spines, and transmitted by the soma all converge here. Immediately after the axon hillock is the axon. This is a thin tubular protrusion traveling away from the soma. The axon is insulated by a myelin sheath. Myelin is composed of either Schwann cells (in the peripheral nervous system) or oligodendrocytes (in the central nervous system), both of which are types of glial cells. Although glial cells are not involved with the transmission of electrical signals, they communicate and provide important biochemical support to neurons. To be specific, myelin wraps multiple times around the axonal segment, forming a thick fatty layer that prevents ions from entering or escaping the axon. This insulation prevents significant signal decay as well as ensuring faster signal speed. This insulation, however, has the restriction that no channels can be present on the surface of the axon. There are, therefore, regularly spaced patches of membrane, which have no insulation. These nodes of Ranvier can be considered to be "mini axon hillocks", as their purpose is to boost the signal in order to prevent significant signal decay. At the furthest end, the axon loses its insulation and begins to branch into several axon terminals. These presynaptic terminals, or synaptic boutons, are a specialized area within the axon of the presynaptic cell that contains neurotransmitters enclosed in small membrane-bound spheres called synaptic vesicles.
Before considering the propagation of action potentials along axons and their termination at the synaptic knobs, it is helpful to consider the methods by which action potentials can be initiated at the axon hillock. The basic requirement is that the membrane voltage at the hillock be raised above the threshold for firing. There are several ways in which this depolarization can occur.
Action potentials are most commonly initiated by excitatory postsynaptic potentials from a presynaptic neuron. Typically, neurotransmitter molecules are released by the presynaptic neuron. These neurotransmitters then bind to receptors on the postsynaptic cell. This binding opens various types of ion channels. This opening has the further effect of changing the local permeability of the cell membrane and, thus, the membrane potential. If the binding increases the voltage (depolarizes the membrane), the synapse is excitatory. If, however, the binding decreases the voltage (hyperpolarizes the membrane), it is inhibitory. Whether the voltage is increased or decreased, the change propagates passively to nearby regions of the membrane (as described by the cable equation and its refinements). Typically, the voltage stimulus decays exponentially with the distance from the synapse and with time from the binding of the neurotransmitter. Some fraction of an excitatory voltage may reach the axon hillock and may (in rare cases) depolarize the membrane enough to provoke a new action potential. More typically, the excitatory potentials from several synapses must work together at nearly the same time to provoke a new action potential. Their joint efforts can be thwarted, however, by the counteracting inhibitory postsynaptic potentials.
Neurotransmission can also occur through electrical synapses. Due to the direct connection between excitable cells in the form of gap junctions, an action potential can be transmitted directly from one cell to the next in either direction. The free flow of ions between cells enables rapid non-chemical-mediated transmission. Rectifying channels ensure that action potentials move only in one direction through an electrical synapse. Electrical synapses are found in all nervous systems, including the human brain, although they are a distinct minority.
Main article: All-or-none law
The amplitude of an action potential is independent of the amount of current that produced it. In other words, larger currents do not create larger action potentials. Therefore, action potentials are said to be all-or-none signals, since either they occur fully or they do not occur at all.[d][e][f] This is in contrast to receptor potentials, whose amplitudes are dependent on the intensity of a stimulus. In both cases, the frequency of action potentials is correlated with the intensity of a stimulus.
Main article: Sensory neuron
In sensory neurons, an external signal such as pressure, temperature, light, or sound is coupled with the opening and closing of ion channels, which in turn alter the ionic permeabilities of the membrane and its voltage. These voltage changes can again be excitatory (depolarizing) or inhibitory (hyperpolarizing) and, in some sensory neurons, their combined effects can depolarize the axon hillock enough to provoke action potentials. Some examples in humans include the olfactory receptor neuron and Meissner's corpuscle, which are critical for the sense of smell and touch, respectively. However, not all sensory neurons convert their external signals into action potentials; some do not even have an axon. Instead, they may convert the signal into the release of a neurotransmitter, or into continuous graded potentials, either of which may stimulate subsequent neuron(s) into firing an action potential. For illustration, in the human ear, hair cells convert the incoming sound into the opening and closing of mechanically gated ion channels, which may cause neurotransmitter molecules to be released. In similar manner, in the human retina, the initial photoreceptor cells and the next layer of cells (comprising bipolar cells and horizontal cells) do not produce action potentials; only some amacrine cells and the third layer, the ganglion cells, produce action potentials, which then travel up the optic nerve.
Main article: Pacemaker potential
In sensory neurons, action potentials result from an external stimulus. However, some excitable cells require no such stimulus to fire: They spontaneously depolarize their axon hillock and fire action potentials at a regular rate, like an internal clock. The voltage traces of such cells are known as pacemaker potentials. The cardiac pacemaker cells of the sinoatrial node in the heart provide a good example.[g] Although such pacemaker potentials have a natural rhythm, this rhythm can be adjusted by external stimuli; for instance, heart rate can be altered by pharmaceuticals as well as signals from the sympathetic and parasympathetic nerves. The external stimuli do not cause the cell's repetitive firing, but merely alter its timing. In some cases, the regulation of frequency can be more complex, leading to patterns of action potentials, such as bursting.
The course of the action potential can be divided into five parts: the rising phase, the peak phase, the falling phase, the undershoot phase, and the refractory period. During the rising phase the membrane potential depolarizes (becomes more positive). The point at which depolarization stops is called the peak phase. At this stage, the membrane potential reaches a maximum. Subsequent to this, there is a falling phase. During this stage the membrane potential becomes more negative, returning towards resting potential. The undershoot, or afterhyperpolarization, phase is the period during which the membrane potential temporarily becomes more negatively charged than when at rest (hyperpolarized). Finally, the time during which a subsequent action potential is impossible or difficult to fire is called the refractory period, which may overlap with the other phases.
The course of the action potential is determined by two coupled effects. First, voltage-sensitive ion channels open and close in response to changes in the membrane voltage Vm. This changes the membrane's permeability to those ions. Second, according to the Goldman equation, this change in permeability changes the equilibrium potential Em, and, thus, the membrane voltage Vm.[h] Thus, the membrane potential affects the permeability, which then further affects the membrane potential. This sets up the possibility for positive feedback, which is a key part of the rising phase of the action potential. A complicating factor is that a single ion channel may have multiple internal "gates" that respond to changes in Vm in opposite ways, or at different rates.[i] For example, although raising Vm opens most gates in the voltage-sensitive sodium channel, it also closes the channel's "inactivation gate", albeit more slowly. Hence, when Vm is raised suddenly, the sodium channels open initially, but then close due to the slower inactivation.
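For the three ion species most relevant here, the Goldman equation referred to above takes the form

$$ E_m = \frac{RT}{F}\,\ln\!\left(\frac{P_{\text{K}}[\text{K}^+]_{\text{out}} + P_{\text{Na}}[\text{Na}^+]_{\text{out}} + P_{\text{Cl}}[\text{Cl}^-]_{\text{in}}}{P_{\text{K}}[\text{K}^+]_{\text{in}} + P_{\text{Na}}[\text{Na}^+]_{\text{in}} + P_{\text{Cl}}[\text{Cl}^-]_{\text{out}}}\right), $$

where the P terms are the membrane's permeabilities to the indicated ions; as channels open and close, these permeabilities change, and Em shifts accordingly.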
The voltages and currents of the action potential in all of its phases were modeled accurately by Alan Lloyd Hodgkin and Andrew Huxley in 1952,[i] for which they were awarded the Nobel Prize in Physiology or Medicine in 1963.[lower-Greek 2] However, their model considers only two types of voltage-sensitive ion channels, and makes several assumptions about them, e.g., that their internal gates open and close independently of one another. In reality, there are many types of ion channels, and they do not always open and close independently.[j]
A typical action potential begins at the axon hillock with a sufficiently strong depolarization, e.g., a stimulus that increases Vm. This depolarization is often caused by the injection of extra sodium cations into the cell; these cations can come from a wide variety of sources, such as chemical synapses, sensory neurons or pacemaker potentials.
For a neuron at rest, there is a high concentration of sodium and chloride ions in the extracellular fluid compared to the intracellular fluid, while there is a high concentration of potassium ions in the intracellular fluid compared to the extracellular fluid. The difference in concentrations, which causes ions to move from a high to a low concentration, and electrostatic effects (attraction of opposite charges) are responsible for the movement of ions in and out of the neuron. The inside of a neuron has a negative charge, relative to the cell exterior, from the movement of K+ out of the cell. The neuron membrane is more permeable to K+ than to other ions, allowing this ion to selectively move out of the cell, down its concentration gradient. This concentration gradient along with potassium leak channels present on the membrane of the neuron causes an efflux of potassium ions making the resting potential close to EK ≈ –75 mV. Since Na+ ions are in higher concentrations outside of the cell, the concentration and voltage differences both drive them into the cell when Na+ channels open. Depolarization opens both the sodium and potassium channels in the membrane, allowing the ions to flow into and out of the axon, respectively. If the depolarization is small (say, increasing Vm from −70 mV to −60 mV), the outward potassium current overwhelms the inward sodium current and the membrane repolarizes back to its normal resting potential around −70 mV. However, if the depolarization is large enough, the inward sodium current increases more than the outward potassium current and a runaway condition (positive feedback) results: the more inward current there is, the more Vm increases, which in turn further increases the inward current. A sufficiently strong depolarization (increase in Vm) causes the voltage-sensitive sodium channels to open; the increasing permeability to sodium drives Vm closer to the sodium equilibrium voltage ENa≈ +55 mV. The increasing voltage in turn causes even more sodium channels to open, which pushes Vm still further towards ENa. This positive feedback continues until the sodium channels are fully open and Vm is close to ENa. The sharp rise in Vm and sodium permeability correspond to the rising phase of the action potential.
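The equilibrium voltages quoted here, such as EK and ENa, follow from the Nernst equation,

$$ E_{\text{ion}} = \frac{RT}{zF}\,\ln\!\left(\frac{[\text{ion}]_{\text{out}}}{[\text{ion}]_{\text{in}}}\right), $$

where z is the ion's charge number; the large outward concentration gradient of potassium makes EK strongly negative, while the inward gradient of sodium makes ENa positive.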
The critical threshold voltage for this runaway condition is usually around −45 mV, but it depends on the recent activity of the axon. A cell that has just fired an action potential cannot fire another one immediately, since the Na+ channels have not recovered from the inactivated state. The period during which no new action potential can be fired is called the absolute refractory period. At longer times, after some but not all of the ion channels have recovered, the axon can be stimulated to produce another action potential, but with a higher threshold, requiring a much stronger depolarization, e.g., to −30 mV. The period during which action potentials are unusually difficult to evoke is called the relative refractory period.
The positive feedback of the rising phase slows and comes to a halt as the sodium ion channels become maximally open. At the peak of the action potential, the sodium permeability is maximized and the membrane voltage Vm is nearly equal to the sodium equilibrium voltage ENa. However, the same raised voltage that opened the sodium channels initially also slowly shuts them off, by closing their pores; the sodium channels become inactivated. This lowers the membrane's permeability to sodium relative to potassium, driving the membrane voltage back towards the resting value. At the same time, the raised voltage opens voltage-sensitive potassium channels; the increase in the membrane's potassium permeability drives Vm towards EK. Combined, these changes in sodium and potassium permeability cause Vm to drop quickly, repolarizing the membrane and producing the "falling phase" of the action potential.
The depolarized voltage opens additional voltage-dependent potassium channels, and some of these do not close right away when the membrane returns to its normal resting voltage. In addition, further potassium channels open in response to the influx of calcium ions during the action potential. The intracellular concentration of potassium ions is transiently unusually low, making the membrane voltage Vm even closer to the potassium equilibrium voltage EK. The membrane potential goes below the resting membrane potential. Hence, there is an undershoot or hyperpolarization, termed an afterhyperpolarization, that persists until the membrane potassium permeability returns to its usual value, restoring the membrane potential to the resting state.
Each action potential is followed by a refractory period, which can be divided into an absolute refractory period, during which it is impossible to evoke another action potential, and then a relative refractory period, during which a stronger-than-usual stimulus is required. These two refractory periods are caused by changes in the state of sodium and potassium channel molecules. When closing after an action potential, sodium channels enter an "inactivated" state, in which they cannot be made to open regardless of the membrane potential—this gives rise to the absolute refractory period. Even after a sufficient number of sodium channels have transitioned back to their resting state, it frequently happens that a fraction of potassium channels remains open, making it difficult for the membrane potential to depolarize, and thereby giving rise to the relative refractory period. Because the density and subtypes of potassium channels may differ greatly between different types of neurons, the duration of the relative refractory period is highly variable.
The absolute refractory period is largely responsible for the unidirectional propagation of action potentials along axons. At any given moment, the patch of axon behind the actively spiking part is refractory, but the patch in front, not having been activated recently, is capable of being stimulated by the depolarization from the action potential.
Main article: Nerve conduction velocity
The action potential generated at the axon hillock propagates as a wave along the axon. The currents flowing inwards at a point on the axon during an action potential spread out along the axon, and depolarize the adjacent sections of its membrane. If sufficiently strong, this depolarization provokes a similar action potential at the neighboring membrane patches. This basic mechanism was demonstrated by Alan Lloyd Hodgkin in 1937. After crushing or cooling nerve segments and thus blocking the action potentials, he showed that an action potential arriving on one side of the block could provoke another action potential on the other, provided that the blocked segment was sufficiently short.[k]
Once an action potential has occurred at a patch of membrane, the membrane patch needs time to recover before it can fire again. At the molecular level, this absolute refractory period corresponds to the time required for the voltage-activated sodium channels to recover from inactivation, i.e., to return to their closed state. There are many types of voltage-activated potassium channels in neurons. Some of them inactivate fast (A-type currents) and some of them inactivate slowly or do not inactivate at all; this variability guarantees that there will always be an available source of current for repolarization, even if some of the potassium channels are inactivated because of preceding depolarization. On the other hand, all neuronal voltage-activated sodium channels inactivate within several milliseconds during strong depolarization, making subsequent depolarization impossible until a substantial fraction of sodium channels have returned to their closed state. Although it limits the frequency of firing, the absolute refractory period ensures that the action potential moves in only one direction along an axon. The currents flowing in due to an action potential spread out in both directions along the axon. However, only the unfired part of the axon can respond with an action potential; the part that has just fired is unresponsive until the action potential is safely out of range and cannot restimulate that part. In the usual orthodromic conduction, the action potential propagates from the axon hillock towards the synaptic knobs (the axonal termini); propagation in the opposite direction, known as antidromic conduction, is very rare. However, if a laboratory axon is stimulated in its middle, both halves of the axon are "fresh", i.e., unfired; then two action potentials will be generated, one traveling towards the axon hillock and the other traveling towards the synaptic knobs.
In order to enable fast and efficient transduction of electrical signals in the nervous system, certain neuronal axons are covered with myelin sheaths. Myelin is a multilamellar membrane that enwraps the axon in segments separated by intervals known as nodes of Ranvier. It is produced by specialized cells: Schwann cells exclusively in the peripheral nervous system, and oligodendrocytes exclusively in the central nervous system. Myelin sheath reduces membrane capacitance and increases membrane resistance in the inter-node intervals, thus allowing a fast, saltatory movement of action potentials from node to node.[l][m][n] Myelination is found mainly in vertebrates, but an analogous system has been discovered in a few invertebrates, such as some species of shrimp.[o] Not all neurons in vertebrates are myelinated; for example, axons of the neurons comprising the autonomic nervous system are not, in general, myelinated.
Myelin prevents ions from entering or leaving the axon along myelinated segments. As a general rule, myelination increases the conduction velocity of action potentials and makes them more energy-efficient. Whether saltatory or not, the mean conduction velocity of an action potential ranges from 1 meter per second (m/s) to over 100 m/s, and, in general, increases with axonal diameter.[p]
Action potentials cannot propagate through the membrane in myelinated segments of the axon. Instead, the ionic current from an action potential at one node of Ranvier provokes another action potential at the next node; between nodes the current is carried by the cytoplasm, and it is sufficient to depolarize the first or even the second subsequent node of Ranvier. This apparent "hopping" of the action potential from node to node is known as saltatory conduction. Although the mechanism of saltatory conduction was suggested in 1925 by Ralph Lillie,[q] the first experimental evidence for saltatory conduction came from Ichiji Tasaki[r] and Taiji Takeuchi[s] and from Andrew Huxley and Robert Stämpfli.[t] By contrast, in unmyelinated axons, the action potential provokes another in the membrane immediately adjacent, and moves continuously down the axon like a wave.
Myelin has two important advantages: fast conduction speed and energy efficiency. For axons larger than a minimum diameter (roughly 1 micrometre), myelination increases the conduction velocity of an action potential, typically tenfold.[v] Conversely, for a given conduction velocity, myelinated fibers are smaller than their unmyelinated counterparts. For example, action potentials move at roughly the same speed (25 m/s) in a myelinated frog axon and an unmyelinated squid giant axon, but the frog axon has a roughly 30-fold smaller diameter and 1000-fold smaller cross-sectional area. Also, since the ionic currents are confined to the nodes of Ranvier, far fewer ions "leak" across the membrane, saving metabolic energy. This saving is a significant selective advantage, since the human nervous system uses approximately 20% of the body's metabolic energy.[v]
The length of axons' myelinated segments is important to the success of saltatory conduction. They should be as long as possible to maximize the speed of conduction, but not so long that the arriving signal is too weak to provoke an action potential at the next node of Ranvier. In nature, myelinated segments are generally long enough for the passively propagated signal to travel for at least two nodes while retaining enough amplitude to fire an action potential at the second or third node. Thus, the safety factor of saltatory conduction is high, allowing transmission to bypass nodes in case of injury. However, action potentials may end prematurely in certain places where the safety factor is low, even in unmyelinated neurons; a common example is the branch point of an axon, where it divides into two axons.
Some diseases degrade myelin and impair saltatory conduction, reducing the conduction velocity of action potentials.[w] The most well-known of these is multiple sclerosis, in which the breakdown of myelin impairs coordinated movement.
Main article: Cable theory
The flow of currents within an axon can be described quantitatively by cable theory and its elaborations, such as the compartmental model. Cable theory was developed in 1855 by Lord Kelvin to model the transatlantic telegraph cable[x] and was shown to be relevant to neurons by Hodgkin and Rushton in 1946.[y] In simple cable theory, the neuron is treated as an electrically passive, perfectly cylindrical transmission cable, which can be described by a partial differential equation
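The partial differential equation referred to here is not reproduced in the text; written in terms of the length and time constants introduced just below, the standard linear cable equation is usually given as:

```latex
\tau \frac{\partial V}{\partial t} = \lambda^{2} \frac{\partial^{2} V}{\partial x^{2}} - V
```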
where V(x, t) is the voltage across the membrane at time t and position x along the length of the neuron, and where λ and τ are the characteristic length and time scales on which those voltages decay in response to a stimulus. These scales can be determined from the resistances and capacitances per unit length of the fiber.
These time and length-scales can be used to understand the dependence of the conduction velocity on the diameter of the neuron in unmyelinated fibers. For example, the time-scale τ increases with both the membrane resistance rm and capacitance cm. As the capacitance increases, more charge must be transferred to produce a given transmembrane voltage (by the equation Q = CV); as the resistance increases, less charge is transferred per unit time, making the equilibration slower. In a similar manner, if the internal resistance per unit length ri is lower in one axon than in another (e.g., because the radius of the former is larger), the spatial decay length λ becomes longer and the conduction velocity of an action potential should increase. If the transmembrane resistance rm is increased, that lowers the average "leakage" current across the membrane, likewise causing λ to become longer, increasing the conduction velocity.
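For reference, in standard cable theory (and assuming the extracellular resistance is negligible) these constants are usually expressed in terms of the per-unit-length membrane resistance rm, membrane capacitance cm and internal (axial) resistance ri mentioned above:

```latex
\tau = r_{m} c_{m}, \qquad \lambda = \sqrt{\frac{r_{m}}{r_{i}}}
```

With these definitions, a larger axon (smaller ri) or a better-insulated membrane (larger rm) gives a longer λ, consistent with the faster conduction described above.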
In general, action potentials that reach the synaptic knobs cause a neurotransmitter to be released into the synaptic cleft.[z] Neurotransmitters are small molecules that may open ion channels in the postsynaptic cell; most axons have the same neurotransmitter at all of their termini. The arrival of the action potential opens voltage-sensitive calcium channels in the presynaptic membrane; the influx of calcium causes vesicles filled with neurotransmitter to migrate to the cell's surface and release their contents into the synaptic cleft.[aa] This complex process is inhibited by the neurotoxins tetanospasmin and botulinum toxin, which are responsible for tetanus and botulism, respectively.[ab]
Some synapses dispense with the "middleman" of the neurotransmitter, and connect the presynaptic and postsynaptic cells together.[ac] When an action potential reaches such a synapse, the ionic currents flowing into the presynaptic cell can cross the barrier of the two cell membranes and enter the postsynaptic cell through pores known as connexons.[ad] Thus, the ionic currents of the presynaptic action potential can directly stimulate the postsynaptic cell. Electrical synapses allow for faster transmission because they do not require the slow diffusion of neurotransmitters across the synaptic cleft. Hence, electrical synapses are used whenever fast response and coordination of timing are crucial, as in escape reflexes, the retina of vertebrates, and the heart.
A special case of a chemical synapse is the neuromuscular junction, in which the axon of a motor neuron terminates on a muscle fiber.[ae] In such cases, the released neurotransmitter is acetylcholine, which binds to the acetylcholine receptor, an integral membrane protein in the membrane (the sarcolemma) of the muscle fiber.[af] However, the acetylcholine does not remain bound; rather, it dissociates and is hydrolyzed by the enzyme, acetylcholinesterase, located in the synapse. This enzyme quickly reduces the stimulus to the muscle, which allows the degree and timing of muscular contraction to be regulated delicately. Some poisons inactivate acetylcholinesterase to prevent this control, such as the nerve agents sarin and tabun,[ag] and the insecticides diazinon and malathion.[ah]
The cardiac action potential differs from the neuronal action potential by having an extended plateau, in which the membrane is held at a high voltage for a few hundred milliseconds prior to being repolarized by the potassium current as usual.[ai] This plateau is due to the action of slower calcium channels opening and holding the membrane voltage near their equilibrium potential even after the sodium channels have inactivated.
The cardiac action potential plays an important role in coordinating the contraction of the heart.[ai] The cardiac cells of the sinoatrial node provide the pacemaker potential that synchronizes the heart. The action potentials of those cells propagate to and through the atrioventricular node (AV node), which is normally the only conduction pathway between the atria and the ventricles. Action potentials from the AV node travel through the bundle of His and thence to the Purkinje fibers.[note 2] Conversely, anomalies in the cardiac action potential—whether due to a congenital mutation or injury—can lead to human pathologies, especially arrhythmias.[ai] Several anti-arrhythmia drugs act on the cardiac action potential, such as quinidine, lidocaine, beta blockers, and verapamil.[aj]
The action potential in a normal skeletal muscle cell is similar to the action potential in neurons. Action potentials result from the depolarization of the cell membrane (the sarcolemma), which opens voltage-sensitive sodium channels; these become inactivated and the membrane is repolarized through the outward current of potassium ions. The resting potential prior to the action potential is typically −90mV, somewhat more negative than typical neurons. The muscle action potential lasts roughly 2–4 ms, the absolute refractory period is roughly 1–3 ms, and the conduction velocity along the muscle is roughly 5 m/s. The action potential releases calcium ions that free up the tropomyosin and allow the muscle to contract. Muscle action potentials are provoked by the arrival of a pre-synaptic neuronal action potential at the neuromuscular junction, which is a common target for neurotoxins.[ag]
Plant and fungal cells[ak] are also electrically excitable. The fundamental difference from animal action potentials is that the depolarization in plant cells is not accomplished by an uptake of positive sodium ions, but by release of negative chloride ions.[al][am][an] In 1906, J. C. Bose published the first measurements of action potentials in plants, which had previously been discovered by Burdon-Sanderson and Darwin. An increase in cytoplasmic calcium ions may be the cause of anion release into the cell. This makes calcium a precursor to ion movements, such as the influx of negative chloride ions and efflux of positive potassium ions, as seen in barley leaves.
The initial influx of calcium ions also poses a small cellular depolarization, causing the voltage-gated ion channels to open and allowing full depolarization to be propagated by chloride ions.
Some plants (e.g. Dionaea muscipula) use sodium-gated channels to operate movements and essentially "count". Dionaea muscipula, also known as the Venus flytrap, is found in subtropical wetlands in North and South Carolina. When soil nutrients are poor, the flytrap relies on a diet of insects and other small animals. Despite research on the plant, the molecular basis of the Venus flytrap's behaviour, and of carnivorous plants in general, remains poorly understood.
However, plenty of research has been done on action potentials and how they affect movement and clockwork within the Venus flytrap. To start, the resting membrane potential of the Venus flytrap (−120 mV) is lower than that of animal cells (usually −90 mV to −40 mV). The lower resting potential makes it easier to activate an action potential. Thus, when an insect lands on the trap of the plant, it triggers a hair-like mechanoreceptor. This receptor then activates an action potential which lasts around 1.5 ms. Ultimately, this causes an influx of positive calcium ions into the cell, slightly depolarizing it.
However, the flytrap doesn't close after one trigger. Instead, it requires the activation of two or more hairs. If only one hair is triggered, the activation is discarded as a false positive. Further, the second hair must be activated within a certain time interval (0.75–40 s) for it to register with the first activation. Thus, calcium builds up from the first trigger and then slowly decays. When the second action potential is fired within the time interval, the calcium threshold to depolarize the cell is reached, closing the trap on the prey within a fraction of a second.
Together with the subsequent release of positive potassium ions, the action potential in plants involves an osmotic loss of salt (KCl), whereas the animal action potential is osmotically neutral because equal amounts of entering sodium and leaving potassium cancel each other osmotically. The interaction of electrical and osmotic relations in plant cells[ao] appears to have arisen from an osmotic function of electrical excitability in a common unicellular ancestor of plants and animals living under changing salinity conditions. Further, the present function of rapid signal transmission is seen as a newer accomplishment of metazoan cells in a more stable osmotic environment. It is likely that the familiar signaling function of action potentials in some vascular plants (e.g. Mimosa pudica) arose independently from that in metazoan excitable cells.
Unlike the rising phase and peak, the falling phase and after-hyperpolarization seem to depend primarily on cations that are not calcium. To initiate repolarization, the cell requires movement of potassium out of the cell through passive transport across the membrane. This differs from neurons because the movement of potassium does not dominate the decrease in membrane potential; in fact, to fully repolarize, a plant cell requires energy in the form of ATP to assist in the release of hydrogen from the cell, utilizing a transporter commonly known as H+-ATPase.
Action potentials are found throughout multicellular organisms, including plants, invertebrates such as insects, and vertebrates such as reptiles and mammals.[ap] Sponges seem to be the main phylum of multicellular eukaryotes that does not transmit action potentials, although some studies have suggested that these organisms have a form of electrical signaling, too.[aq] The resting potential, as well as the size and duration of the action potential, have not varied much with evolution, although the conduction velocity does vary dramatically with axonal diameter and myelination.
| Animal | Cell type | Resting potential (mV) | AP increase (mV) | AP duration (ms) | Conduction speed (m/s) |
|---|---|---|---|---|---|
| Squid (Loligo) | Giant axon | −60 | 120 | 0.75 | 35 |
| Earthworm (Lumbricus) | Median giant fiber | −70 | 100 | 1.0 | 30 |
| Cockroach (Periplaneta) | Giant fiber | −70 | 80–104 | 0.4 | 10 |
| Frog (Rana) | Sciatic nerve axon | −60 to −80 | 110–130 | 1.0 | 7–30 |
| Cat (Felis) | Spinal motor neuron | −55 to −80 | 80–110 | 1–1.5 | 30–120 |
Given its conservation throughout evolution, the action potential seems to confer evolutionary advantages. One function of action potentials is rapid, long-range signaling within the organism; the conduction velocity can exceed 110 m/s, which is one-third the speed of sound. For comparison, a hormone molecule carried in the bloodstream moves at roughly 8 m/s in large arteries. Part of this function is the tight coordination of mechanical events, such as the contraction of the heart. A second function is the computation associated with its generation. Being an all-or-none signal that does not decay with transmission distance, the action potential has similar advantages to digital electronics. The integration of various dendritic signals at the axon hillock and its thresholding to form a complex train of action potentials is another form of computation, one that has been exploited biologically to form central pattern generators and mimicked in artificial neural networks.
The common prokaryotic/eukaryotic ancestor, which lived perhaps four billion years ago, is believed to have had voltage-gated channels. This functionality was likely, at some later point, cross-purposed to provide a communication mechanism. Even modern single-celled bacteria can utilize action potentials to communicate with other bacteria in the same biofilm.
See also: Electrophysiology
The study of action potentials has required the development of new experimental methods. The initial work, prior to 1955, was carried out primarily by Alan Lloyd Hodgkin and Andrew Fielding Huxley, who were, along with John Carew Eccles, awarded the 1963 Nobel Prize in Physiology or Medicine for their contribution to the description of the ionic basis of nerve conduction. It focused on three goals: isolating signals from single neurons or axons, developing fast, sensitive electronics, and shrinking electrodes enough that the voltage inside a single cell could be recorded.
The first problem was solved by studying the giant axons found in the neurons of the squid (Loligo forbesii and Doryteuthis pealeii, at the time classified as Loligo pealeii).[ar] These axons are so large in diameter (roughly 1 mm, or 100-fold larger than a typical neuron) that they can be seen with the naked eye, making them easy to extract and manipulate.[i][as] However, they are not representative of all excitable cells, and numerous other systems with action potentials have been studied.
The second problem was addressed with the crucial development of the voltage clamp,[at] which permitted experimenters to study the ionic currents underlying an action potential in isolation, and eliminated a key source of electronic noise, the current IC associated with the capacitance C of the membrane. Since the current equals C times the rate of change of the transmembrane voltage Vm, the solution was to design a circuit that kept Vm fixed (zero rate of change) regardless of the currents flowing across the membrane. Thus, the current required to keep Vm at a fixed value is a direct reflection of the current flowing through the membrane. Other electronic advances included the use of Faraday cages and electronics with high input impedance, so that the measurement itself did not affect the voltage being measured.
The third problem, that of obtaining electrodes small enough to record voltages within a single axon without perturbing it, was solved in 1949 with the invention of the glass micropipette electrode,[au] which was quickly adopted by other researchers.[av][aw] Refinements of this method are able to produce electrode tips that are as fine as 100 Å (10 nm), which also confers high input impedance. Action potentials may also be recorded with small metal electrodes placed just next to a neuron, with neurochips containing EOSFETs, or optically with dyes that are sensitive to Ca2+ or to voltage.[ax]
While glass micropipette electrodes measure the sum of the currents passing through many ion channels, studying the electrical properties of a single ion channel became possible in the 1970s with the development of the patch clamp by Erwin Neher and Bert Sakmann. For this discovery, they were awarded the Nobel Prize in Physiology or Medicine in 1991.[lower-Greek 3] Patch-clamping verified that ionic channels have discrete states of conductance, such as open, closed and inactivated.
Optical imaging technologies have been developed in recent years to measure action potentials, either via simultaneous multisite recordings or with ultra-spatial resolution. Using voltage-sensitive dyes, action potentials have been optically recorded from a tiny patch of cardiomyocyte membrane.[ay]
Several neurotoxins, both natural and synthetic, are designed to block the action potential. Tetrodotoxin from the pufferfish and saxitoxin from the Gonyaulax (the dinoflagellate genus responsible for "red tides") block action potentials by inhibiting the voltage-sensitive sodium channel;[az] similarly, dendrotoxin from the black mamba snake inhibits the voltage-sensitive potassium channel. Such inhibitors of ion channels serve an important research purpose, by allowing scientists to "turn off" specific channels at will, thus isolating the other channels' contributions; they can also be useful in purifying ion channels by affinity chromatography or in assaying their concentration. However, such inhibitors also make effective neurotoxins, and have been considered for use as chemical weapons. Neurotoxins aimed at the ion channels of insects have been effective insecticides; one example is the synthetic permethrin, which prolongs the activation of the sodium channels involved in action potentials. The ion channels of insects are sufficiently different from their human counterparts that there are few side effects in humans.
The role of electricity in the nervous systems of animals was first observed in dissected frogs by Luigi Galvani, who studied it from 1791 to 1797.[ba] Galvani's results stimulated Alessandro Volta to develop the Voltaic pile—the earliest-known electric battery—with which he studied animal electricity (such as electric eels) and the physiological responses to applied direct-current voltages.[bb]
Scientists of the 19th century studied the propagation of electrical signals in whole nerves (i.e., bundles of neurons) and demonstrated that nervous tissue was made up of cells, instead of an interconnected network of tubes (a reticulum). Carlo Matteucci followed up Galvani's studies and demonstrated that cell membranes had a voltage across them and could produce direct current. Matteucci's work inspired the German physiologist, Emil du Bois-Reymond, who discovered the action potential in 1843. The conduction velocity of action potentials was first measured in 1850 by du Bois-Reymond's friend, Hermann von Helmholtz. To establish that nervous tissue is made up of discrete cells, the Spanish physician Santiago Ramón y Cajal and his students used a stain developed by Camillo Golgi to reveal the myriad shapes of neurons, which they rendered painstakingly. For their discoveries, Golgi and Ramón y Cajal were awarded the 1906 Nobel Prize in Physiology or Medicine.[lower-Greek 4] Their work resolved a long-standing controversy in the neuroanatomy of the 19th century; Golgi himself had argued for the network model of the nervous system.
The 20th century was a significant era for electrophysiology. In 1902 and again in 1912, Julius Bernstein advanced the hypothesis that the action potential resulted from a change in the permeability of the axonal membrane to ions.[bc] Bernstein's hypothesis was confirmed by Ken Cole and Howard Curtis, who showed that membrane conductance increases during an action potential.[bd] In 1907, Louis Lapicque suggested that the action potential was generated as a threshold was crossed,[be] what would be later shown as a product of the dynamical systems of ionic conductances. In 1949, Alan Hodgkin and Bernard Katz refined Bernstein's hypothesis by considering that the axonal membrane might have different permeabilities to different ions; in particular, they demonstrated the crucial role of the sodium permeability for the action potential.[bf] They made the first actual recording of the electrical changes across the neuronal membrane that mediate the action potential.[lower-Greek 5] This line of research culminated in the five 1952 papers of Hodgkin, Katz and Andrew Huxley, in which they applied the voltage clamp technique to determine the dependence of the axonal membrane's permeabilities to sodium and potassium ions on voltage and time, from which they were able to reconstruct the action potential quantitatively.[i] Hodgkin and Huxley correlated the properties of their mathematical model with discrete ion channels that could exist in several different states, including "open", "closed", and "inactivated". Their hypotheses were confirmed in the mid-1970s and 1980s by Erwin Neher and Bert Sakmann, who developed the technique of patch clamping to examine the conductance states of individual ion channels.[bg] In the 21st century, researchers are beginning to understand the structural basis for these conductance states and for the selectivity of channels for their species of ion,[bh] through the atomic-resolution crystal structures,[bi] fluorescence distance measurements[bj] and cryo-electron microscopy studies.[bk]
Julius Bernstein was also the first to introduce the Nernst equation for resting potential across the membrane; this was generalized by David E. Goldman to the eponymous Goldman equation in 1943.[h] The sodium–potassium pump was identified in 1957[bl][lower-Greek 6] and its properties gradually elucidated,[bm][bn][bo] culminating in the determination of its atomic-resolution structure by X-ray crystallography.[bp] The crystal structures of related ionic pumps have also been solved, giving a broader view of how these molecular machines work.[bq]
Main article: Quantitative models of the action potential
Mathematical and computational models are essential for understanding the action potential, and offer predictions that may be tested against experimental data, providing a stringent test of a theory. The most important and accurate of the early neural models is the Hodgkin–Huxley model, which describes the action potential by a coupled set of four ordinary differential equations (ODEs).[i] Although the Hodgkin–Huxley model is itself a simplification of the real nerve membrane as it exists in nature, its complexity has inspired several even-more-simplified models,[br] such as the Morris–Lecar model[bs] and the FitzHugh–Nagumo model,[bt] both of which have only two coupled ODEs. The properties of the Hodgkin–Huxley and FitzHugh–Nagumo models and their relatives, such as the Bonhoeffer–Van der Pol model,[bu] have been well studied within mathematics,[bv] computation and electronics.[bw] However, these simple models of the generator potential and action potential fail to accurately reproduce the near-threshold neural spike rate and spike shape, specifically for mechanoreceptors like the Pacinian corpuscle. More modern research has focused on larger and more integrated systems; by joining action-potential models with models of other parts of the nervous system (such as dendrites and synapses), researchers can study neural computation and simple reflexes, such as escape reflexes and others controlled by central pattern generators.[bx]
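As an illustration of the simplified two-ODE models mentioned above, here is a minimal sketch of the FitzHugh–Nagumo system integrated with forward Euler; the parameter values and initial state are conventional textbook choices rather than anything specified in the text:

```python
import numpy as np

def fitzhugh_nagumo(i_ext=0.5, a=0.7, b=0.8, tau=12.5, dt=0.01, t_max=200.0):
    """Integrate the two coupled FitzHugh-Nagumo ODEs with forward Euler.

    dv/dt = v - v**3/3 - w + i_ext   (fast, voltage-like variable)
    dw/dt = (v + a - b*w) / tau      (slow recovery variable)
    """
    n = int(t_max / dt)
    t = np.arange(n) * dt
    v = np.empty(n)
    w = np.empty(n)
    v[0], w[0] = -1.0, 1.0                        # arbitrary initial state
    for i in range(1, n):
        dv = v[i-1] - v[i-1]**3 / 3.0 - w[i-1] + i_ext
        dw = (v[i-1] + a - b * w[i-1]) / tau
        v[i] = v[i-1] + dt * dv
        w[i] = w[i-1] + dt * dw
    return t, v, w

t, v, w = fitzhugh_nagumo()
print("spike peak of v:", round(v.max(), 3))      # repetitive spiking for this i_ext
```

With a constant stimulus above threshold the model produces a train of spikes, which is the qualitative behaviour (all-or-none firing and refractoriness) that the more detailed Hodgkin–Huxley equations reproduce quantitatively.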
Biologists study the living world by posing questions about it and seeking science-based responses. This approach is common to other sciences as well and is often referred to as the scientific method. The scientific process was used even in ancient times, but it was first documented by England’s Sir Francis Bacon (1561–1626) (Figure 1), who set up inductive methods for scientific inquiry. The scientific method is not exclusively used by biologists but can be applied to almost anything as a logical problem solving method.
The scientific process typically starts with an observation (often a problem to be solved) that leads to a question. Remember that science is very good at answering questions having to do with observations about the natural world, but is very bad at answering questions having to do with morals, ethics, or personal opinions.
| Questions that can be answered using science | Questions that cannot be answered using science |
|---|---|
| What is the optimum temperature for the growth of E. coli bacteria? | How tall is Santa Claus? |
| Do birds prefer bird feeders of a specific color? | Do angels exist? |
| What is the cause of this disease? | Which is better: classical music or rock and roll? |
| How effective is this drug in treating this disease? | What are the ethical implications of human cloning? |
Let’s think about a simple problem that starts with an observation and apply the scientific method to solve the problem. Imagine that one morning when you wake up and flip the switch to turn on your bedside lamp, the light won’t turn on. That is an observation that also describes a problem: the light won’t turn on. Of course, you would next ask the question: “Why won’t the light turn on?”
Recall that a hypothesis is a suggested explanation that can be tested. A hypothesis is NOT the question you are trying to answer – it is what you think the answer to the question will be and why. To solve a problem, several hypotheses may be proposed. For example, one hypothesis might be, “The light won’t turn on because the bulb is burned out.” But there could be other answers to the question, and therefore other hypotheses may be proposed. A second hypothesis might be, “The light won’t turn on because the lamp is unplugged” or “The light won’t turn on because the power is out.” A hypothesis should be based on credible background information. A hypothesis is NOT just a guess (not even an educated one), although it can be based on your prior experience (such as in the example where the light won’t turn on). In general, hypotheses in biology should be based on a credible, referenced source of information.
A hypothesis must be testable to ensure that it is valid. For example, a hypothesis that depends on what a dog thinks is not testable, because we can’t tell what a dog thinks. It should also be falsifiable, meaning that it can be disproven by experimental results. An example of an unfalsifiable hypothesis is “Red is a better color than blue.” There is no experiment that might show this statement to be false. To test a hypothesis, a researcher will conduct one or more experiments designed to eliminate one or more of the hypotheses. This is important: a hypothesis can be disproven, or eliminated, but it can never be proven. Science does not deal in proofs like mathematics. If an experiment fails to disprove a hypothesis, then that explanation (the hypothesis) is supported as the answer to the question. However, that doesn’t mean that later on, we won’t find a better explanation or design a better experiment that will be found to falsify the first hypothesis and lead to a better one.
A variable is any part of the experiment that can vary or change during the experiment. Typically, an experiment only tests one variable and all the other conditions in the experiment are held constant.
- The variable that is tested is known as the independent variable.
- The dependent variable is the thing (or things) that you are measuring as the outcome of your experiment.
- A constant is a condition that is the same between all of the tested groups.
- A confounding variable is a condition that is not held constant that could affect the experimental results.
A hypothesis often has the format “If [I change the independent variable in this way] then [I will observe that the dependent variable does this] because [of some reason].” For example, the first hypothesis might be, “If you change the light bulb, then the light will turn on because the bulb is burned out.” In this experiment, the independent variable (the thing that you are testing) would be changing the light bulb and the dependent variable is whether or not the light turns on. It would be important to hold all the other aspects of the environment constant, for example not messing with the lamp cord or trying to turn the lamp on using a different light switch. If the entire house had lost power during the experiment because a car hit the power pole, that would be a confounding variable.
You may have learned that a hypothesis can be phrased as an “If…then…” statement. Simple hypotheses can be phrased that way (but they must also include a “because”), but more complicated hypotheses may require several sentences. It is also very easy to get confused by trying to put your hypothesis into this format. Hypotheses do not have to be phrased as “if…then…” statements; it is just sometimes a useful format.
The results of your experiment are the data that you collect as the outcome. In the light experiment, your results are either that the light turns on or the light doesn’t turn on. Based on your results, you can make a conclusion. Your conclusion uses the results to answer your original question.
We can put the experiment with the light that won’t turn on into this framework:
- Observation: the light won’t turn on.
- Question: why won’t the light turn on?
- Hypothesis: the lightbulb is burned out.
- Prediction: if I change the lightbulb (independent variable), then the light will turn on (dependent variable).
- Experiment: change the lightbulb while leaving all other variables the same.
- Analyze the results: the light didn’t turn on.
- Conclusion: The lightbulb isn’t burned out. The results do not support the hypothesis; time to develop a new one!
- Hypothesis 2: the lamp is unplugged.
- Prediction 2: if I plug in the lamp, then the light will turn on.
- Experiment: plug in the lamp
- Analyze the results: the light turned on!
- Conclusion: The light wouldn’t turn on because the lamp was unplugged. The results support the hypothesis; it’s time to move on to the next experiment!
In practice, the scientific method is not as rigid and structured as it might at first appear. Sometimes an experiment leads to conclusions that favor a change in approach; often, an experiment brings entirely new scientific questions to the puzzle. Many times, science does not operate in a linear fashion; instead, scientists continually draw inferences and make generalizations, finding patterns as their research proceeds. Scientific reasoning is more complex than the scientific method alone suggests.
Another important aspect of designing an experiment is the presence of one or more control groups. A control group allows you to make a comparison that is important for interpreting your results. Control groups are samples that help you to determine that differences between your experimental groups are due to your treatment rather than a different variable – they eliminate alternate explanations for your results (including experimental error and experimenter bias). They increase reliability, often through the comparison of control measurements and measurements of the experimental groups. Often, the control group is a sample that is not treated with the independent variable, but is otherwise treated the same way as your experimental sample. This type of control group contains every feature of the experimental group except it is not given the manipulation that is hypothesized about (it does not get treated with the independent variable). Therefore, if the results of the experimental group differ from the control group, the difference must be due to the hypothesized manipulation, rather than some outside factor. It is common in complex experiments (such as those published in scientific journals) to have more control groups than experimental groups.
Question: Which fertilizer will produce the greatest number of tomatoes when applied to the plants?
Prediction and Hypothesis: If I apply different brands of fertilizer to tomato plants, the most tomatoes will be produced from plants watered with Brand A because Brand A advertises that it produces twice as many tomatoes as other leading brands.
Experiment: Purchase 10 tomato plants of the same type from the same nursery. Pick plants that are similar in size and age. Divide the plants into two groups of 5. Apply Brand A to the first group and Brand B to the second group according to the instructions on the packages. After 10 weeks, count the number of tomatoes on each plant.
Independent Variable: Brand of fertilizer.
Dependent Variable: Number of tomatoes.
The number of tomatoes produced depends on the brand of fertilizer applied to the plants.
Constants: amount of water, type of soil, size of pot, amount of light, type of tomato plant, length of time plants were grown.
Confounding variables: any of the above that are not held constant, plant health, diseases present in the soil or plant before it was purchased.
Results: Tomatoes fertilized with Brand A produced an average of 20 tomatoes per plant, while tomatoes fertilized with Brand B produced an average of 10 tomatoes per plant.
You’d want to use Brand A next time you grow tomatoes, right? But what if I told you that plants grown without fertilizer produced an average of 30 tomatoes per plant! Now what will you use on your tomatoes?
Results including control group: Tomatoes which received no fertilizer produced more tomatoes than either brand of fertilizer.
Conclusion: Although Brand A fertilizer produced more tomatoes than Brand B, neither fertilizer should be used because plants grown without fertilizer produced the most tomatoes!
Positive control groups are often used to show that the experiment is valid and that everything has worked correctly. You can think of a positive control group as being a group where you should be able to observe the thing that you are measuring (“the thing” should happen). The conditions in a positive control group should guarantee a positive result. If the positive control group doesn’t work, there may be something wrong with the experimental procedure.
Negative control groups are used to show whether a treatment had any effect. If your treated sample is the same as your negative control group, your treatment had no effect. You can also think of a negative control group as being a group where you should NOT be able to observe the thing that you are measuring (“the thing” shouldn’t happen), or where you should not observe any change in the thing that you are measuring (there is no difference between the treated and control groups). The conditions in a negative control group should guarantee a negative result. A placebo group is an example of a negative control group.
As a general rule, you need a positive control to validate a negative result, and a negative control to validate a positive result.
- You read an article in the NY Times that says some spinach is contaminated with Salmonella. You want to test the spinach you have at home in your fridge, so you wet a sterile swab and wipe it on the spinach, then wipe the swab on a nutrient plate (petri plate).
- You observe growth. Does this mean that your spinach is really contaminated? Consider an alternate explanation for growth: the swab, the water, or the plate is contaminated with bacteria. You could use a negative control to determine which explanation is true. If a swab is wet and wiped on a nutrient plate, do bacteria grow?
- You don’t observe growth. Does this mean that your spinach is really safe? Consider an alternate explanation for no growth: Salmonella isn’t able to grow on the type of nutrient you used in your plates. You could use a positive control to determine which explanation is true. If you wipe a known sample of Salmonella bacteria on the plate, do bacteria grow?
- In a drug trial, one group of subjects is given a new drug, while a second group is given a placebo drug (a sugar pill; something which appears like the drug, but doesn’t contain the active ingredient). Reduction in disease symptoms is measured. The second group receiving the placebo is a negative control group. You might expect a reduction in disease symptoms purely because the person knows they are taking a drug and therefore expects to be getting better. If the group treated with the real drug does not show a greater reduction in disease symptoms than the placebo group, the drug doesn’t really work. The placebo group sets a baseline against which the experimental group (treated with the drug) can be compared. A positive control group is not required for this experiment.
- In an experiment measuring the preference of birds for various types of food, a negative control group would be a “placebo feeder”. This would be the same type of feeder, but with no food in it. Birds might visit a feeder just because they are interested in it; an empty feeder would give a baseline level for bird visits. A positive control group might be a food that birds are known to like. This would be useful because if no birds visited any of the feeders, you couldn’t tell if this was because there were no birds around or because they didn’t like any of your food offerings!
- To test the effect of pH on the function of an enzyme, you would want a positive control group where you knew the enzyme would function (pH not changed) and a negative control group where you knew the enzyme would not function (no enzyme added). You need the positive control group so you know your enzyme is working: if you didn’t see a reaction in any of the tubes with the pH adjusted, you wouldn’t know if it was because the enzyme wasn’t working at all or because the enzyme just didn’t work at any of your tested pH values. You need the negative control group so you can ensure that there is no reaction taking place in the absence of enzyme: if the reaction proceeds without the enzyme, your results are meaningless.
Text adapted from: OpenStax, Biology. OpenStax CNX. May 27, 2016 http://cnx.org/contents/s8Hh0oOc@9.10:RD6ERYiU@5/The-Process-of-Science.
GEARS OR GEAR DRIVE
Gears are used to transmit motion from one shaft to another, or between a shaft and a slide, when the distance between them is small. In this blog we study the types of gears.
Here the velocity of the pitch point P is v = w1R1 = w2R2, so w1/w2 = R2/R1 = constant. When there is slip, w1/w2 is not constant and the drive is known as a negative drive.
In gears slip is impossible due to the interlocking of teeth; therefore a gear drive is a positive drive.
CLASSIFICATION OF GEARS
According to the axis of their shafts
When the teeth are straight and parallel to the axis of the shaft, the gear is known as a spur gear.
- There is no axial thrust.
- But high impact stress and noise are generated due to sudden engagement and disengagement.
- Not used at high speed; the velocity ratio is restricted to 6.
- A low-speed gear.
- Used in toys, watches, etc.
When the teeth are straight but inclined to the axis of the shaft, the gear is known as a helical gear.
- Axial thrust is present.
- Impact stress and noise are absent due to gradual engagement and disengagement.
- Due to the generation of axial thrust, it is also not used at very high speeds.
- The velocity ratio is limited to 10.
Double helical gear
A double helical gear is used to minimize the axial thrust. It is equivalent to a pair of helical gears mounted together, one having a right-hand helix and the other a left-hand helix.
Special case: if the left-hand and right-hand inclinations of a double helical gear meet at a common apex with no groove between them, the gear is known as a herringbone gear, which is also a type of gear.
For all the above types of gears the generating surface is a cylinder.
Types of gears when the axes of the shafts intersect
The motion between two intersecting shafts is equivalent to the rolling of two cones without slipping; the gear in general is known as a bevel gear, so for a bevel gear the generating surface is a cone. When the teeth are straight the gear is known as a (straight) bevel gear, and when the teeth are inclined the gear is known as a helical or spiral bevel gear. The advantage of the spiral bevel gear is that the load is taken up gradually, giving low impact stress.
SPECIAL CASE: in bevel gears, if the generating surfaces are the same and the axes of the shafts are at right angles to each other, the gear is known as a mitre gear.
Types of gears when the axes are neither parallel nor intersecting
Shafts which are neither parallel nor intersecting are called skew shafts. Here pure rolling motion is impossible; the motion is a combination of rolling and partial sliding. Examples: skew bevel gear, hypoid gear, worm and worm wheel.
LAW OF GEARING
According to the law of gearing, the ratio of angular speeds must remain constant for power transmission between the gear and pinion. When two mating surfaces satisfy this law of gearing, they work as a gear pair; any tooth profile which satisfies the law of gearing is known as a conjugate tooth profile. To satisfy the law of gearing, the common normal to both tooth profiles must pass through the pitch point, so that w1/w2 = constant. The common normal must intersect the line of centres at a fixed point, and that fixed point is known as the pitch point.
Important points about helical gears:
- Helix angle: the angle made by the inclination of the teeth to the axis of the gear is known as the helix angle; it is the key parameter of a helical gear.
- Normal pitch: the shortest distance between two adjacent teeth is known as the normal pitch. The normal pitch of two mating gears must be the same.
- Circular pitch: the distance measured between the corresponding faces of two adjacent teeth along the pitch circle is known as the circular pitch.
Terms used in types of gears
- Pitch circle diameter: the diameter of the circle produced by the pure rolling action of the gear is known as the pitch circle diameter.
- Pitch point: in two mating gears, the point of contact of the two pitch circles is known as the pitch point.
- Circular pitch: the distance measured along the circumference of the pitch circle from a point on one tooth to the corresponding point on the adjacent tooth; it equals the pitch circle circumference divided by the number of teeth on the gear.
- Diametral pitch: the ratio of the number of teeth to the pitch circle diameter.
- Module: the reciprocal of the diametral pitch, i.e. the ratio of the pitch circle diameter to the number of teeth; it is generally expressed in millimetres.
- Addendum: the radial distance from the pitch circle to the top of the tooth; generally the addendum equals one module.
- Dedendum: the radial distance between the pitch circle and the base of the tooth; generally the dedendum equals 1.157 module.
A short numerical sketch using these terms is given below.
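The helper below computes the terms listed above for a hypothetical gear; the function name and example values are illustrative only, and the addendum/dedendum factors are the conventional 1 and 1.157 modules quoted above:

```python
import math

def gear_parameters(teeth, pitch_diameter_mm):
    """Basic spur-gear terms for one gear (tooth count and pitch circle diameter in mm)."""
    module          = pitch_diameter_mm / teeth            # module m = D / T
    diametral_pitch = teeth / pitch_diameter_mm            # teeth per unit of pitch diameter
    circular_pitch  = math.pi * pitch_diameter_mm / teeth  # pitch-circle circumference / teeth
    addendum        = 1.0 * module                         # generally one module
    dedendum        = 1.157 * module                       # generally 1.157 module
    return module, diametral_pitch, circular_pitch, addendum, dedendum

# Hypothetical example: a 40-tooth gear with a 120 mm pitch circle diameter
print(gear_parameters(teeth=40, pitch_diameter_mm=120.0))
```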
Maths Class 9 Notes for Volume and Surface Area
SOLIDS: The bodies occupying space (i.e. having 3 dimensions) are called solids, such as a cuboid, a cube, a cylinder, a cone, a sphere, etc.
VOLUME (CAPACITY) OF A SOLID: The measure of space occupied by a solid body is called its volume. The units of volume are cubic centimeters (written as cm³) or cubic meters (written as m³).
CUBOID: A solid bounded by six rectangular faces is called a cuboid.
In the given figure, ABCDEFGH is a cuboid whose
(i) 6 faces are :
ABCD, EFGH, ABFE, CDHG, ADHE, and BCGF. Out of these, the four faces namely ABFE, DCGH, ADHE and BCGF are called the lateral faces of the cuboid.
(ii) 12 edges are :
AB, BC, CD, DA, EF, FG, GH, HE, CG, BF, AE and DH
(iii) 8 vertices are :
A, B, C, D, E, F, G and H.
Remark : A rectangular room is in the form of a cuboid and its 4 walls are its lateral surfaces.
Cube : A cuboid whose length, breadth and height are all equal, is called a cube.
A cube has 6 faces (each face is a square), 12 edges (all of equal length) and 8 vertices.
SURFACE AREA OF A CUBOID:
Let us consider a cuboid of length = l units,
breadth = b units and height = h units.
Then we have:
(i) Total surface area of the cuboid
= 2(l × b + b × h + h × l) sq. units
(ii) Lateral surface area of the cuboid
= [2(l + b) × h] sq. units
(iii) Area of the four walls of a room = [2(l + b) × h] sq. units
= (perimeter of the base × height) sq. units
(iv) Surface area of the four walls and ceiling of a room
= lateral surface area of the room + surface area of the ceiling
(v) Diagonal of the cuboid = √(l² + b² + h²) units
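The formulas above translate directly into a short calculation; the sketch below (the function name and sample dimensions are illustrative) evaluates them for one cuboid:

```python
import math

def cuboid_measures(l, b, h):
    """Total surface area, lateral surface area and diagonal of a cuboid."""
    total_surface   = 2 * (l * b + b * h + h * l)    # 2(lb + bh + hl)
    lateral_surface = 2 * (l + b) * h                # perimeter of base x height
    diagonal        = math.sqrt(l**2 + b**2 + h**2)  # sqrt(l^2 + b^2 + h^2)
    return total_surface, lateral_surface, diagonal

print(cuboid_measures(4, 3, 2))   # a 4 x 3 x 2 cuboid: (52, 28, ~5.385)
```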
SURFACE AREA OF A CUBE : Consider a cube of edge a unit.
(i) The total surface area of the cube = 6a² sq. units
(ii) Lateral surface area of the cube = 4a² sq. units
(iii) The diagonal of the cube = √3 a units
SURFACE AREA OF THE RIGHT CIRCULAR CYLINDER
Cylinder: Solids like circular pillars, circular pipes, circular pencils, road rollers and gas cylinders etc. are said to be in cylindrical shapes.
Curved surface area of the cylinder
= Area of the rectangular sheet
= length * breadth
= Perimeter of the base of the cylinder * height
= 2πr * h
Therefore, curved surface area of a cylinder = 2πrh
Total surface area of the cylinder = 2πrh + 2πr²
So total surface area of the cylinder = 2πr(r + h)
Remark: The value of π is approximately equal to 22/7 or 3.14.
If the cylinder is a hollow cylinder whose inner radius is r₁, outer radius is r₂ and height is h, then
Total surface area of the hollow cylinder
= 2πr₁h + 2πr₂h + 2π(r₂² − r₁²)
= 2π(r₁ + r₂)h + 2π(r₂ + r₁)(r₂ − r₁)
= 2π(r₁ + r₂)[h + r₂ − r₁]
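A quick check of the solid and hollow cylinder formulas above (the function names and radii are only an example):

```python
import math

def cylinder_surface(r, h):
    """Curved and total surface area of a solid right circular cylinder."""
    curved = 2 * math.pi * r * h
    total  = 2 * math.pi * r * (r + h)       # 2πrh + 2πr² = 2πr(r + h)
    return curved, total

def hollow_cylinder_surface(r1, r2, h):
    """Total surface area of a hollow cylinder with inner radius r1 and outer radius r2."""
    return 2 * math.pi * (r1 + r2) * (h + r2 - r1)

print(cylinder_surface(7, 10))            # curved = 140π, total = 238π
print(hollow_cylinder_surface(3, 5, 10))  # 2π(8)(12) = 192π
```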
SURFACE AREA OF A RIGHT CIRCULAR CONE
RIGHT CIRCULAR CONE
A figure generated by rotating a right triangle about a perpendicular side is called the right circular cone.
SURFACE AREA OF A RIGHT CIRCULAR CONE:
Curved surface area of a cone = (1/2) × l × 2πr = πrl,
where r is the base radius and l is the slant height.
Total surface area of the right circular cone
= curved surface area + area of the base
= πrl + πr² = πr(l + r)
Note: l² = r² + h²
by applying the Pythagoras theorem,
where h is the height of the cone.
Thus l = √(r² + h²) and r = √(l² − h²)
and h = √(l² − r²)
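The slant-height relation and the surface area formulas above can be verified with a small sketch (a 3–4–5 right triangle is used as the sample, so the slant height comes out to 5):

```python
import math

def cone_surface(r, h):
    """Slant height, curved surface area and total surface area of a right circular cone."""
    l = math.sqrt(r**2 + h**2)      # l² = r² + h² (Pythagoras theorem)
    curved = math.pi * r * l        # πrl
    total  = math.pi * r * (l + r)  # πr(l + r)
    return l, curved, total

print(cone_surface(r=3, h=4))       # slant height 5, curved area 15π, total area 24π
```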
SURFACE AREA OF A SPHERE
Sphere: A sphere is a three dimensional figure (solid figure) which is made up of all points in the space which lie at a constant distance called the radius, from a fixed point called the centre of the sphere.
Note : A sphere is like the surface of a ball. The word solid sphere is used for the solid whose surface is a sphere.
Surface area of a sphere: The surface area of a sphere of radius r = 4 × area of a circle of radius r = 4πr²
Curved surface area of a hemisphere = 2πr²
Total surface area of a solid hemisphere = 2πr² + πr² = 3πr²
Total surface area of a hollow hemisphere with inner and outer radii r₁ and r₂ respectively
= 2πr₁² + 2πr₂² + π(r₂² − r₁²)
= 2π(r₁² + r₂²) + π(r₂² − r₁²)
VOLUME OF A CUBOID :
Volume : Solid objects occupy space.
The measure of this occupied space is called volume of the object.
Capacity of a container : The capacity of an object is the volume of the substance its interior can accommodate.
The unit of measurement of either of the two is cubic unit.
Volume of a cuboid: Volume of a cuboid = area of the base × height, i.e. V = l × b × h
So, volume of a cuboid = base area × height = length × breadth × height
Volume of a cube: Volume of a cube = edge × edge × edge = a³
where a = edge of the cube
VOLUME OF A CYLINDER
Volume of a cylinder = πr²h
Volume of a hollow cylinder = πr₂²h − πr₁²h
= π(r₂² − r₁²)h
VOLUME OF A RIGHT CIRCULAR CONE
Volume of a cone = (1/3)πr²h, where r is the base radius
and h is the height of the cone.
VOLUME OF A SPHERE
Volume of a sphere = (4/3)πr³, where r is the radius of the sphere.
Volume of a hemisphere = (2/3)πr³
APPLICATION: Volume of the material of a hollow sphere with inner and outer radii r₁ and r₂ respectively
= (4/3)πr₂³ − (4/3)πr₁³ = (4/3)π(r₂³ − r₁³)
Volume of the material of a hollow hemisphere with inner and
outer radii r₁ and r₂ respectively = (2/3)π(r₂³ − r₁³)
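The sphere and hemisphere formulas above, collected into one small helper (the function name and the sample radius are illustrative):

```python
import math

def sphere_measures(r):
    """Surface area and volume of a sphere, plus the solid hemisphere of the same radius."""
    sphere_area       = 4 * math.pi * r**2
    sphere_volume     = (4 / 3) * math.pi * r**3
    hemisphere_area   = 3 * math.pi * r**2        # 2πr² curved + πr² flat face
    hemisphere_volume = (2 / 3) * math.pi * r**3
    return sphere_area, sphere_volume, hemisphere_area, hemisphere_volume

print(sphere_measures(7))
```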
Spheres in sphere
How many spheres with a radius of 15 cm can fit into a larger sphere with a radius of 150 cm?
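A minimal calculation, assuming the exercise intends a simple comparison of volumes and ignores the packing losses that occur when real spheres are stacked:

```python
import math

R_BIG, R_SMALL = 150.0, 15.0   # radii in cm, from the problem statement

# Ratio of the two sphere volumes; the 4/3*pi factors cancel, leaving (R/r)**3.
count = ((4 / 3) * math.pi * R_BIG**3) / ((4 / 3) * math.pi * R_SMALL**3)
print(int(count))              # (150 / 15)**3 = 1000
```

By volume alone, 1000 small spheres would fit; in practice, sphere packing (at most about 74% efficiency) would reduce this considerably.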
Next similar examples:
A gasholder has a spherical shape with a diameter of 20 m. How many m³ can it hold?
The intersection of a plane and a sphere is a circle with a radius of 60 mm. A cone whose base is this circle and whose apex is at the center of the sphere has a height of 34 mm. Calculate the surface area and volume of the sphere.
5500 lead shots, each 4 mm in diameter, are melted down into a single ball. What is its diameter?
- Fit ball
What is the surface area of a gym ball (fit ball) with a diameter of 65 cm?
An observatory dome has the shape of a hemisphere with a diameter d = 10 m. Calculate its surface area.
- Equilateral cylinder
A sphere is inscribed in a rotating (equilateral) cylinder, touching both bases and the lateral surface. Prove that the volume and the surface area of the cylinder are each one and a half times those of the inscribed sphere.
Cube is inscribed in the cube. Determine its volume if the edge of the cube is 10 cm long.
- Cube corners
From a wooden cube with edge 64 cm, cubes with edge 4 cm were cut away at 3 of its corners. How many more cubes of edge 4 cm can still be cut from it?
344 pupils attend the school. Half of them take snacks. 13 pupils who take snacks did not attend school. How many snacks were left over?
- Cone area and side
Calculate the surface area and volume of a cone of revolution with a height of 1.25 dm and a slant side of 17.8 dm.
An army regiment has six shooters. The first shooter hits the target with a probability of 49%, the others with 75%, 41%, 20%, 34% and 63%. Calculate the probability that the target is hit when all of them shoot at once.
- Cylinder surface, volume
The area of the cylinder surface and the cylinder jacket are in the ratio 3: 5. The height of the cylinder is 5 cm shorter than the radius of the base. Calculate surface area and volume of cylinder.
A trained athlete is able to exhale, after a deep breath, an additional 500 ml of air. In normal inhalation and exhalation, 500 ml of air is breathed per breath, and within one minute a person breathes in and out 14 times. What part of the air breathed per day is one such exhalation?
- Rotary cylinder 2
The base circumference of a rotary cylinder is equal in length to its height. What is the surface area of the cylinder if its volume is 250 dm³?
- Cube 1-2-3
Calculate the volume and surface area of the cube ABCDEFGH if: a) /AB/ = 4 cm b) perimeter of wall ABCD is 22 cm c) the sum of the lengths of all edges of the cube is 30 cm.
- Solid in water
A solid weighs 11.8 g in air and 10 g in water. Calculate the density of the solid.
- Cube volume
A cube has a surface area of 384 cm². Calculate its volume.
This course focuses on the fundamentals of digital circuits. Topics include binary numbers, logical algebra and its calculation, basic logic gates, combinational logic circuits, sequential circuits, arithmetic circuits and synchronous logic circuits. Most electronic devices are built on digital circuit techniques. By combining lectures and exercises, the course enables students to understand and acquire the fundamentals of logic circuits so that they can design a simple circuit. Students will experience the satisfaction of solving practical circuit problems by applying the knowledge of logic circuits acquired through this course.
By the end of this course, students will be able to:
1) Understand and explain the fundamentals of logical algebra and logic gates.
2) Analyze and design a simple combinational logic circuit.
3) Analyze and design a simple sequential logic circuit.
4) Analyze and design a simple arithmetic logic circuit.
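As an illustration of objective 2, here is a minimal Python sketch of a half adder and a full adder built from basic gates; it is only an illustrative aside, not part of the official course materials:

```python
def half_adder(a: int, b: int):
    """Half adder: sum = a XOR b, carry = a AND b."""
    return a ^ b, a & b

def full_adder(a: int, b: int, cin: int):
    """Full adder built from two half adders and an OR gate."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, cin)
    return s2, c1 | c2            # sum, carry-out

# Truth-table check for the full adder
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            print(a, b, cin, "->", full_adder(a, b, cin))
```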
Digital circuit, logic circuit, logical algebra, combinational logic circuit, flip-flop, sequential circuit, operational circuit
|Intercultural skills||Communication skills||Specialist skills||✔ Critical thinking skills||Practical and/or problem-solving skills|
|✔ ・Applied specialist skills on EEE|
At the beginning of each class, solutions to the exercise problems given in the previous class are reviewed. Towards the end of each class, students are given exercise problems related to that day's lecture to solve. To prepare for class, students should read the course schedule section and check what topics will be covered. Required learning should be completed outside of the classroom for preparation and review purposes. In the first half of the course, lectures include interactive exercises using the handbook.
|Course schedule||Required learning|
|Class 1||Digital information||Understand binary operation, digital and analog, and BCD code.|
|Class 2||Basic logic gate, Logical algebra||Understand basic logic gate, Boolean algebra, De Morgan's law, and simplification of logical equation.|
|Class 3||Fundamentals of logic circuit||Understand conversion between sum-of-products and product-of-sums forms, half adders, full adders, etc.|
|Class 4||Simplification of logic circuits 1||Understand combinational logic circuit, Karnaugh map with and without don't care.|
|Class 5||Simplification of logic circuits 2||Understand Quine–McCluskey algorithm|
|Class 6||CMOS logic gate, Overall exercise of the first half of the course||Understand electrical properties of digital ICs. Review the first half of the course with exercise problems|
|Class 7||Test the level of understanding of the first half of the course||Test level of understanding and evaluate achievement for classes 1–6.|
|Class 8||Flip-flop 1||Understand RS-FF, JK-FF and NAND-based and NOR-based circuits|
|Class 9||Flip-flop 2||Understand Master-Slave FF, edge-trigger circuit, D-FF|
|Class 10||Application of Flip-flop||Understand Shift-register and counter circuit|
|Class 11||Sequential circuit 1||Understand State transition diagram and table, and design the sequential circuit using various FFs|
|Class 12||Sequential circuit 2||Understand the simplification of sequential circuit and concept of the one-hot code|
|Class 13||Sequential circuit 3 Overall exercise of the latter half of the course||Understand the simplification with lengthy states Review the latter half of the course with exercise problems|
|Class 14||Overall exercise of the latter half of the course||Test level of understanding and evaluate achievement for classes 8–13.|
All lecture materials will be uploaded to OCW.
Midterm exam (report submission) (50%), final exam (report submission) (50%)
Deductive logic is the process of reasoning in which specific information is concluded from general premises. It normally takes the form of a syllogism, which in turn can be divided into categorical, conditional, and disjunctive syllogisms. The former consists of two or sometimes more categorical statements that indicate an existing relationship between two sets of objects or events. In this regard, categorical syllogisms have the following form:
- Major premise: All/Some/No A are B;
- Minor premise: All/Some/No C are A;
- Conclusion: All/Some/No C are B.
Conditional reasoning has a similar structure to the categorical syllogism with the difference that the major premise would have the form of ‘if A…then B’ and the minor premise would show whether either (no) A or (no) B happened. Finally, disjunctive syllogisms have the form of ‘major premise: A or B; minor premise: not A; therefore B’ or ‘ major premise: A or B; minor premise: A; therefore A’ (Mody and Carey, 2016). Thus, it is seen that although the different types of syllogisms may vary, they generally follow the same deductive logic.
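As a small illustrative aside (not taken from the cited sources), the disjunctive form 'A or B; not A; therefore B' can be checked mechanically by enumerating truth assignments:

```python
from itertools import product

# Enumerate truth assignments for A and B where the premises "A or B" and "not A" hold,
# and confirm that the conclusion "B" holds in every such case.
valid = all(
    b                                        # conclusion: B
    for a, b in product([True, False], repeat=2)
    if (a or b) and (not a)                  # premises: A or B; not A
)
print(valid)  # True: the disjunctive syllogism is truth-preserving
```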
Inductive reasoning, on the other hand, is based on the analysis of specific objects and events and making general conclusions. It includes analogical reasoning, establishing causal relationships, and the process of category induction, to name a few of the most often used logical schemas (McBride and Cutting, 2019). For instance, a person uses category induction when he or she encounters an unfamiliar object and intends to assign it to a certain class based on shared attributes.
As a result, the previous analysis revealed that deductive and inductive reasoning contrasted in the methods of determining the truth. However, probably the greatest difference lies in how these two approaches deal with truth itself. Deductive logic examines the absolute truth and, thus, its conclusions are either valid or not valid. On the contrary, induction concentrates more on the probability of something being true. Nevertheless, both reasoning methods serve their own unique purpose and, therefore, can coexist in harmony.
When making a logical decision, I usually start with defining the goals or determining the problem that should be solved. Unlike unconscious decisions, the logical process should thoroughly consider as many available options as possible (McBride and Cutting, 2019). For example, when deciding whether to purchase some product, it is necessary to be guided not only by the emotional utility of the commodity but also by its practical value.
Most of the time, I try to deliver decisions that would benefit all or most people involved. I think that there is always an alternative way to make one’s choice to follow a ‘win-win’ strategy. It is important not only because it is in some sense a moral obligation of each person but also because this approach would be beneficial in the long run compared to self-oriented decisions. Indeed, when decisions are made solely following my best interests, other parties would be unwilling to interact with me in the future.
The next step includes gathering as much information as possible and structuring it based on the priorities that were determined at the beginning. In this regard, more information means more possible options to consider, which increases the chances of a plausible decision outcome. Additionally, organizing the information in a manner that reflects the individual preferences, advantages, and disadvantages of each alternative leads to a more balanced choice.
After sufficient information is collected and structured, I can finally make a certain decision. However, this process seems easy in theory but may actually be hard in practice. That happens due to the existing uncertainty associated with decision-making, meaning that it is not always clear whether the most plausible option was chosen (McBride and Cutting, 2019). Being unsure is good as it can signify that more information and analysis are needed. Yet, it can also make an individual be stuck on this stage without being able to deliver any decision.
McBride, D. M., & Cutting, J. C. (2019). Cognitive psychology: Theory, process, and methodology (2nd ed.). SAGE Publications, Inc.
Mody, S., & Carey, S. (2016). The emergence of reasoning by the disjunctive syllogism in early childhood. Cognition, 154, 40-48.
Now we'll talk about powers and roots. In order to discuss the idea of an exponent, let's first think about multiplication. Multiplication is really a way of doing a whole lot of addition at once. So let's think about this. If I were to ask you to add six 4s together, no one in their right mind would sit there and add 4 plus 4 plus 4 plus 4.
No one would do that. Of course, what you would do is simply multiply 4 times 6. It's just important to keep in mind, that in any act of multiplication, really what you're doing is a whole lot of addition at once. Much in the same way, exponents are a way of doing a whole lot of multiplication at once.
If I were to ask you to multiply seven 3's together, we wouldn't write 3 times 3 times 3, we wouldn't write out that long expression. Instead, we would write 3 to the 7th. Fundamentally, 3 to the 7th means that we multiply seven factors of 3 multiplied together. So it's a very compact notation to express a lot of multiplication at once.
And I hasten to add, the test will not expect you to compute that value. It's not gonna be a test question, calculate 3 to the 7th, that's not gonna be on the test. But you'll have to handle that quantity in relation to other quantities. For example, use the laws of exponents to figure out 3 to the 7th and that whole thing squared or multiplying it by 3 to the 5th or dividing it by something.
You have to use it, but you're not gonna have to calculate its value. Symbolically, we could say that b to the n means that n factors of b are multiplied together. So this is the fundamental definition of what an exponent is. And right now I'll just say b is the base, n is the exponent, and b to the n is the power.
Now, this is a good definition for now, but as we'll see, this definition is ultimately somewhat naive and we're gonna have to expand it in later modules. And why is it naive? Well, if you think about it, how many factors of b that are multiplied together, this means that n is a counting number. That is to say, it is a positive integer.
And so this definition, this way of thinking about exponents is perfectly good as long as the exponents are positive integers. But as we will see in upcoming modules, there are all kinds of exponents that are not positive integers. We'll talk about negative exponents, and fraction exponents, all that. Let's not worry about that in this module.
In this module, we'll just stick with the positive integers. So we can stick with this very intuitive definition of what an exponent is. First of all, notice that we can give exponents to either numbers or variables. We have already seen variables with powers in the algebra module, especially in the videos on quadratics, where you have x squared. Notice that we can read that expression either as 7 to the power of 8 or 7 to the 8th.
Either one of those is perfectly correct. Notice that we have a different way of talking about exponents of 2 or 3. Something to the power of 2 is squared. And something to the power of 3 is cubed. So we would rarely say, something to the power of 3. And we would never say, something to the power of 2.
That just sounds awkward. We would always say that thing squared. If 1 is the base, then the exponent doesn't matter. 1 to any power is 1. And in fact, that expression, 1 to the n equals 1, that works for all n. That's not restricted to positive integers.
That actually works for every single number on the number line. So every single number on the number line, if you put it in for n, 1 to the n equals 1. So that's an important thing to remember. If 0 is the base, then 0 to any positive exponent is 0. So 0 to the n equals 0 as long as n is positive.
And in fact, this is true not only of positive integers, it's also true of positive fractions. It's true of everything to the right of 0 on the number line. So don't worry about 0 to the power of 0 or 0 to the power of negatives, you will not have to deal with this on the test. That gets into either illegal mathematics or other forms of mathematics that we don't need to worry about.
So that's just gonna be something we can ignore. An idea we have already discussed in the Integer Properties and Algebra lessons: if an exponent is not written, we can assume that the exponent is 1. We talked a little about this in prime factorizations, and we talked about this again in the algebra module. Another way to say that is that any base to the power of 1 means that we have only one factor of that base.
So 2 to the 1 is 2. 2 squared is 4. 2 cubed is 3 factors of 2, so that's 8. So again, we're using the exponent as a way to count the number of factors we have in the total product. What happens if the base is negative?
What if we start raising a negative number to powers? Well, negative 2 to the 1, of course, will be negative 2. Negative 2 squared, that's negative times negative, that would be positive 4. If we multiply by another factor of negative 2, positive times negative gives us negative 8. Multiply by another factor of negative 2: negative 8 times negative 2 gives us positive 16.
Multiply by another factor of negative 2 and we get negative 32. And notice we have kind of an alternating pattern here. We're going from negative to positive, negative to positive, negative to positive. So, a negative to any even power is a positive number. And a negative to any odd power is negative.
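A quick sketch that makes the even/odd sign rule concrete (the loop below is just an illustration):

```python
# Powers of -2: the sign alternates, so even exponents give a positive result
# and odd exponents give a negative result.
for n in range(1, 6):
    print(n, (-2) ** n)
# 1 -2
# 2 4
# 3 -8
# 4 16
# 5 -32
```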
We'll talk more about this in the next video. This has implications for solving algebraic equations. For example, the equation x squared equals 4 has two solutions, x equals 2 and x equals negative 2, because either of those squared equals 4. By contrast, the equation x cubed equals 8 has only one solution, x equals positive 2.
If we cube positive 2, we get positive 8. But if we cube negative 2, we get negative 8. Notice also that an equation of the form something squared equals a negative has no solution. So, for example, x minus 1 squared equals negative 4. Well, there's no way that we can square anything and get negative 4.
So that's an equation that has no solution. But we could have something cubed equals a negative. That's perfectly fine. If something cubed equals negative 1, then that thing must equal negative 1, and then we can solve for x. Finally, just as it is important to know your times tables, so it is important to know some of the basic powers of single-digit numbers.
So, here's what I'm gonna recommend memorizing and knowing. And it's helpful, actually, to multiply these out step-by-step, to help you remember them. First of all, I'll recommend knowing the powers of 2 up to at least 2 to the 9th. And why all the way up to 2 to the 9th? Well, we'll be talking about this more when we talk about some of the rules for exponents.
But again, very good actually to practice once in a while. Just keep on multiplying by 2 and get all these numbers. Just so that you verify for yourself where they come from. Know the powers of 3 up to at least 3 to the 4th, the powers of 4 up to the 4th, the powers of 5 up to the 4th. Again, multiply all these out from time to time just to remind yourself of all these so that you, you really can remember them very well.
And then you should know, of course, the squares and the cubes of everything from 6 to 9. And why would you need to know all these? Well, again, we'll talk about these more when we talk about some of the rules of exponents. And of course know all the powers of 10.
That was discussed in the Multiples of 10 lesson. It was very easy to figure out powers of 10: you're just adding zeros or, for negative powers, putting digits behind the decimal point. Fundamentally, b to the n means n factors of b multiplied together. That is the fundamental definition of an exponent. And it's very good, as we move through the laws of exponents, to keep in mind that fundamental definition of an exponent.
1 to any power is 1. 0 to any positive power is 0. A negative to an even power is positive, and a negative to an odd power is negative. An equation with an expression to an even power equal to a negative has no solution, but an odd power can equal a negative.
And finally, know the basic powers of the single-digit numbers.
In this article we'll look at straight bevel gear forces that result from gear mesh. These forces are important to determine the loads on shafts and bearings. Straight bevel gears are useful when input and output shafts are not parallel. Most often, straight bevel gear shafts are 90 degrees apart.
Anatomy of a Bevel Gear
The diagram below shows important factors in a bevel gear.
The broken lines represent the pitch line and together form the pitch cone. The pitch diameter is measured at the outside of the pitch lines. Pitch diameter is sometimes referred to as outside pitch diameter.
Using the assumption that all forces act at the center of each tooth at the pitch line we introduce the mean diameter. This is the diameter at the center of the teeth at the pitch line. Mean diameter is calculated by:
Mean diameter = Pitch diameter − Face width × cos(90° − γ), which is the same as Pitch diameter − Face width × sin γ.
Example Bevel Gear Pair
Consider the bevel gear pair where the pinion drives the gear.
Now let's look at the force on the pinion's teeth. The normal force, Fn can be broken into tangential, radial, and axial components for easier computation of shaft and bearing loads. The axial load also tells how much thrust the bearings must handle.
The equations for tangential force, Ft, radial force, Fr, and axial force, Fa are:
Ft = Fn cos Φ
Fr = Fn sin Φ cos γ = Ft tan Φ cos γ
Fa = Fn sin Φ sin γ = Ft tan Φ sin γ
Fn is normal force on tooth
Φ is pressure angle
γ is pitch angle
Let's assume we know the torque on the pinion's shaft is 225 in-lb. The pinion pitch diameter is 3 inches, face width is .75 inches, and pitch angle is 30.96 degrees. First, we need to calculate mean diameter:
Dmean = 3 - .75 cos (90 - 30.96) = 2.614 inches
We can calculate Ft by the following:
Ft = 2Tpinion ÷ Dmean = 2(225) ÷ 2.614 ≈ 172.1 lb
Tpinion is pinion torque
Dmean is mean diameter of pinion
Now let's look at the tangential, radial, and axial forces on both gears.
For shafts that are at 90 degrees:
Pinion radial force = Gear axial force
Pinion axial force = Gear radial force
For shafts that are not at 90 degrees, the relations above do not apply.
The tangential forces on the pinion and the gear are equal in magnitude and opposite in direction.
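A hedged Python sketch of the force equations above, run on the article's example numbers; the 20-degree pressure angle is an assumed typical value, since the example does not state one:

```python
import math

def bevel_gear_forces(torque, pitch_dia, face_width, pitch_angle_deg, pressure_angle_deg):
    """Tangential, radial, and axial forces on a straight bevel pinion."""
    gamma = math.radians(pitch_angle_deg)
    phi = math.radians(pressure_angle_deg)
    d_mean = pitch_dia - face_width * math.sin(gamma)   # same as d - b*cos(90 - γ)
    ft = 2 * torque / d_mean                            # tangential force
    fr = ft * math.tan(phi) * math.cos(gamma)           # radial force
    fa = ft * math.tan(phi) * math.sin(gamma)           # axial (thrust) force
    return d_mean, ft, fr, fa

# Article's example: 225 in-lb torque, 3 in pitch diameter, 0.75 in face width,
# 30.96-degree pitch angle; pressure angle assumed to be 20 degrees.
print(bevel_gear_forces(225, 3.0, 0.75, 30.96, 20.0))
```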
The MEboost Gear Forces Tool
MEboost has a gear forces tool that can easily determine straight bevel gear forces. We'll use the same example to illustrate its use. To run the tool, click the Gear Forces button on the Excel ribbon.
The gear forces form will appear. There are tabs for different gear types. In our case we'll use the Straight Bevel tab. The pinion and gear data are entered and the pinion input torque must be supplied.
The pinion and gear tangential, radial, and axial forces are shown. The tool also calculates the pitch angle for each gear.
Excel is a registered trademark of Microsoft Corporation. Used with permission from Microsoft.
CBSE Class 10 Maths Chapter 3 Pairs of Linear Equations in Two Variables notes are available here. This article is provided here for students of Class 10 to have quick revision for the chapter “Pairs of Linear Equations in Two Variables”. This chapter will help you to determine the values of the unknown variables with the help of various methods, such as the Graphical method and Algebraic methods. An example of pairs of linear equations will be: Rohan bought 2 pencils and 1 pen for Rs.10, whereas Sam bought 4 pencils and 2 pens for Rs. 20, so what is the price of each pen and pencil? Now to find the solution to this question, we can take pen and pencil as two different variables and write the linear equations for each condition. Hence, by solving the pair of linear equations in two variables, we will get the price of a pen and pencil. Let us go through the article to understand how to solve such a pair of equations.
An equation is a statement that two mathematical expressions having one or more variables are equal.
Equations in which the powers of all the variables involved are one are called linear equations. The degree of a linear equation is always one.
General Form of a Linear Equation in Two Variables
The general form of a linear equation in two variables is ax + by + c = 0, where a and b cannot be zero simultaneously.
Representing Linear Equations for a Word Problem
To represent a word problem as a linear equation:
- Identify unknown quantities and denote them by variables.
- Represent the relationships between quantities in a mathematical form, replacing the unknowns with variables.
Example: The cost of 5 pens and 7 pencils is Rs.50 whereas the cost of 7 pens and 5 pencils is Rs. 65. Represent the given word problem in linear equations form.
Solution: Let us say the cost of 1 pen is Rs.x, and the cost of 1 pencil is Rs.y
The cost of 5 pens and 7 pencils is Rs.50.
So, we can write in the form of a linear equation;
5x + 7y = 50
The cost of 7 pens and 5 pencils is Rs.65
So, we can write in the form of a linear equation;
7x + 5y = 65
Hence, 5x + 7y = 50 and 7x + 5y = 65 are the pairs of linear equations in two variables as per the given word problem.
Solution of a Linear Equation in 2 Variables
The solution of a linear equation in two variables is a pair of values (x, y), one for ‘x’ and the other for ‘y’, which makes the two sides of the equation equal.
Eg: If 2x + y = 4, then (0,4) is one of its solutions as it satisfies the equation. A linear equation in two variables has infinitely many solutions.
Geometrical Representation of a Linear Equation
Geometrically, a linear equation in two variables can be represented as a straight line.
2x – y + 1 = 0
⇒ y = 2x + 1
Let us draw the graph for the given equation.
Step 1: Put the value of x = 0, such that y = 1.
Step 2: Put the value of x = -1/2, i.e. -0.5, such that y = 0.
Hence, the coordinates are (0, 1) and (-0.5, 0). We can find more points to plot the graph by putting different values for x. Now based on these coordinates, we can draw a straight line in the graph connecting the two points (0, 1) and (-0.5, 0) as shown in the figure below.
Plotting a Straight Line
The graph of a linear equation in two variables is a straight line. We plot the straight line as follows:
Find at least two solutions of the equation, plot the corresponding points on the graph, and join them with a straight line. Any additional points plotted in this manner will lie on the same line.
For example, in the above section, we have plotted the graph for y = 2x + 1, which shows a straight line passing through the points (0, 1) and (-0.5, 0).
General Form of a Pair of Linear Equations in 2 Variables
A pair of linear equations in two variables can be represented as follows:
a1x + b1y + c1 = 0 and a2x + b2y + c2 = 0
The coefficients of x and y cannot be zero simultaneously for an equation.
Nature of 2 Straight Lines in a Plane
For a pair of straight lines on a plane, there are three possibilities.
i) They intersect at exactly one point
ii) They are parallel
iii) They are coincident
Representing a Pair of Linear Equations in Two Variables Graphically
Graphically, a pair of linear equations in two variables can be represented by a pair of straight lines.
Graphical Method of Finding Solution of a Pair of Linear Equations
The Graphical Method of finding the solution to a pair of linear equations is as follows:
- Plot both the equations (two straight lines)
- Find the point of intersection of the lines.
The point of intersection is the solution.
For example, the graph of two linear equations 2x + y – 6 = 0 and 4x – 2y – 4 = 0 is shown below. The point of intersection for the two graphs is (2,2).
Comparing the Ratios of Coefficients of a Linear Equation
Example: By comparing the ratios (a1/a2), (b1/b2) and (c1/c2), find out whether the equations 2x – 3y = 8 and 4x – 6y = 9 are consistent or inconsistent.
Solution: As per the given pair of linear equations, 2x – 3y = 8 and 4x – 6y = 9, we have;
a1 = 2, b1 = -3, c1 = -8
a2 = 4, b2 = -6, c2 = -9
(a1/a2) = 2/4 = 1/2
(b1/b2) = -3/-6 = 1/2
(c1/c2) = -8/-9 = 8/9
Since (a1/a2) = (b1/b2) ≠ (c1/c2),
So, the given linear equations are parallel to each other and have no possible solution. Hence, the given pair of linear equations is inconsistent.
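A short Python sketch of this ratio test, applied to the example above (the function name is illustrative, and it assumes the second equation's coefficients are nonzero):

```python
from fractions import Fraction

def classify_pair(a1, b1, c1, a2, b2, c2):
    """Classify a1x + b1y + c1 = 0 and a2x + b2y + c2 = 0 by comparing coefficient ratios.
    Assumes a2, b2 and c2 are nonzero."""
    if Fraction(a1, a2) != Fraction(b1, b2):
        return "consistent: unique solution (intersecting lines)"
    if Fraction(b1, b2) == Fraction(c1, c2):
        return "consistent: infinitely many solutions (coincident lines)"
    return "inconsistent: no solution (parallel lines)"

# 2x - 3y = 8 and 4x - 6y = 9, rewritten as 2x - 3y - 8 = 0 and 4x - 6y - 9 = 0
print(classify_pair(2, -3, -8, 4, -6, -9))   # inconsistent: no solution (parallel lines)
```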
Finding Solutions for Consistent Pairs of Linear Equations
The solution of a pair of linear equations is of the form (x,y), which satisfies both equations simultaneously. Solution for a consistent pair of linear equations can be found using
i) Elimination method
ii) Substitution Method
iii) Cross-multiplication method
iv) Graphical method
Substitution Method of Finding Solution of a Pair of Linear Equations
y – 2x = 1
x + 2y = 12
(i) Express one variable in terms of the other using one of the equations. In this case, y = 2x + 1.
(ii) Substitute for this variable (y) in the second equation to get a linear equation in one variable, x: x + 2 × (2x + 1) = 12
⇒ 5x + 2 = 12
(iii) Solve the linear equation in one variable to find the value of that variable.
5x + 2 = 12
⇒ x = 2
(iv) Substitute this value in one of the equations to get the value of the other variable.
y = 2 × 2 + 1
⇒y = 5
So, (2,5) is the required solution of the pair of linear equations y – 2x = 1 and x + 2y = 12.
Elimination Method of Finding Solution of a Pair of Linear Equations
Consider x + 2y = 8 and 2x – 3y = 2
Step 1: Make the coefficients of any variable the same by multiplying the equations with constants. Multiplying the first equation by 2, we get,
2x + 4y = 16
Step 2: Add or subtract the equations to eliminate one variable, giving a single variable equation.
Subtracting the second equation from this new equation:
(2x + 4y) – (2x – 3y) = 16 – 2
0(x) + 7y = 14
Step 3: Solve for one variable and substitute this in any equation to get the other variable.
y = 2,
x = 8 – 2 y
⇒ x = 8 – 4
⇒ x = 4
(4, 2) is the solution.
Cross-Multiplication Method of Finding Solution of a Pair of Linear Equations
For the pair of linear equations
a1x + b1y + c1=0
a2x + b2y + c2=0,
x and y can be calculated as
x = (b1c2−b2c1)/(a1b2−a2b1)
y = (c1a2−c2a1)/(a1b2−a2b1)
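These formulas translate directly into a small Python sketch (the function name is illustrative); here it is checked against the elimination example x + 2y = 8 and 2x – 3y = 2:

```python
from fractions import Fraction

def cross_multiplication(a1, b1, c1, a2, b2, c2):
    """Solve a1x + b1y + c1 = 0 and a2x + b2y + c2 = 0 (unique-solution case only)."""
    denom = a1 * b2 - a2 * b1
    if denom == 0:
        raise ValueError("No unique solution: the lines are parallel or coincident")
    x = Fraction(b1 * c2 - b2 * c1, denom)
    y = Fraction(c1 * a2 - c2 * a1, denom)
    return x, y

# x + 2y = 8  ->  x + 2y - 8 = 0;  2x - 3y = 2  ->  2x - 3y - 2 = 0
print(cross_multiplication(1, 2, -8, 2, -3, -2))   # (4, 2)
```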
Equations Reducible to a Pair of Linear Equations in 2 Variables
In this section, we learn about equations which are not linear but can be reduced to a pair of linear equations using substitution. Let us understand with the help of an example: consider the pair of equations 2/x + 3/y = 4 and 5/x – 4/y = 9, which are not linear in x and y.
In this case, we may make the substitution
1/x = u and 1/y = v
The pair of equations reduces to
2u + 3v = 4 …..(i)
5u – 4v = 9 ….(ii)
From the first equation, isolate the value of u.
u = (4-3v)/2
Now substitute the value of u in eq. (ii)
5[(4-3v)/2] – 4v = 9
Solving for v, we get;
v = 2/23
Now substitute the value of v in u = (4-3v)/2, to get the value of u.
u = 43/23
Since, u = 1/x or x = 1/u = 23/43
and v = 1/y or y = 1/v, so y = 23/2
Hence, the solutions are x = 23/43 and y = 23/2.
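A short Python sketch of the same substitution done numerically (illustrative only), solving the reduced linear pair and then inverting back:

```python
from fractions import Fraction

# Original (non-linear) pair: 2/x + 3/y = 4 and 5/x - 4/y = 9.
# With u = 1/x and v = 1/y they become 2u + 3v = 4 and 5u - 4v = 9.
# Eliminate v: multiply (i) by 4 and (ii) by 3, then add the results.
u = Fraction(4 * 4 + 9 * 3, 2 * 4 + 5 * 3)   # (16 + 27) / (8 + 15) = 43/23
v = (4 - 2 * u) / 3                          # back-substitute into 2u + 3v = 4
x, y = 1 / u, 1 / v                          # undo the substitution
print(u, v, x, y)                            # 43/23 2/23 23/43 23/2
```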
Frequently Asked Questions on CBSE Class 10 Maths Notes Chapter 3: Pair of Linear Equations in Two Variables
What is a linear equation?
A linear equation is an equation in which the highest power of the variable is always 1.
What is an algebraic equation?
An algebraic equation can be defined as a mathematical statement in which two expressions are set equal to each other.
What are some of the uses of linear equations?
1. Solve age-related problems; 2. Calculate speed; 3. Geometry-related problems; 4. Work, time and wage-related problems; 5. Calculate money and percentage-related sums.
The Women’s Rights Pioneers Monument, which consists of bronze figures of Sojourner Truth, Susan B. Anthony, and Elizabeth Cady Stanton, is the first monument in Central Park to depict actual women. The monument, sponsored and funded by the organization Monumental Women, commemorates the centennial of the ratification of the 19th Amendment, which gave women the right to vote. It’s part of the organization’s efforts to “break the bronze ceiling” by advocating for more commemoration of women in public spaces and promoting women’s history.
The new monument arrives at a critical time, as citizens and municipalities are reexamining the role and meaning of public commemorative monuments. This reckoning has called attention to the fact that many of the country’s monuments have complex histories, and they often reflect specific agendas that are not immediately apparent. It’s also revealed how monuments are never static: The ideas they represent can be investigated and questioned. The Women’s Rights Pioneers Monument is a major addition to the Park and to the City’s collection of monuments. To understand its significance and how it ended up in Central Park, it’s helpful to consider the broader history of monuments in the Park.
Commemorative monuments and Central Park
While many of the monuments in Central Park may look at home—and have been in the Park for a long time—they were not originally envisioned as part of its design. The Park’s designers, Frederick Law Olmsted and Calvert Vaux, did not include monuments because they conflicted with the purpose of the Park in a couple of fundamental ways.
The Park was created to provide an escape from the City and an experience of the countryside, which its designers and administrators ardently believed was essential to improving the quality of life for urban dwellers. The Park’s cohesive design created opportunities for gazing over the broad sweep of an open meadow, wandering around in the woods, taking a boat ride across a glassy lake, and many more rural experiences. To create a true sense of escape, the designers aimed to avoid reminders of urban life and limit urban activities such as team sports or military parades, which also required large areas of the Park to be used for small numbers of people. Commemorative monuments were also in this category—enjoyed by relatively few and not something that one would typically encounter on a country stroll.
In addition to being inconsistent with the Park as an experience, monuments contradicted the ideal of the Park as a democratic space. Elevating one person or group above others (literally and figuratively) was inconsistent with the idea that the Park was intended for everyone, which Olmsted and Vaux emphasized in every aspect of how they designed and promoted the Park. One of the most notable examples was their naming of Park entrances. They rejected a plan to name the entrances for prominent individuals, in favor of using them to honor and welcome the different professions and people of the City: women, children, engineers, artists, farmers, and others.
The monuments that were eventually added to the Park were a product of competing ideas about its purpose and its role as a civic space. Following the Civil War, a growing interest in commemorative statues resulted in numerous proposals to add them to the Park. In response to these pressures, the Park’s administrators established rules about accepting monuments and where they could go—policies that were necessary, they believed, to prevent the Park’s landscapes from becoming completely overwhelmed.
Examining the history of monuments in Central Park illuminates what is only now beginning to be fully understood by many New Yorkers—that most historic civic monuments did not originate from official, City-led efforts, nor were they decided upon through consensus, even if they were to live in a public space. By looking at the origins of some of the existing monuments near the location for the new monument, we see how various groups proposed monuments for different reasons, but they were united by a desire to assert themselves in a prominent public setting.
Some of the earliest monuments in Central Park were proposed by groups of European immigrants who sought to see themselves represented in the City’s premier public space—a symbol of their inclusion in American public life. It’s a result of the efforts of prominent New Yorkers of Scottish descent that Literary Walk features statues of two Scottish writers, Sir Walter Scott (dedicated 1872) and Robert Burns (dedicated 1880). The monument to William Shakespeare (dedicated 1872), one of the few figures that has truly withstood the test of time, was sponsored by a group of actors, to celebrate the Bard’s 300th birthday. Another neighboring statue depicts the writer Fitz Greene Halleck, largely unheard of now, but quite famous in his day. Sponsored by the publisher and fellow poet William Cullen Bryant, the monument to Halleck was the first in Central Park to commemorate an American.
The fact that these monuments made it into the Park, amongst many others vying for a place in the City’s most important public space, is a longer story. But it suggests that these sponsoring groups, members of the City’s elite or just beginning to gain influence, had some amount of power that enabled them to successfully advocate for and fund their cause. That these statues all commemorate men reflects the prevailing social order of the period when they were conceived, during the heyday for commemorative monuments, roughly 1870–1920.
By the 1930s, interest in commemorative monuments had waned, and new types of statues were added to the Park. Examples include Alice in Wonderland, a sculpture that children could play on and a memorial to the sponsor's wife; and the Burnett Memorial Fountain, honoring children’s book author Frances Hodgson Burnett. Both honor women in Central Park, albeit in non-traditional monumental forms. A monument to the Cuban revolutionary and writer, José Martí, sculpted by Anna Hyatt Huntington, was the last commemorative monument installed in the Park. Beginning in the 1960s, concerns about the Park’s deteriorating condition and a growing movement to recognize and protect its historic and cultural value resulted in increased vigilance among defenders of its historic purpose and design. This culminated in the City designating the Park as a scenic landmark in 1974, formalizing the process for additions and changes to the Park.
The Women’s Rights Pioneers Monument in Central Park
Many have wondered about the addition of the Women’s Rights Pioneers Monument given the Park’s landmark status and a general presumption against adding new structures and features. After the City agreed that it was important to commemorate the women’s suffrage movement, it also agreed to contemplate introducing a monument in Central Park for the first time since the Park was designated a landmark. That decision was made to acknowledge the importance of representing women—who make up over half of the population—in New York’s flagship park.
The Conservancy's responsibility for monuments—as part of our role as steward of Central Park—includes caring for them and all artworks in the Park’s collection. In the case of the new monument, the Conservancy was primarily involved in helping to identify a location that would be appropriate for the monument and the Park. This involved carefully considering the historic policies informing the placement of monuments in the Park, and the relationship between monuments and the original purpose and design of the Park. After much analysis and discussion, the City and the Conservancy identified a suitable spot on the Mall, which was designated in 1873 as the primary spot for monuments, across from the statue of Fitz Greene Halleck.
The importance of commemorating women’s suffrage and including women among the historical figures represented in Central Park was undisputed. However, the proposed monument generated a debate over what this representation should look like. This debate unfolded during the standard process for reviewing and approving all new monuments on public property, led by the Public Design Commission, and resulted in several changes to its design. The monument as originally proposed depicted two figures, Susan B. Anthony and Elizabeth Cady Stanton, the most well-known leaders of the women’s suffrage movement. It also acknowledged other contributors to the movement, including suffragist women of color. In the original design, their names and writings were to be included on a scroll that unfurled from the writing desk at which Anthony and Stanton were seated. Some critics saw this depiction as demeaning, and even racist. To address this and simplify the design, the scroll was taken out. Left with just Stanton and Anthony, however, the depiction was to some a “whitewashing” of history. After further debate and discussion with the Public Design Commission, Monumental Women redesigned the monument to include Sojourner Truth.
A traditional bronze monument was preferred from the beginning, in part to ensure that the monument would not look out of place on the Mall, but this form proved a challenge. The controversy over its design speaks to the limitations of traditional figurative monuments to be inclusive, to fully represent something much larger than the persons depicted—in this case, a decades-long struggle that involved numerous contributors and aimed to benefit over half the population. The founders of the Park recognized these same limitations, which is why they sought to regulate the quantity and placement of monuments in the Park. If they included everyone who wanted to be represented and made up the diverse and populous City, Central Park would have become overwhelmed with monuments. The debate over the design of the Women’s Rights Pioneers Monument also speaks to the urgent need to acknowledge the lives and accomplishments of people of color in the public realm and to make voices that have been forgotten, or silenced, heard. It may also be another indication that we need to rethink traditional ideas and forms of monumentality—to reinvent the monument for the 21st century.
In contemplating monuments today, it’s essential to consider the role and meaning of the public spaces in which they exist, and to investigate the relationship between monument and site. The addition of the Women’s Rights Pioneers Monument to Central Park’s eclectic—and certainly male-centric—collection of works of art affirms the presence, contributions, and rights of women in one of the most important and popular public spaces in the world. But it also highlights the uneasy relationship between monuments and Central Park, serving as a reminder that the Park’s purpose intentionally defied notions of monumentality, specifically as they were defined in the 19th and early 20th centuries.
Olmsted and Vaux saw Central Park, and specifically the experience of nature, as a great leveler. In one of his many essays about the purpose of the Park, Olmsted described his great pleasure in seeing groups of people with “an evident glee in the prospect of coming together,” and “all classes largely represented with a common purpose.” He marveled at how in the Park (versus in the City) urban dwellers were “not at all intellectual, competitive with none, disposing to jealousy and spiritual or intellectual pride toward none.” Olmsted and Vaux intended the Park as a place where all urban dwellers could come together, despite their differences and worries, and all enjoy nature and being together in beautiful surroundings. This experience would ultimately make them happier and healthier. It’s important to acknowledge that this was a vision and an ideal—one that has not yet been perfectly realized—and that not everyone has always had equal access to the Park. But Olmsted and Vaux, and other Park supporters, certainly saw the Park as a work of art, endowed with what they called a “single noble purpose,” which they defined as providing all urban dwellers with an opportunity to retreat from the City and experience nature. As we begin to rethink what a monument can be, perhaps the Park itself could be considered a monument—to the power of nature to restore our common humanity.
Marie Warsh is the Conservancy's historian.
Ethernet switches link Ethernet devices together by relaying Ethernet frames between the devices connected to the switches. By moving Ethernet frames between the switch ports, a switch links the traffic carried by the individual network connections into a larger Ethernet network.
Ethernet switches perform their linking function by bridging Ethernet frames between Ethernet segments. To do this, they copy Ethernet frames from one switch port to another, based on the Media Access Control (MAC) addresses in the Ethernet frames. Ethernet bridging was initially defined in the 802.1D IEEE Standard for Local and Metropolitan Area Networks: Media Access Control (MAC) Bridges.
The standardization of bridging operations in switches makes it possible to buy switches from different vendors that will work together when combined in a network design. That’s the result of lots of hard work on the part of the standards engineers to define a set of standards that vendors could agree upon and implement in their switch designs.
The first Ethernet bridges were two-port devices that could link two of the original Ethernet system’s coaxial cable segments together. At that time, Ethernet only supported connections to coaxial cables. Later, when twisted-pair Ethernet was developed and switches with many ports became widely available, they were often used as the central connection point, or hub, of Ethernet cabling systems, resulting in the name “switching hub.” Today, in the marketplace, these devices are simply called switches.
Things have changed quite a lot since Ethernet bridges were first developed in the early 1980s. Over the years, computers have become ubiquitous, and many people use multiple devices at their jobs, including their laptops, smartphones, and tablets. Every VoIP telephone and every printer is a computer, and even building management systems and access controls (door locks) are networked. Modern buildings have multiple wireless access points (APs) to provide 802.11 Wi-Fi services for things like smartphones and tablets, and each of the APs is also connected to a cabled Ethernet system. As a result, modern Ethernet networks may consist of hundreds of switch connections in a building, and thousands of switch connections across a campus network.
You should know that there is another network device used to link networks, called a router. There are major differences in the ways that bridges and routers work, and they both have advantages and disadvantages, as described in Routers or Bridges?. Very briefly, bridges move frames between Ethernet segments based on Ethernet addresses with little or no configuration of the bridge required. Routers move packets between networks based on high-level protocol addresses, and each network being linked must be configured into the router. However, both bridges and routers are used to build larger networks, and both devices are called switches in the marketplace.
We will use the words “bridge” and “switch” interchangeably to describe Ethernet bridges. However, note that “switch” is a generic term for network devices that may function as bridges, or routers, or even both, depending on their feature sets and configuration. The point is that as far as network experts are concerned, bridging and routing are different kinds of packet switching with different capabilities. For our purposes, we will follow the practices of Ethernet vendors who use the word “switch,” or more specifically, “Ethernet switch,” to describe devices that bridge Ethernet frames.
While the 802.1D standard provides the specifications for bridging local area network frames between ports of a switch, and for a few other aspects of basic bridge operation, the standard is also careful to avoid specifying issues like bridge or switch performance or how switches should be built. Instead, vendors compete with one another to provide switches at multiple price points and with multiple levels of performance and capabilities.
The result has been a large and competitive market in Ethernet switches, increasing the number of choices you have as a customer. The wide range of switch models and capabilities can be confusing. In Chapter 4, we discuss special purpose switches and their uses.
Networks exist to move data between computers. To perform that task, the network software organizes the data being moved into Ethernet frames. Frames travel over Ethernet networks, and the data field of a frame is used to carry data between computers. Frames are nothing more than arbitrary sequences of information whose format is defined in a standard.
The format for an Ethernet frame includes a destination address at the beginning, containing the address of the device to which the frame is being sent. Next comes a source address, containing the address of the device sending the frame. The addresses are followed by various other fields, including the data field that carries the data being sent between computers, as shown in Figure 1-1.
Frames are defined at Layer 2, or the Data Link Layer, of the Open Systems Interconnection (OSI) seven-layer network model. The seven-layer model was developed to organize the kinds of information sent between computers. It is used to define how that information will be sent and to structure the development of standards for transferring information. Since Ethernet switches operate on local area network frames at the Data Link Layer, you will sometimes hear them called link layer devices, as well as Layer 2 devices or Layer 2 switches.
Ethernet switches are designed so that their operations are invisible to the devices on the network, which explains why this approach to linking networks is also called transparent bridging. “Transparent” means that when you connect a switch to an Ethernet system, no changes are made in the Ethernet frames that are bridged. The switch will automatically begin working without requiring any configuration on the switch or any changes on the part of the computers connected to the Ethernet network, making the operation of the switch transparent to them.
Next, we will look at the basic functions used in a bridge to make it possible to forward Ethernet frames from one port to another.
An Ethernet switch controls the transmission of frames between switch ports connected to Ethernet cables using the traffic forwarding rules described in the IEEE 802.1D bridging standard. Traffic forwarding is based on address learning. Switches make traffic forwarding decisions based on the 48-bit media access control (MAC) addresses used in LAN standards, including Ethernet.
To do this, the switch learns which devices, called stations in the standard, are on which segments of the network by looking at the source addresses in all of the frames it receives. When an Ethernet device sends a frame, it puts two addresses in the frame. These two addresses are the destination address of the device it is sending the frame to, and the source address, which is the address of the device sending the frame.
The way the switch “learns” is fairly simple. Like all Ethernet interfaces, every port on a switch has a unique factory-assigned MAC address. However, unlike a normal Ethernet device that accepts only frames addressed directly to it, the Ethernet interface located in each port of a switch runs in promiscuous mode. In this mode, the interface is programmed to receive all frames it sees on that port, not just the frames that are being sent to the MAC address of the Ethernet interface on that switch port.
As each frame is received on each port, the switching software looks at the source address of the frame and adds that source address to a table of addresses that the switch maintains. This is how the switch automatically discovers which stations are reachable on which ports.
Figure 1-2 shows a switch linking six Ethernet devices. For convenience, we’re using short numbers for station addresses, instead of actual 6-byte MAC addresses. As stations send traffic, the switch receives every frame sent and builds a table, more formally called a forwarding database, that shows which stations can be reached on which ports. After every station has transmitted at least one frame, the switch will end up with a forwarding database such as that shown in Table 1-1.
Table 1-1. Forwarding database maintained by a switch
This database is used by the switch to make a packet forwarding decision in a process called adaptive filtering. Without an address database, the switch would have to send traffic received on any given port out all other ports to ensure that it reached its destination. With the address database, the traffic is filtered according to its destination. The switch is “adaptive” by learning new addresses automatically. This ability to learn makes it possible for you to add new stations to your network without having to manually configure the switch to know about the new stations, or the stations to know about the switch.
When the switch receives a frame that is destined for a station address that it hasn’t yet seen, the switch will send the frame out all of the ports other than the port on which it arrived. This process is called flooding, and is explained in more detail later in Frame Flooding.
Once the switch has built a database of addresses, it has all the information it needs to filter and forward traffic selectively. While the switch is learning addresses, it is also checking each frame to make a packet forwarding decision based on the destination address in the frame. Let’s look at how the forwarding decision works in a switch equipped with eight ports, as shown in Figure 1-2.
Assume that a frame is sent from station 15 to station 20. Since the frame is sent by station 15, the switch reads the frame in on port 6 and uses its address database to determine which of its ports is associated with the destination address in this frame. Here, the destination address corresponds to station 20, and the address database shows that to reach station 20, the frame must be sent out port 2.
Each port in the switch has the ability to hold frames in memory, before transmitting them onto the Ethernet cable connected to the port. For example, if the port is already busy transmitting when a frame arrives for transmission, then the frame can be held for the short time it takes for the port to complete transmitting the previous frame. To transmit the frame, the switch places the frame into the packet switching queue for transmission on port 2.
During this process, a switch transmitting an Ethernet frame from one port to another makes no changes to the data, addresses, or other fields of the basic Ethernet frame. Using our example, the frame is transmitted intact on port 2 exactly as it was received on port 6. Therefore, the operation of the switch is transparent to all stations on the network.
Note that the switch will not forward a frame destined for a station that is in the forwarding database onto a port unless that port is connected to the target destination. In other words, traffic destined for a device on a given port will only be sent to that port; no other ports will see the traffic intended for that device. This switching logic keeps traffic isolated to only those Ethernet cables, or segments, needed to receive the frame from the sender and transmit that frame to the destination device.
This prevents the flow of unnecessary traffic on other segments of the network system, which is a major advantage of a switch. This is in contrast to the early Ethernet system, where traffic from any station was seen by all other stations, whether they wanted the data or not. Switch traffic filtering reduces the traffic load carried by the set of Ethernet cables connected to the switch, thereby making more efficient use of the network bandwidth.
Switches automatically age out entries in their forwarding database after a period of time—typically five minutes—if they do not see any frames from a station. Therefore, if a station doesn’t send traffic for a designated period, then the switch will delete the forwarding entry for that station. This keeps the forwarding database from growing full of stale entries that might not reflect reality.
Of course, once the address entry has timed out, the switch won’t have any information in the database for that station the next time the switch receives a frame destined for it. This also happens when a station is newly connected to a switch, or when a station has been powered off and is turned back on more than five minutes later. So how does the switch handle packet forwarding for an unknown station?
The solution is simple: the switch forwards the frame destined for an unknown station out all switch ports other than the one it was received on, thus flooding the frame to all other stations. Flooding the frame guarantees that a frame with an unknown destination address will reach all network connections and be heard by the correct destination device, assuming that it is active and on the network. When the unknown device responds with return traffic, the switch will automatically learn which port the device is on, and will no longer flood traffic destined to that device.
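The following Python sketch ties these behaviors together in one toy model; it is an illustration under simplifying assumptions (string addresses, a single flat table), not a description of real switch hardware. It learns source addresses, forwards known destinations out a single port, floods unknown destinations out all other ports, and ages out entries that have been silent for roughly five minutes.

```python
import time

AGING_TIME = 300  # seconds; "typically five minutes" per the text

class LearningSwitch:
    """Toy model of address learning, forwarding, flooding, and aging."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}  # station address -> (port, last_seen timestamp)

    def receive(self, frame_src, frame_dst, in_port):
        now = time.time()
        # Age out entries that have been silent longer than the aging time.
        self.table = {mac: (port, seen) for mac, (port, seen) in self.table.items()
                      if now - seen < AGING_TIME}
        # Learn (or refresh) the source address on the port it arrived on.
        self.table[frame_src] = (in_port, now)
        # Forward out a single port if the destination is known; otherwise flood.
        entry = self.table.get(frame_dst)
        if entry is not None:
            out_port, _ = entry
            return [out_port] if out_port != in_port else []
        return [p for p in self.ports if p != in_port]

sw = LearningSwitch(ports=range(1, 9))          # an eight-port switch, as in Figure 1-2
print(sw.receive("mac-15", "mac-20", in_port=6))  # unknown destination: flood all ports but 6
print(sw.receive("mac-20", "mac-15", in_port=2))  # reply: mac-15 is now known, forward out port 6
```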
In addition to transmitting frames directed to a single address, local area networks are capable of sending frames directed to a group address, called a multicast address, which can be received by a group of stations. They can also send frames directed to all stations, using the broadcast address. Group addresses always begin with a specific bit pattern defined in the Ethernet standard, making it possible for a switch to determine which frames are destined for a specific device rather than a group of devices.
A frame sent to a multicast destination address can be received by all stations configured to listen for that multicast address. The Ethernet software, also called “interface driver” software, programs the interface to accept frames sent to the group address, so that the interface is now a member of that group. The Ethernet interface address assigned at the factory is called a unicast address, and any given Ethernet interface can receive unicast frames and multicast frames. In other words, the interface can be programmed to receive frames sent to one or more multicast group addresses, as well as frames sent to the unicast MAC address belonging to that interface.
The broadcast address is a special multicast group: the group of all of the stations in the network. A packet sent to the broadcast address (the address of all 1s) is received by every station on the LAN. Since broadcast packets must be received by all stations on the network, the switch will achieve that goal by flooding broadcast packets out all ports except the port that it was received on, since there’s no need to send the packet back to the originating device. This way, a broadcast packet sent by any station will reach all other stations on the LAN.
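As a brief illustration of how group-addressed frames can be distinguished from unicast frames: the low-order bit of the first byte of an Ethernet destination address marks a group (multicast) address, and the broadcast address is all ones. The sketch below is an added example; the sample unicast address is hypothetical, while the group address shown is the bridge multicast address mentioned later in this chapter.

```python
BROADCAST = "ff:ff:ff:ff:ff:ff"

def is_multicast(mac: str) -> bool:
    """A destination is a group (multicast) address if the low-order bit
    of the first byte is set; broadcast is the all-ones special case."""
    first_byte = int(mac.split(":")[0], 16)
    return bool(first_byte & 0x01)

def is_broadcast(mac: str) -> bool:
    return mac.lower() == BROADCAST

print(is_multicast("01:80:c2:00:00:00"))  # True: spanning tree BPDU group address
print(is_multicast("00:1b:21:3a:4f:10"))  # False: an ordinary (hypothetical) unicast address
print(is_broadcast("ff:ff:ff:ff:ff:ff"))  # True
```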
Multicast traffic can be more difficult to deal with than broadcast frames. More sophisticated (and usually more expensive) switches include support for multicast group discovery protocols that make it possible for each station to tell the switch about the multicast group addresses that it wants to hear, so the switch will send the multicast packets only to the ports connected to stations that have indicated their interest in receiving the multicast traffic. However, lower cost switches, with no capability to discover which ports are connected to stations listening to a given multicast address, must resort to flooding multicast packets out all ports other than the port on which the multicast traffic was received, just like broadcast packets.
Stations send broadcast and multicast packets for a number of reasons. High-level network protocols like TCP/IP use broadcast or multicast frames as part of their address discovery process. Broadcasts and multicasts are also used for dynamic address assignment, which occurs when a station is first powered on and needs to find a high-level network address. Multicasts are also used by certain multimedia applications, which send audio and video data in multicast frames for reception by groups of stations, and by multi-user games as a way of sending data to a group of game players.
Therefore, a typical network will have some level of broadcast and multicast traffic. As long as the number of such frames remains at a reasonable level, then there won’t be any problems. However, when many stations are combined by switches into a single large network, broadcast and multicast flooding by the switches can result in significant amounts of traffic. Large amounts of broadcast or multicast traffic may cause network congestion, since every device on the network is required to receive and process broadcasts and specific types of multicasts; at high enough packet rates, there could be performance issues for the stations.
Streaming applications (video) sending high rates of multicasts can generate intense traffic. Disk backup and disk duplication systems based on multicast can also generate lots of traffic. If this traffic ends up being flooded to all ports, the network could congest. One way to avoid this congestion is to limit the total number of stations linked to a single network, so that the broadcast and multicast rate does not get so high as to be a problem.
Another way to limit the rate of multicast and broadcast packets is to divide the network into multiple virtual LANs (VLANs). Yet another method is to use a router, also called a Layer 3 switch. Since a router does not automatically forward broadcasts and multicasts, this creates separate network systems. These methods for controlling the propagation of multicasts and broadcasts are discussed in Chapter 2 and Chapter 3, respectively.
So far we’ve seen how a single switch can forward traffic based on a dynamically-created forwarding database. A major difficulty with this simple model of switch operation is that multiple connections between switches can create loop paths, leading to network congestion and overload.
The design and operation of Ethernet requires that only a single packet transmission path may exist between any two stations. An Ethernet grows by extending branches in a network topology called a tree structure, which consists of multiple switches branching off of a central switch. The danger is that, in a sufficiently complex network, switches with multiple inter-switch connections can create loop paths in the network.
On a network with switches connected together to form a packet forwarding loop, packets will circulate endlessly around the loop, building up to very high levels of traffic and causing an overload.
The looped packets will circulate at the maximum rate of the network links, until the traffic rate gets so high that the network is saturated. Broadcast and multicast frames, as well as unicast frames to unknown destinations, are normally flooded to all ports in a basic switch, and all of this traffic will circulate in such a loop. Once a loop is formed, this failure mode can happen very rapidly, causing the network to be fully occupied with sending broadcast, multicast, and unknown frames, and it becomes very difficult for stations to send actual traffic.
Unfortunately, loops like the dotted path shown with arrows in Figure 1-3 are all too easy to achieve, despite your best efforts to avoid them. As networks grow to include more switches and more wiring closets, it becomes difficult to know exactly how things are connected together and to keep people from mistakenly creating a loop path.
While the loop in the drawing is intended to be obvious, in a sufficiently complex network system it can be challenging for anyone working on the network to know whether or not the switches are connected in such a way as to create loop paths. The IEEE 802.1D bridging standard provides a spanning tree protocol to avoid this problem by automatically suppressing forwarding loops.
The purpose of the spanning tree protocol (STP) is to allow switches to automatically create a loop-free set of paths, even in a complex network with multiple paths connecting multiple switches. It provides the ability to dynamically create a tree topology in a network by blocking any packet forwarding on certain ports, and ensures that a set of Ethernet switches can automatically configure themselves to produce loop-free paths. The IEEE 802.1D standard describes the operation of spanning tree, and every switch that claims compliance with the 802.1D standard must include spanning tree capability.
Operation of the spanning tree algorithm is based on configuration messages sent by each switch in packets called Bridge Protocol Data Units, or BPDUs. Each BPDU packet is sent to a destination multicast address that has been assigned to spanning tree operation. All IEEE 802.1D switches join the BPDU multicast group and listen to frames sent to this address, so that every switch can send and receive spanning tree configuration messages.
The process of creating a spanning tree begins by using the information in the BPDU configuration messages to automatically elect a root bridge. The election is based on a bridge ID (BID) which, in turn, is based on the combination of a configurable bridge priority value (32,768 by default) and the unique Ethernet MAC address assigned on each bridge for use by the spanning tree process, called the system MAC. Bridges send BPDUs to one another, and the bridge with the lowest BID is automatically elected to be the root bridge.
Assuming that the bridge priority was left at the default value of 32,768, then the bridge with the lowest numerical value Ethernet address will be the one elected as the root bridge. In the example shown in Figure 1-4, Switch 1 has the lowest BID, and the end result of the spanning tree election process is that Switch 1 has become the root bridge. Electing the root bridge sets the stage for the rest of the operations performed by the spanning tree protocol.
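A minimal sketch of this election logic follows; it assumes the conventional comparison of bridge IDs as (priority, MAC address) pairs with the lower value winning, and the MAC addresses shown are hypothetical placeholders.

```python
def bridge_id(priority, mac):
    """Bridge ID compared as (priority, MAC); the lowest value wins the election."""
    return (priority, mac)

# Three bridges, all left at the default priority of 32,768 (hypothetical MACs).
bridges = {
    "Switch 1": bridge_id(32768, "00:00:0c:aa:aa:01"),
    "Switch 2": bridge_id(32768, "00:00:0c:aa:aa:02"),
    "Switch 3": bridge_id(32768, "00:00:0c:aa:aa:03"),
}

# With equal priorities, the bridge with the lowest MAC address becomes the root.
root = min(bridges, key=bridges.get)
print(root)  # -> "Switch 1", matching the outcome described for Figure 1-4
```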
Once a root bridge is chosen, each non-root bridge uses that information to determine which of its ports has the least-cost path to the root bridge, and assigns that port to be its root port (RP). Then, for each network segment, the bridge whose port offers the least-cost path to the root becomes the designated bridge (DB) for that segment, and that port becomes a designated port (DP).
The path cost is based on the speed at which the ports operate, with higher speeds resulting in lower costs. As BPDU packets travel through the system, they accumulate information about the number of ports they travel through and the speed of each port. Paths with slower speed ports will have higher costs. The total cost of a given path through multiple switches is the sum of the costs of all the ports on that path.
If there are multiple paths to the root with the same cost, then the path connected to the bridge with the lowest bridge ID will be used.
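The comparison can be illustrated with a short example. The per-speed cost values below are the commonly cited defaults for classic spanning tree and are treated here as assumptions rather than values quoted from the text.

```python
# Path cost illustration: slower ports carry higher costs, and the total cost
# of a path is the sum of its port costs. The per-speed values are assumed
# (commonly cited classic STP defaults), shown only for illustration.
PORT_COST = {10: 100, 100: 19, 1000: 4, 10000: 2}  # port speed in Mbps -> cost

def path_cost(port_speeds_mbps):
    return sum(PORT_COST[speed] for speed in port_speeds_mbps)

# Two candidate paths from some bridge to the root:
path_a = path_cost([1000, 1000])   # two gigabit hops -> cost 8
path_b = path_cost([100, 1000])    # one 100 Mbps hop plus one gigabit hop -> cost 23
best = min((path_a, "path A"), (path_b, "path B"))
print(best)  # -> (8, 'path A'): the all-gigabit path is chosen as the least-cost path
```

If the two totals came out equal, the tie would be broken by the lower bridge ID, as noted above.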
At the end of this process, the bridges have chosen a set of root ports and designated ports, making it possible for the bridges to remove all loop paths and maintain a packet forwarding tree that spans the entire set of devices connected to the network, hence the name “spanning tree protocol.”
Once the spanning tree process has determined the port status, then the combination of root ports and designated ports provides the spanning tree algorithm with the information it needs to identify the best paths and block all other paths. Packet forwarding on any port that is not a root port or a designated port is disabled by blocking the forwarding of packets on that port.
While blocked ports do not forward packets, they continue to receive BPDUs. The blocked port is shown in Figure 1-4 with a “B,” indicating that port 10 on Switch 3 is in blocking mode and that the link is not forwarding packets. The Rapid Spanning Tree Protocol (RSTP) sends BPDU packets every two seconds to monitor the state of the network, and a blocked port may become unblocked when a path change is detected.
When an active device is connected to a switch port, the port goes through a number of states as it processes any BPDUs that it might receive, and the spanning tree process determines what state the port should be in at any given time. Two of the states are called listening and learning, during which the spanning tree process listens for BPDUs and also learns source addresses from any frames received.
Figure 1-5 shows the spanning tree port states, which include blocking, listening, learning, and forwarding.
In the original spanning tree protocol, the listening and learning states lasted for 30 seconds, during which time packets were not forwarded. In the newer Rapid Spanning Tree Protocol, it is possible to assign a port type of “edge” to a port, meaning that the port is known to be connected to an end station (user computer, VoIP telephone, printer, etc.) and not to another switch. That allows the RSTP state machine to bypass the learning and listening processes on that port and to transition to the forwarding state immediately. Allowing a station to immediately begin sending and receiving packets helps avoid such issues as application timeouts on user computers when they are rebooted. While not required for RSTP operation, it is useful to manually configure RSTP edge ports with their port type, to avoid issues on user computers. Setting the port type to edge also means that RSTP doesn’t need to send a BPDU packet upon link state change (link up or down) on that port, which helps reduce the amount of spanning tree traffic in the network.
The inventor of the spanning tree protocol, Radia Perlman, wrote a poem to describe how it works. When reading the poem it helps to know that in math terms, a network can be represented as a type of graph called a mesh, and that the goal of the spanning tree protocol is to turn any given network mesh into a tree structure with no loops that spans the entire set of network segments.
I think that I shall never see
A graph more lovely than a tree.
A tree whose crucial property
Is loop-free connectivity.
A tree that must be sure to span
So packets can reach every LAN.
First, the root must be selected.
By ID, it is elected.
Least cost paths from root are traced.
In the tree, these paths are placed.
A mesh is made by folks like me,
Then bridges find a spanning tree.
— Radia Perlman, "Algorhyme"
This brief description is only intended to provide the basic concepts behind the operation of the system. As you might expect, there are more details and complexities that are not described. The complete details of how the spanning tree state machine operates are described in the IEEE 802.1 standards, which can be consulted for a more complete understanding of the protocol and how it functions. The details of vendor-specific spanning tree enhancements can be found in the vendor documentation. See Appendix A for links to further information.
The original spanning tree protocol, standardized in IEEE 802.1D, specified a single spanning tree process running on a switch, managing all ports and VLANs with a single spanning tree state machine. Nothing in the standard prohibits a vendor from developing their own enhancements to how spanning tree is deployed. Some vendors created their own implementations, in one case providing a separate spanning tree process per VLAN. That approach was taken by Cisco Systems for a version they call per-VLAN spanning tree (PVST).
The IEEE standard spanning tree protocol has evolved over the years. An updated version, called the Rapid Spanning Tree Protocol, was defined in 2004. As the name implies, Rapid Spanning Tree has increased the speed at which the protocol operates. RSTP was designed to provide backward compatibility with the original version of spanning tree. The 802.1Q standard includes both RSTP and a new version of spanning tree called Multiple Spanning Tree (MST), which is also designed to provide backward compatibility with previous versions. MST is discussed further in Virtual LANs.
When building a network with multiple switches, you need to pay careful attention to how the vendor of your switches has deployed spanning tree, and to the version of spanning tree your switches use. The most commonly used versions, classic STP and the newer RSTP, are interoperable and require no configuration, resulting in “plug and play” operation.
Before putting a new switch into operation on your network, read the vendor’s documentation carefully and make sure that you understand how things work. Some vendors may not enable spanning tree as a default on all ports. Other vendors may implement special features or vendor-specific versions of spanning tree. Typically, a vendor will work hard to make sure that their implementation of spanning tree “just works” with all other switches, but there are enough variations in spanning tree features and configuration that you may encounter issues. Reading the documentation and testing new switches before deploying them throughout your network can help avoid any problems.
A single full-duplex Ethernet connection is designed to move Ethernet frames between the Ethernet interfaces at each end of the connection. It operates at a known bit rate and a known maximum frame rate. All Ethernet connections at a given speed will have the same bit rate and frame rate characteristics. However, adding switches to the network creates a more complex system. Now, the performance limits of your network become a combination of the performance of the Ethernet connections and the performance of the switches, as well as of any congestion that may occur in the system, depending on topology. It’s up to you to make sure that the switches you buy have enough performance to do the job.
The performance of the internal switching electronics may not be able to sustain the full frame rate coming in from all ports. In other words, should all ports simultaneously present high traffic loads to the switch that are also continual and not just short bursts, the switch may not be able to handle the combined traffic rate and may begin dropping frames. This is known as blocking, the condition in a switching system in which there are insufficient resources available to provide for the flow of data through the switch. A non-blocking switch is one that provides enough internal switching capability to handle the full load even when all ports are simultaneously active for long periods of time. However, even a non-blocking switch will discard frames when a port becomes congested, depending on traffic patterns.
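As a rough, hypothetical worked example of what "enough internal switching capability" means: for an 8-port gigabit switch, counting both directions of full-duplex traffic (a common vendor convention) gives the aggregate fabric capacity needed to remain non-blocking.

```python
# Rough non-blocking capacity estimate for a hypothetical 8-port gigabit switch,
# using the common convention of counting both directions of full-duplex traffic.
ports = 8
port_speed_gbps = 1          # each port runs at 1 Gbps
full_duplex_factor = 2       # traffic can flow in both directions at once

required_fabric_gbps = ports * port_speed_gbps * full_duplex_factor
print(required_fabric_gbps)  # -> 16: fabric capacity needed to carry full load on all ports
```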
Typical switch hardware has dedicated support circuits that are designed to help improve the speed with which the switch can forward a frame and perform such essential functions as looking up frame addresses in the address filtering database. Because support circuits and high-speed buffer memory are more expensive components, the total performance of a switch is a trade-off between the cost of those high performance components and the price most customers are willing to pay. Therefore, you will find that not all switches perform alike.
Some less expensive devices may have lower packet forwarding performance, smaller address filtering tables, and smaller buffer memories. Larger switches with more ports will typically have higher performance components and a higher price tag. Switches capable of handling the maximum frame rate on all of their ports, also described as non-blocking switches, are capable of operating at wire speed. Fully non-blocking switches that can handle the maximum bit rate simultaneously on all ports are common these days, but it’s always a good idea to check the specifications for the switch you are considering.
The performance required and the cost of the switches you purchase can vary depending on their location in the network. The switches you use in the core of a network need to have enough resources to handle high traffic loads. That’s because the core of the network is where the traffic from all stations on the network converges. Core switches need to have the resources to handle multiple conversations, high traffic loads, and long duration traffic. On the other hand, the switches used at the edges of a network can be lower performance, since they are only required to handle the traffic loads of the directly connected stations.
All switches contain some high-speed buffer memory in which a frame is stored, however briefly, before being forwarded onto another port or ports of the switch. This mechanism is known as store-and-forward switching. All IEEE 802.1D-compliant switches operate in store-and-forward mode, in which the packet is fully received on a port and placed into high-speed port buffer memory (stored) before being forwarded. A larger amount of buffer memory allows a bridge to handle longer streams of back-to-back frames, giving the switch improved performance in the presence of bursts of traffic on the LAN. A common switch design includes a pool of high-speed buffer memory that can be dynamically allocated to individual switch ports as needed.
Given that a switch is a special-purpose computer, the central CPU and RAM in a switch are important for such functions as spanning tree operations, providing management information, managing multicast packet flows, and managing switch port and feature configuration.
As usual in the computer industry, the more CPU performance and RAM, the better, but you will pay more as well. Vendors frequently do not make it easy for customers to find switch CPU and RAM specifications. Typically, higher cost switches will make this information available, but you won’t be able to order a faster CPU or more RAM for a given switch. Instead, this is information useful for comparing models from a vendor, or among vendors, to see which switches have the best specifications.
Switch performance includes a range of metrics, including the maximum bandwidth, or switching capacity of the packet switch electronics, inside the switch. You should also see the maximum number of MAC addresses that the address database can hold, as well as the maximum rate in packets per second that the switch can forward on the combined set of ports.
Shown here is a set of switch specifications copied from a typical vendor’s data sheet. The vendor’s specifications are shown in bold type. To keep things simple, in our example we show the specifications for a small, low-cost switch with five ports. This is intended to show you some typical switch values, and also to help you understand what the values mean and what happens when marketing and specifications meet on a single page.
Some switches designed for use in data centers and other specialized networks support a mode of operation called cut-through switching, in which the packet forwarding process begins before the entire packet is read into buffer memory. The goal is to reduce the time required to forward a packet through the switch. This method also forwards packets with errors, since it begins forwarding a packet before the error checking field is received.
This set of vendor specifications shows you what port speeds the switch supports and gives you an idea of how well the switch will perform in your system. When buying larger and higher-performance switches intended for use in the core of a network, there are other switch specifications that you should consider. These include support for extra features like multicast management protocols, command line access to allow you to configure the switch, and the Simple Network Management Protocol to enable you to monitor the switch’s operation and performance.
When using switches, you need to keep your network traffic requirements in mind. For example, if your network includes high-performance clients that place demands on a single server or set of servers, then whatever switch you use must have enough internal switching performance, high enough port speeds and uplink speeds, and sufficient port buffers to handle the task. In general, the higher-cost switches with high-performance switching fabrics also have good buffering levels, but you need to read the specifications carefully and compare different vendors to ensure that you are getting the best switch for the job.
The most recent version of the 802.1D bridging standard is dated 2004. The 802.1D standard was extended and enhanced by the subsequent development of the 802.1Q-2011 standard, “Media Access Control (MAC) Bridges and Virtual Bridge Local Area Networks.”
The Preamble field at the beginning of the frame is automatically stripped off when the frame is received on an Ethernet interface, leaving the Destination Address as the first field.
The TCP/IP network protocol is based on network layer packets. The TCP/IP packets are carried between computers in the data field of Ethernet frames. In essence, Ethernet functions as the trucking system that transports TCP/IP packets between computers, carried as data in the Ethernet frame. You will also hear Ethernet frames referred to as “packets,” but as far as the standards are concerned, Ethernet uses frames to carry data between computers.
Any Ethernet system still using coaxial cable segments and/or repeater hubs may have multiple stations on a network segment. Connecting that segment to a switch will result in multiple stations being reachable over a single port.
Suppressing frame transmission on the switch port prevents stations on a shared segment connected to that port from seeing the same traffic more than once. This also prevents a single station on a port from receiving a copy of the frame it just sent.
Both Layer 3 networks and VLANs create separate broadcast domains. Broadcasts and link layer multicasts are not automatically forwarded between networks by routers, and each VLAN operates as a separate and distinct LAN. Therefore, both routers and VLANs provide separate broadcast domains that limit the propagation of broadcasts and multicasts in a complex network system.
Beware that low-cost switches may not include spanning tree capability, rendering them unable to block any packet forwarding loops. Also, some vendors that provide spanning tree may disable it by default, requiring you to manually enable spanning tree before it will function to protect your network.
The bridge multicast group MAC address is 01-80-C2-00-00-00. Vendor-specific spanning tree enhancements may also use other addresses. For example, Cisco per-VLAN spanning tree (PVST) sends BPDUs to address 01-00-0C-CC-CC-CD.
It may happen that a low-performance bridge on your network will have the lowest MAC address and end up as the root bridge. You can configure a lower bridge priority on your core bridge to ensure that the core bridge is chosen to be the root, and that the root will be located at the core of your network and running on the higher-performance switch located there.
Prior to the development of RSTP, some vendors had developed their own versions of this feature. Cisco Systems, for example, provided the “portfast” command to enable an edge port to immediately begin forwarding packets.
Perlman, Radia. Interconnections: Bridges, Routers, Switches and Internetworking Protocols (2nd Edition), New York: Addison-Wesley, 1999, p. 46.
The IEEE 802.1Q standard notes that: “The spanning tree protocols specified by this standard supersede the Spanning Tree Protocol (STP) specified in IEEE Std 802.1D revisions prior to 2004, but facilitate migration by interoperating with the latter…”
For example, a 100 Mbps Ethernet LAN can send a maximum of 148,809 frames per second, when using the minimum frame size of 64 bytes.
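That figure follows from the per-frame overhead on the wire: each minimum-size frame occupies 64 frame bytes plus 8 bytes of preamble and a 12-byte interframe gap. The quick calculation below reproduces the number.

```python
# Maximum frame rate for 100 Mbps Ethernet with minimum-size (64-byte) frames.
bit_rate = 100_000_000                 # 100 Mbps
frame_bytes = 64 + 8 + 12              # frame + preamble + interframe gap = 84 bytes
bits_per_frame = frame_bytes * 8       # 672 bits on the wire per frame
print(bit_rate // bits_per_frame)      # -> 148809 frames per second
```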
If switch vendors marketed automobiles, then presumably they would market a car with a speedometer topping out at 120 mph as being a vehicle that provides an aggregate speed of 480 mph, since each of the four wheels can reach 120 mph at the same time. This is known as “marketing math” in the network marketplace.
Jumbo frames can be made to work locally for a specific set of machines that you manage and configure. However, the Internet consists of billions of Ethernet ports, all operating with the standard maximum frame size of 1,500 bytes. If you want things to work well over the Internet, stick with standard frame sizes.
Extend a Pattern
In this color blocks worksheet, students explore patterns and sequences. Students determine the number of combinations that can be made before they repeat. This one-page worksheet contains 1 multi-step problem.
Growing Patterns and Sequences
Learners explore, discover, compare, and contrast arithmetic and geometric sequences in this collaborative, hands-on activity. They build and analyze growing patterns to distinguish which kind of sequence is represented by a set of data...
6th - 7th Math CCSS: Adaptable
Module 3: Arithmetic and Geometric Sequences
Natural human interest in patterns and algebraic study of function notation are linked in this introductory unit on the properties of sequences. Once presented with a pattern or situation, the class works through how to justify...
8th - 10th Math CCSS: Adaptable
Graphing Patterns: Student Activity Lesson Plan
Your future graphers battle their way to a complete understanding of plotting points on a coordinate grid. Using analogies of sinking battleships and finding sunken treasure, learners collaboratively and independently explore graphing...
6th - 8th Math CCSS: Adaptable
Stitching Quilts into Coordinate Geometry
Who knew quilting would be so mathematical? Introduce linear equations and graphing while working with the lines of pre-designed quilts. Use the parts of the design to calculate the slope of the linear segments. The project comes with...
8th - 11th Math CCSS: Adaptable
Tile Patterns I: Octagons and Squares
This can be used as a critical thinking exercise in congruence or as a teaching tool when first introducing the concept. Four octagons are arranged in such a way that a square is formed in the middle. With this information, geometry...
7th - 9th Math CCSS: Designed
European colonization of North America began in the 15th century, when John Cabot reached the northern part of the island of Newfoundland and declared it a possession of the English crown. The encounter between Europeans and the indigenous peoples of North America involved both unequal conflict and interaction. As a result of this racial and cultural mixing, a new civilization was born and began to develop. The colonists did not find gold in the new lands; therefore, their settlement differed from the Spanish conquest and was, at its first stage, peaceful.
Before the colonial period, up to 400 tribes lived in the territory of the future United States of America. At first, when the number of colonizers was not as high as it became by the late 17th century and their villages occupied small areas on the ocean coast, the colonizers and the Indians lived peacefully. The Indians were friendly to the newcomers, supplied them with food, and bartered with them. Nevertheless, as the number of settlers grew, they began to move into the interior of the country and to push the Indians off their lands. The natives were unfamiliar with commodity-money relations, had no concept of the value of land or of land ownership rights, and sold vast areas of land for a pittance. Indian lands were seized by farmers or by companies of speculators. Under the influence of missionaries, the entire system of values and principles of Indian life was transformed. The colonists brought to North America previously unknown diseases and alcohol, which killed much of the local population (“Spanish Colonization”). Inter-tribal feuds also hampered the unity of Indian resistance to the Europeans. As a result, all subsequent colonization of America was linked to war with the Indians and to their destruction. Indian tribes were detached from one another, fragmented into dozens of small groups, and placed on reservations. With the expansion of capitalist colonization, especially after the discovery of gold in California (1848), the majority of the indigenous population was wiped out.
The Spaniards were the first Europeans to come to America, and until the middle of the 16th century they were the main explorers of North America. The British, Portuguese, and French also made significant discoveries on the Atlantic coast of North America. Spanish possessions stretched from Cape Horn to New Mexico and brought huge profits to the royal treasury. The Spanish conquerors captured Indians and plundered and burned their villages, and the Indians tried to resist them (“Colonial Settlement, 1600s-1763”). By the 17th century, Spanish settlements occupied quite a large area on the Atlantic coast of North America (Florida, Georgia, North Carolina), as well as along the Gulf of Mexico. In the West, they held California and areas corresponding to the present-day states of Texas, Arizona, and New Mexico. From the middle of the 16th century, the conquerors began to develop new mines. All conquered lands became the property of the Spanish king, and their inhabitants became his subjects. Numerous missionaries arrived overseas to convert the Indians to the Catholic faith. The discovery of the New World and the emergence of Spanish and Portuguese possessions there halted the independent development of the native peoples and laid the foundation of their colonial dependence. In the 17th century, the balance of power in the Old World changed, and France and England ousted Spain from colonial leadership in North America. Thereafter, the most serious contenders for supremacy in the American colonies were England, Holland, and France.
Although the French and Dutch were among the pioneers of the colonization of North America, they did not manage to gain a firm foothold in these new lands. Their merchants bought up furs hunted by the Indians and made huge profits, but the French did not create large settlements across the ocean. French and Dutch colonization never became a mass movement. French farmers were firmly attached to their land and did not seek to go overseas. The French founded small settlements and trading posts and imposed feudal obligations on the peasants. French merchants and landowners bought furs from the Indians for a pittance, so the French did not force the Indians from their homes. The majority of French and Dutch colonists were adventurers and people who could not find a place in the economies of their home countries, and they could not create the basis for economic development in the New World. French colonization was very unstable: in 1562, for example, the French founded Charlesfort but abandoned it for lack of provisions, and the same happened with Fort St. Louis. New France was the most developed French colony. The French built many churches there and made the clergy responsible for the colony's secular affairs. The French colonies were short of entrepreneurs, as well as of blacksmiths, woodcutters, coopers, and carpenters. Even the basic development of the colonies proceeded very slowly. The French treasury allocated very little money to its possessions in North America, which limited the success of French colonization of the New World.
Dutch colonization of North America began in 1621 with the founding of the Dutch West India Company. In 1624, Dutch fur traders founded a province on the island of Manhattan, known as New Netherland, with New Amsterdam as its main city. The land on which the city was founded was bought from the Indians in 1626 by the Dutch colonists for only $24. Unfortunately, the Dutch did not manage to achieve any significant socio-economic development of their colony in the New World; they were interested only in the fur trade (“Dutch Colonies”). Dutch farmers were in no hurry to go to the New World, and the population of Holland was not large. Neither France, Spain, nor the Netherlands produced a mass flow of peasants to the colonies. Most colonists from these countries were merchants, capitalists, and wealthy aristocratic entrepreneurs. Having received royal charters, they were not interested in peasant migration to the colonies; on the contrary, they tried to prevent mass peasant colonization of their American possessions. The capitalists and entrepreneurs in the French and Dutch colonies founded trading posts that usually had a military garrison and considered their main task to be a profitable fur trade with the Indians. Given all these factors, none of these three countries was able to take root in its new possessions, and each collapsed under pressure from England.
England began to colonize America later than the other European countries. The British were not looking for rich deposits of gold and silver, as the Spaniards were, or for markets for the purchase and export of rare and valuable commodities, as the Portuguese and Dutch were. They sought free land suitable for cultivation, and North America was exactly what they were looking for. The British established their first permanent settlement in 1607 at the mouth of the James River (Grigg & Mancall, 190). Subsequently, new settlements emerged to the north and south along the coast, from Spanish Florida to New England. Each of these colonies formed independently, with its own access to the sea. Initially, the founders of the first colonies were trading companies that arranged the transportation and settlement of colonists in the new territories, and members of the bourgeoisie who bought or received land in the New World from the king. The companies and landowners had the right to appoint governors and to collect taxes.
In the English colonies, everyone could rent a small plot of land or settle in undeveloped areas. The colonists felt freer in the New World than in England itself. In the new land, there were none of the traditional practices that complicated the life of an ordinary Englishman. All the achievements of the colonists were the result of their own hard work, which is why they quickly became independent of the companies and the landowners. In addressing their problems, the colonists preferred self-organization, which quickly led to the democratization of all aspects of social life in the English colonies. Meetings, discussions of the governors' orders, and debates over laws passed by the British Parliament became a norm of life.
Each colony had its own procedures and practices. In the southern colonies, there was a significant population of black slaves who worked on large plantations. In the northern colonies, free labor prevailed over slavery. However, Puritan morality strictly regulated the behavior of the northern colonists: gambling was banned, and the population strictly followed religious rules. Among the English possessions were some colonies where the church was separated from the state and all citizens had equal rights. The colonists' attitudes toward the indigenous population also varied. In the Puritan colonies, the clergy believed that the natives were possessed by the spirit of Satan and tried to destroy it; in other colonies, the attitude toward the natives was more tolerant.
In economic development, the English colonies also differed significantly from one another. The plantation economy was highly developed in the southern colonies, which became the main suppliers of tobacco, sugar cane, rice, cotton, and other crops. In the northern colonies, the most developed sectors were farming and various crafts. The economic development of the colonies was hindered by the mother country. The colonists were forbidden to process iron and hides, and England held back the development of manufacturing. Trade with the colonies brought huge profits to the mother country, and the colonies became a main driving force of growing British industry. Several models of development coexisted in the British colonies: manufacturing capitalism; slavery as a mode of production within capitalism; feudal relations; and independent farming (“The Economies of the British”).
“Colonial Settlement, 1600s-1763”. Library of Congress. Web. Accessed 06 Apr 2016.
“Dutch Colonies”. National Park Service. Web. Accessed 06 Apr 2016.
Grigg, John A., and Peter C. Mancall. “British Colonial America: People and Perspectives”.
“Spanish Colonization”. Digital History, University of Houston. Web. Accessed 06 Apr 2016.
“The Economies of the British North American Colonies in 1763”. San Jose State University. Web. Accessed 06 Apr 2016 at <http://www.sjsu.edu/faculty/watkins/colonies1763.htm>
Excerpt from the Jay Treaty
Signed on November 19, 1794
Published in Documents of American History, edited by
Henry S. Commager, 1943
Signed in November 1794, the Jay Treaty (also known as Jay's Treaty) was essentially a truce between Britain and the United States, with each country agreeing to put an end to their hostilities and honor promises made in earlier treaties. When the terms of the Jay Treaty were made public, political division within the United States increased significantly. The two U.S. political parties, the Federalists and the Democratic-Republicans, supported opposing sides in the European war between France and Britain. France had declared war on Britain in 1793, hoping to start a rebellion of the working-class British against the British aristocracy (upper class), similar to the rebellion that was already occurring in France. The Democratic-Republicans remained loyal to France for its support of American forces during the American Revolution (1775–83), when the colonies were fighting for independence from Britain. Democratic-Republicans viewed the French Revolution as similar to the American Revolution and supported the French rebellion. They called for the United States to join France in its war against Britain.
The Federalists, who were largely merchants, manufacturers, and shippers, wanted the United States to strengthen its ties with Britain in order to increase trade. Federalists opposed war with Britain. They believed another war with Britain would be economically disastrous for the United States. However, the Federalists found it difficult to win any support for their cause, because Britain was purposely disrupting American trade with other countries. Between 1793 and 1794, the British seized nearly three hundred American ships that were trading with islands in the French West Indies. The British also captured U.S. sailors and forced them into military service aboard British warships; this practice was known as impressment. These seizures of ships and sailors infuriated Americans.
Another lingering issue causing American contempt for the British was their continued occupation of fur-trading forts in the Old Northwest. The Old Northwest included land north of the Ohio River to the Great Lakes and Canada, east to Pennsylvania's western border, and west to the Mississippi River. British forts included Mackinac, Detroit, Niagara, Oswegatchie, and Oswego on Lake Ontario, and Dutchman's Point on Lake Champlain. From the forts, British troops supplied Native Americans with guns and encouraged them to resist American settlement. Under the terms of the 1783 Treaty of Paris (the peace agreement that ended the American Revolution and granted the United States independence from Britain), the British were to abandon these forts. Instead, the British had remained, and their support of Native American hostilities had almost completely halted American settlement in the region.
Another unresolved issue with Britain involved slaves. During the American Revolution, the British offered freedom to slaves if they could escape, manage to reach British lines, and join the British army. Many slaves took this opportunity to gain their freedom. Some thirty thousand fled Virginia, and twenty-five thousand left South Carolina. When the war ended, Southern planters demanded payment from the British for the loss of these slaves. However, despite repeated demands, the British had not paid them anything.
In 1794, President George Washington (1732–1799; served 1789–97) sent Supreme Court chief justice John Jay (1745–1829) to Britain to negotiate resolutions to the difficulties in order to avoid war with Britain. Jay was an experienced negotiator. He had been part of the U.S. delegation that negotiated the 1783 Treaty of Paris. This time, Jay knew he and the United States were in a weak bargaining position, because the United States had almost no military capability to force the issues.
The treaty Jay managed to negotiate, the Jay Treaty, included a number of British concessions, or compromises. Britain once again promised to withdraw from its forts in the Old Northwest (Article 2). Britain agreed to pay for damages caused to U.S. ships recently seized (Article 7). In return, the United States agreed to repay some debts to Britain (Article 6). Britain also opened up British markets for trade with the United States (Articles 11–17).
Although the British had given in on certain issues, many Americans were outraged with the terms of the treaty. There were those who did not believe that the British would abandon the Old Northwest forts, because Britain had failed to follow through on the same promise eleven years earlier. The two issues that bothered Americans the most, impressment of sailors and the stealing of slaves, were not addressed in the treaty. Britain did agree to pay some damages to shippers for the loss of their vessels and opened up trade for the United States with Britain and British territories. However, the treaty made no guarantees that Britain would stop seizing U.S. ships that were trading with countries such as France.
To many Americans, the Jay Treaty seemed like a Federalist sellout to the British for more trade. Federalist shippers would be paid back for seizure of their vessels, and Federalist merchants and manufacturers would have British markets open to them; however, there was little in the treaty for anyone else. The treaty did draw the United States and Britain closer together than they had been since the end of the American Revolution in 1783.
The French were also upset by the terms of the Jay Treaty. The U.S. alignment with Britain seemed in defiance of an earlier treaty between the United States and France. Article 8 of the 1778 Treaty of Alliance with France stated that neither the United States nor France could make a truce or peace with Britain "without the formal consent of the other." The pro-French Democratic-Republicans were disgusted, because the United States appeared to be breaking its promise to France. More and more Americans adopted an anti-British, anti-Federalist view and moved to the pro-French Democratic-Republican side.
Knowing the Jay Treaty would be controversial, President Washington tried to keep the terms of the treaty secret. He even called a secret session of the Senate on June 8, 1795, to consider ratification (approval). Copies of the treaty leaked out, and intense public debate took place. Nonetheless, the Senate, dominated by Federalists, ratified the treaty on August 14, 1795. Despite substantial public opposition, Washington signed the treaty shortly afterward, with strong encouragement from Alexander Hamilton (1755–1804), the secretary of the treasury and leader of the Federalists.
Things to remember while reading excerpts from the Jay Treaty:
- In the Jay Treaty, Britain made some concessions by agreeing to leave the Old Northwest fur-trading posts, open trade, and reimburse U.S. shippers for seized vessels. The treaty averted war between the United States and Britain. However, it did not guarantee that the British would stop capturing U.S. sailors and seizing U.S. ships trading with countries other than Britain. The treaty also ignored the issue of compensation for stolen slaves. Therefore, overall the treaty was a bitter disappointment to a majority of Americans.
- Articles 2, 3, and 4 announced that the British would leave Old Northwest fur-trading posts, that all inhabitants including Native Americans could navigate the lakes and rivers and pass freely through the region, and that a boundary would be established between the United States and British-controlled Canada.
- Articles 6 and 7 promised British payment of damages to U.S. merchants and citizens and U.S. payment of debts owed to British merchants and other British lenders.
- Articles 11, 13, 14, and 15 opened limited trade markets. U.S. ships were allowed to trade with Britain and British territories in the East Indies (Southeast Asia). However, any U.S. ship trading in the East Indies could only bring goods back to U.S. ports and could not go to any other country.
Excerpt from the Jay Treaty
ART. I. There shall be a firm ... peace, and a true and sincere friendship between his Britannic Majesty, his heirs and successors, and the United States of America; and between their respective countries, territories, cities, towns and people of every degree, without exception of persons or places.
ART. II. His Majesty will withdraw all his troops and garrisons from all posts and places within the boundary lines assigned by the treaty of peace to the United States. This evacuation shall take place on or before [June 1, 1796] ...: The United States in the mean time at their discretion extending their settlements to any part within the said boundary line. ... All settlers and traders, within the precincts or jurisdiction of the said posts, shall continue to enjoy, unmolested, all their property of every kind, and shall be protected therein. ...
ART. III. It is agreed that it shall at all times be free to his Majesty's subjects, and to the citizens of the United States, and also to the Indians dwelling on either side of the said boundary line freely to pass and repass by land, or inland navigation, into the respective territories and countries of the two parties on the continent of America ... and to navigate all the lakes, rivers, and waters thereof, and freely to carry on trade and commerce with each other. ... The river Mississippi, shall however, according to the treaty of peace be entirely open to both parties; and it is further agreed, that all the ports and places on its eastern side ... may freely be ... used by both parties, in as ample a manner as any of the Atlantic ports or places of the United States, or any of the ports or places of his Majesty in Great Britain. ...
ART. IV. Whereas it is uncertain whether the river Mississippi extends so far to the northward, as to be intersected by a line to be drawn due west from the Lake of the Woods ...; the two parties will thereupon proceed by amicable negotiation, to regulate the boundary line in that quarter. ...
ART. VI. Whereas it is alleged by ... British merchants and others ... that debts, to a considerable amount, which were bona fide contracted before the peace, still remain owing to them by citizens or inhabitants of the United States, and that by the operation of various lawful impediments since the peace, not only the full recovery of the said debts has been delayed, but also the value and security thereof have been, in several instances, impaired and lessened, so that by the ordinary course of judicial proceedings, the British creditors cannot now obtain ... full and adequate compensation for the losses and damages, which they have thereby sustained. It is agreed, that in all such cases where full compensation for such losses and damages cannot, for whatever reason, be actually obtained, had and received by the said creditors in the ordinary course of justice, the United States will make full and complete compensation for the same to the said creditors. ...
ART. VII. Whereas complaints have been made by ... merchants and others, citizens of the United States, that during the course of the war in which his Majesty is now engaged, they have sustained considerable losses and damage, by reason of irregular or illegal captures ... of their vessels and other property, under colour of authority or commissions from his Majesty, and that from various circumstances belonging to the said cases adequate compensation for the losses and damages so sustained cannot now be actually obtained ... by the ordinary course of judicial proceedings; it is agreed that in all such cases, where adequate compensation cannot, for whatever reason, be now actually obtained, had and received by the said merchants and others, in the ordinary course of justice, full and complete compensation for the same will be made by the British government to the said complainants. ...
ART. XI. It is agreed between his Majesty and the United States of America, that there shall be a reciprocal and entirely perfect liberty of navigation and commerce, between their respective people, in the manner, under the limitations, and on the conditions specified in the following articles:
[Art. XII, relating to trade with the West Indies, was suspended.]
ART. XIII. His Majesty consents that the vessels belonging to the citizens of the United States of America, shall be admitted and hospitably received, in all the sea ports and harbours of the British territories in the East Indies: and that the citizens of the said United States, may freely carry on a trade between the said territories and the said United States, in all articles of which the importation or exportation respectively, to or from the said territories, shall not be entirely prohibited. ... The citizens of the United States shall pay for their vessels when admitted into the said ports no other or higher tonnage duty than shall be payable on British vessels when admitted into the ports of the United States. And they shall pay no other or higher duties or charges on the importation or exportation of the cargoes of the said vessels, than shall be payable on the same articles when imported or exported in British vessels. But it is expressly agreed, that the vessels of the United States shall not carry any of the articles exported by them from the said British territories, to any port or place, except to some Port or Place in America, where the same shall be unladen, and such regulations shall be adopted by both parties, as shall from time to time be found necessary to enforce the due and faithful observance of this stipulation. ...
ART. XIV. There shall be between all the dominions of his Majesty in Europe, and the territories of the United States, a reciprocal and perfect liberty of commerce and navigation. The people and inhabitants of the two countries respectively, shall have liberty freely and securely, and without hindrance and molestation, to come with their ships and cargoes to the lands, countries, cities, ports, places and rivers, within the dominions and territories aforesaid, to enter into the same, to resort there, and to remain and reside there, without any limitation of time. Also to hire and possess houses and warehouses for the purposes of their commerce, and generally the merchants and traders on each side, shall enjoy the most complete protection and security for their commerce. ...
ART. XV. It is agreed, that no other or higher duties shall be paid by the ships or merchandize of the one party in the ports of the other, than such as are paid by the like vessels or merchandize of all other nations. Nor shall any other or higher duty be imposed in one country on the importation of any articles the growth, produce or manufacture of the other, than are or shall be payable on the importation of the like articles being of the growth, produce, or manufacture of any other foreign country. Nor shall any prohibition be imposed, on the exportation or importation of any articles to or from the territories of the two parties respectively, which shall not equally extend to all other nations. ...
... it is agreed, that the United States will not impose any new or additional tonnage duties on British vessels. ...
ART. XVII. It is agreed that, in all cases where vessels shall be captured or detained on just suspicion of having on board enemy's property, or of carrying to the enemy any of the articles which are contraband of war; the said vessel shall be brought to the nearest or most convenient port, and if any property of an enemy should be found on board such vessel, that part only which belongs to the enemy shall be made prize, and the Vessel shall be at liberty to proceed with the remainder without any impediment. ...
ART. XIX. And that more abundant care may be taken for the security of the respective subjects and citizens of the contracting parties, and to prevent their suffering injuries by the men of war, or privateers of either party, all commanders of ships of war and privateers, and all others the said subjects and citizens shall forbear doing any damage to those of the other party, or committing any outrage against them, and if they act to the contrary, they shall be punished, and shall also be bound in their persons and estates to make satisfaction and reparation for all damages. ...
ART. XXII. It is expressly stipulated, that neither of the said contracting parties will order or authorize any acts of reprisal against the other, on complaints of injuries or damages, until the said party shall first have presented to the other a Statement thereof, verified by competent proof and evidence, and demanded justice and satisfaction, and the same shall either have been refused or unreasonably delayed. ...
What happened next...
Surprising many Americans, the British did begin abandoning their Old Northwest forts. Therefore, the Jay Treaty, along with key U.S. military victories over Native Americans in the Old Northwest, opened up settlement across the Appalachian Mountains north of the Ohio River. The treaty also opened the door for a treaty with Britain's ally, Spain. Spain had agreed to help Britain fight against France in the European war. However, by 1795, Spain was out of money and ready to pull out of its commitment. The Spanish feared that backing out of the alliance would jeopardize their holdings in America; an angry Britain might attack and take over these lands. Spain controlled land west of the Mississippi River, all travel and trade going up and down the river, and the port of New Orleans. Spain also controlled Florida, which at that time included a strip of land running west along the Gulf of Mexico to present-day Louisiana. In addition, Spain claimed a large strip of land west of Georgia called the Yazoo Strip, land that the United States also claimed.
For years Spain had harassed western settlers who needed to transport goods and supplies on the Mississippi River. Spain collected fees on shipments going and coming through New Orleans. Spain also angered the United States by supplying guns and ammunition to Native Americans who were resisting U.S. settlement in the Yazoo Strip.
Spain decided the best way to prevent Britain and the United States from joining together to attack New Orleans was to gain U.S. friendship. The United States sent diplomat Thomas Pinckney (1750–1828) to Spain for negotiations. On October 27, 1795, the Treaty of San Lorenzo, also known as the Pinckney Treaty, was signed. Spain gave up all claims to the Yazoo Strip and agreed to stop supporting Native American resistance against U.S. settlements. Even more important, the treaty granted Americans free use of the Mississippi River, eliminating fees at New Orleans. With the Jay Treaty and the Treaty of San Lorenzo in place, American settlement of the western lands accelerated rapidly.
Although shipping was risky for the United States in the mid- to late 1790s, trade continued to grow. The Jay Treaty opened up trade with Britain, and the Pinckney Treaty opened the Mississippi River for easy transportation of U.S. goods grown and produced west of the Appalachian Mountains. These goods were then shipped around Florida to the U.S. Atlantic coast and to foreign countries. While Britain and France continued their war, they ignored many of their worldwide trading partners. The United States filled the void. American ships replaced British and French ships in trade with the West Indies and European markets, and the U.S. economy grew rapidly. The bustling ports of New England, New York, and Philadelphia exported U.S. goods and goods that were brought up from the West Indies, shipping them to many foreign destinations. Meanwhile, Britain and France continued to seize U.S. ships, since the Jay Treaty only applied to trade between Britain and the United States. So Britain's navy could still seize U.S. ships bound for France and other countries, and France could still seize U.S. ships bound for Britain. Although hundreds more were seized, thousands of U.S. ships got safely through to their destinations.
The rise of U.S. trade increased the growth of American industries. Shipbuilding companies, warehouses, ports, banks, insurance companies covering cargo losses, manufacturers, and farmers whose products were exported all had rising profits. Profits were often invested in new U.S. manufacturing ventures and land expansion.
Within a few years, Americans' dismay over the Jay Treaty had somewhat subsided. The Federalist candidate, Vice President John Adams (1735–1826), won the presidential election of 1796, but the party's popularity was declining. The number of Democratic-Republicans continued to grow, and in 1800 Thomas Jefferson (1743–1826), the leader of the Democratic-Republicans, won the presidency by defeating the incumbent Federalist president, Adams.
Did you know...
- Although President Washington had hoped to keep the controversial terms of the Jay Treaty secret and avoid intense public debate, he was unsuccessful. Benjamin Franklin Bache (1769–1798), grandson of Benjamin Franklin (1706–1790) and editor of the Democratic-Republican newspaper Aurora, obtained a copy of the treaty and published a summary on June 29, 1795, for all to read.
- Federalist Alexander Hamilton, who served as the U.S. secretary of the treasury in the Washington administration, was a strong supporter of the Jay Treaty. When news came that the treaty had been signed, Hamilton believed he had achieved his goals of strengthening relations with Britain and setting the United States on a course of economic and financial prosperity. Hamilton retired from public service in late January 1795, before the treaty was ratified. He remained the leader of the Federalist Party.
- Most likely the reason Britain willingly gave up its forts in the Old Northwest in the mid-1790s, after refusing to do so in the mid-1780s, was a decline in the fur trade. By the 1790s, the area south of the Great Lakes was rapidly becoming "trapped out." So many animals had been trapped and skinned that the animal population and hence the number of furs available were rapidly decreasing.
Consider the following...
- Conduct a debate over the Jay Treaty, with members of the class taking sides with either the Federalists or the Democratic-Republicans.
- What policy did the British carry out against Americans on the high seas, and how did U.S. citizens feel about it?
- In 1793, President George Washington declared a policy of neutrality, saying that the United States would not take sides with the British or the French in their European war. Consider why Washington later wanted the Jay Treaty approved. List the possible reasons. Do you think Washington's behavior indicated he was flexible and trying to promote the common good or simply weak and giving in to heavy Federalist pressure?
His Britannic Majesty: The king of England.
Garrisons: Troops stationed at forts.
Treaty of peace: The 1783 Treaty of Paris, which ended the American Revolution and granted the United States independence from Britain.
At their discretion: Whenever they desire.
Precincts or jurisdiction: Areas of legal authority.
Pass and repass: Travel back and forth.
Inland navigation: On lakes and rivers.
The two parties: Britain and the United States.
Lake of the Woods: A lake located in southeastern Manitoba, southwestern Ontario, and northern Minnesota.
Regulate the boundary line: Decide on a boundary line between Canada and the United States.
Bona fide contracted: Agreed to in good faith without deception.
The peace: The 1783 Treaty of Paris.
Lawful impediments: Legal obstacles created by laws passed.
Creditors: People to whom money is owed.
Under colour of authority or commissions from his Majesty: By the British navy or ships authorized by Britain.
East Indies: Malay islands and Southeast Asian countries.
Tonnage duty: Fee per each ton of cargo.
Resort: Frequently travel.
Contraband of war: Prohibited war supplies.
Made prize: Seized.
Men of war: British navy.
Privateers: Privately owned ships given authority by the military to fight or harass the enemy.
Forbear: Refrain from.
Satisfaction and reparation: Compensation and payment.
For More Information
Bemis, Samuel F. Jay's Treaty: A Study in Commerce and Diplomacy. Westport, CT: Greenwood Press, 1975.
Commager, Henry S., ed. Documents of American History. New York: F. S. Crofts and Company, 1943.
Morris, Richard B., ed. John Jay. New York: Harper and Row, 1975.
Stahr, Walter. John Jay: Founding Father. London: Hambledon and London, 2005.
Statesman, diplomat, Supreme Court chief justice
Although he originally opposed the idea of American independence from Britain, John Jay became one of the most important figures in the fight for independence and the shaping of the new nation. Showing exceptional intelligence, great dignity, boundless ability, and high moral integrity, Jay made invaluable contributions to the fledgling government of the United States of America. He held more prestigious public offices than any other person in the late eighteenth and early nineteenth centuries. His many roles included serving as a foreign diplomat, a Supreme Court chief justice, and governor of New York. His self-confidence and uncompromising adherence to his beliefs while in office contributed to the strong character of the nation.
"This country and this people seem to have been made for each other."
Born into a privileged life
John Jay was born to Peter Jay and Mary Van Cortlandt Jay in New York City in December 1745. He was the sixth son among eight children in the family. Both of his parents came from very influential families in the colony of New York. His mother came from one of the large landowning Dutch families that settled the Hudson River valley. His grandfather, Augustus Jay, came to New York in 1686 from France. He was part of a group of French Protestants who faced religious persecution by the French Catholic government at the time; along with many others, he escaped imprisonment in France by boarding a ship to America. John Jay grew up hearing stories about the persecution of his French ancestors, and as a result, he would never trust the French government.
John Jay's father, Peter Jay, was a successful merchant, so young John grew up in a privileged setting of wealth and power. Through private tutors, he received a well-rounded education. John was bookish and very serious in demeanor, with a keen mind and an ability to speak in grand style. Some also saw him as cold, formal, quiet, and frequently arrogant. He could also be very stubborn. He entered King's College (later Columbia University) in 1760, when he was only fourteen years old, and graduated with high honors in 1764. He spoke French, Greek, and Latin fluently. Jay was tall and slender and was frequently ill. His face was distinctive, with a prominent nose, high arched eyebrows, and a long chin.
A career of law and politics
In 1764, Jay began the study of law at a law office in New York City. He was admitted to the bar (legal profession) in 1768 and began his private law practice. For a while, he shared a practice with a former college classmate, Robert R. Livingston (1746–1813), who, like Jay, would become a U.S. diplomat. Jay took all kinds of legal cases and was financially successful in his work. His first public role came in 1773 when he served as secretary of a royal commission to settle a boundary dispute between the colonies of New York and New Jersey.
In April 1774, Jay married Sarah Van Brugh Livingston. Her father, William Livingston (1723–1790), was a large landowner and would serve as the governor of New Jersey during the American Revolution (1775–83). The Jays had seven children.
In 1774, Jay became a conservative member of the New York Committee of Correspondence, a committee responsible for sending out written information describing political viewpoints to similar committees in other colonies. It was a key means of sharing important information. He was also a delegate to the First Continental Congress in Philadelphia in 1774. At twenty-eight years of age, Jay was the second youngest member of Congress. He represented the wealthy colonial merchants who opposed independence from Britain. The merchants feared that a new, independent, democratic government (whose laws and functions are determined by the will of the majority) would mean mob rule and the loss of their property. However, Jay had one complaint about British rule: He did not like the new taxes Britain had imposed on the colonies. On this issue, he voted in favor of the decision Congress made to communicate their grievances to Britain. He personally drafted "The Address to the People of Great Britain," stating the colonists' claims. Jay also served as a delegate to the Second Continental Congress meeting in 1775.
Jay becomes a Patriot
In 1776, Jay did not support the writing of the Declaration of Independence. However, once the Continental Congress adopted the Declaration, Jay joined fully with the Patriots (colonists who favored independence from Britain) to fight against British rule. Jay became a strong supporter of the revolutionary cause. The Jays, Livingstons, and Van Cortlandts were among the few wealthy families supporting the Patriot cause. After British forces invaded New York in 1776, Jay organized spy rings for the Continental Army and helped deliver cannons to troops led by General George Washington (1732–1799; see entry in volume 2), who were defending New York.
After serving in the Continental Congress, Jay played a key role in the New York Provisional Congress. He drafted a constitution for the newly formed state of New York. In the new state government, Jay was appointed chief justice of the New York Supreme Court in 1777. In this role, he would be interpreting the state constitution—the constitution that he himself had drafted. Jay served on the court until 1779. During this period, he also served in the New York militia but never saw active service. A militia is an organized military force, made up of citizens, that serves in times of emergency.
While serving on the New York court, Jay returned to Philadelphia as a New York delegate to the Continental Congress in December 1778. The delegates elected Jay president of the Congress, and he held this position until September 1779, when he received an appointment to become the U.S. minister to Spain. This appointment by Congress began his diplomatic career.
Jay's mission as U.S. minister to Spain was to seek Spain's support for America's fight against the British monarchy. Spain refused to provide open support; however, Spanish officials were interested in keeping their foe Britain occupied in the turmoil, so they secretly supplied funding and arms.
While in Europe in May 1782, Jay received a message from Benjamin Franklin (1706–1790; see entry in volume 1). Franklin was in Paris, France, trying to negotiate an end to the war with Britain, and he wanted Jay to come and assist him. They were joined by Massachusetts politician John Adams (1735–1826; see entry in volume 1). Jay did not waste time getting involved in the discussions: He immediately demanded British recognition of America's independence, even before a treaty was drafted, and it appeared that he had set back the sensitive negotiations that Franklin had been informally nurturing with the British. Along with Adams, Jay also convinced Franklin that they should conduct negotiations without coordinating with the French, contrary to directions from the Continental Congress.
After a difficult start, Jay's bold approach to negotiations unexpectedly proved successful. Britain had grown weary of the war, so the British negotiators agreed to the terms laid out by the Americans. The Treaty of Paris was signed in September 1783, putting an official end to the American Revolution. Britain recognized America's independence and gave to the United States all rights to lands east of the Mississippi River, except for lands held by Spain in Florida and along the Gulf Coast. Jay returned home in July 1784 a national hero.
Upon the successful completion of the peace negotiations, Jay was offered the position of minister to Britain. However, he had been away from the United States for several years and was eager to return home, so he declined the appointment. Jay was eager to resume his private law practice and renew his involvement in New York social life. However, public service beckoned again. The Continental Congress appointed Jay secretary of foreign affairs under the new Articles of Confederation, the first U.S. constitution. Jay was clearly the most qualified U.S. citizen in foreign affairs and was a natural choice for the job. He accepted the appointment in the Confederation government and served throughout the remainder of the 1780s.
As secretary of foreign affairs, Jay regularly received reports from John Adams, the U.S. representative in Britain, and Thomas Jefferson (1743–1826; see entry in volume 1), the U.S. representative in France. Adams and Jefferson reported that Britain and France were treating the young United States with disrespect. Jay negotiated several trade treaties with various other countries during this time but found particular frustration in dealing with Britain as well as its longtime ally Spain, both of whom refused to establish trade relations. In addition, British troops remained at trading posts in the new Northwest Territory of the United States, despite Britain's peace treaty promise to vacate that area.
Spain was also causing problems for Jay. Spain continued occupying posts in the South on U.S. soil. In addition, Spain closed navigation of the Mississippi River to Americans, cutting off the principal route for transporting produce from frontier farms west of the Appalachians to the East Coast. Jay negotiated with Spanish diplomats from 1784 to 1789, but they could not come to an agreement. In 1786, Jay thought he was close to concluding a treaty with Spain; the treaty would have given Spain exclusive use of the Mississippi River for thirty years in exchange for expanded trade between the two nations and mutual recognition of U.S. and Spanish territories in North America. However, Americans in the South and West were enraged by the prospect of giving up navigation rights to the Mississippi, and negotiations ended. Those in the West and South never trusted Jay again.
Jay grew frustrated with the Confederation government, which had no power to enforce treaty measures and was too weak to protect the new U.S. boundaries. He and other government leaders began to lobby for a stronger central government that could deal with foreign issues.
Chief justice in a new government
Busy serving as foreign secretary, Jay was not a delegate to the Constitutional Convention in Philadelphia in 1787. However, once the new U.S. Constitution was adopted, Jay joined with fellow delegates James Madison (1751–1836; see entry in volume 2) and Alexander Hamilton (1755–1804; see entry in volume 1) to write a series of essays promoting New York's ratification (approval) of the Constitution. The essays were published in New York newspapers between October 1787 and May 1788. The articles later appeared together in a book titled The Federalist. Because he was ill at the time, Jay wrote only five of the eighty-five papers. His essays focused on foreign affairs and the Constitution.
Following ratification of the new constitution and the election of George Washington as the first president, a new national government was formed. In late 1789, Washington appointed Jay as the first chief justice of the newly formed U.S. Supreme Court, selecting him over Jay's longtime friend Robert Livingston. The Senate quickly confirmed Jay's appointment. Jay and two associate justices held the first meeting of the U.S. Supreme Court on February 1, 1790, in New York City. Jay continued serving in foreign affairs until March 1790, when Thomas Jefferson took over as secretary of state for the new government. During the Supreme Court's early years, the justices not only heard cases before that court, but were also expected to preside over cases in the federal appellate courts (courts that heard appeals) located around the nation. Therefore, one of the most demanding aspects of Jay's Supreme Court position was riding the circuit, holding court in remote locations in New York and New England. Roads were poor if they existed at all.
As the nation's first chief justice, Jay established many rules and procedures for the Court. The most important case he heard was Chisholm v. Georgia in 1793. This case raised a key question of state sovereignty (freedom from external control). In his decision, Jay established the right of the federal courts to hear cases in which a citizen of one state sues another state government. This was the first Court opinion establishing the power of the federal government over state governments. However, many Americans were still apprehensive of a strong central government. This led Congress and the states to adopt the Eleventh Amendment to the U.S. Constitution, which overturned the Court's decision in Chisholm. The new amendment denied the authority of federal courts in cases that involve an individual suing a state.
The Jay Treaty
As conflicts with Britain heated up again through the 1790s, President George Washington selected U.S. Supreme Court chief justice John Jay to journey to Britain. Jay was to reach a peaceful settlement on several nagging issues: Britain's refusal to recognize U.S. neutrality rights to international trade; the continued presence of British troops at fur-trading posts on American soil in the Northwest Territory; British encouragement of Native American resistance to expansion of American settlement in the West; and Britain's continuing practice of capturing American sailors to serve on British warships, a practice called impressment. For their part, the British wanted to resolve the issue of unpaid debts, money that Americans had owed British creditors since before the American Revolution. In the peace treaty that ended that war, the United States had promised that these debts could be collected; however, many remained unpaid.
By the 1790s, Jay had established himself as the leading foreign diplomat of the young nation. Relations with Britain were a hot political issue in the United States. Supporters of Thomas Jefferson and James Madison wanted retaliation against Britain for its actions as well as closer relations with France. Supporters of Alexander Hamilton wanted to expand trade relations with Britain and settle the differences. Hamilton's economic policies relied heavily on tariff revenue from goods imported from Britain.
Jay sided with Hamilton and sought to reach a compromise with Britain. He spent the summer of 1794 negotiating with British foreign secretary Lord Grenville (1759–1834). Jay's goals were to gain British recognition of U.S. neutrality trading rights, convince the British to abandon the Northwest Territory forts, and open the British-held West Indies once again to American trade ships. Because the United States had almost no military to protect American interests and enforce treaties, Jay was negotiating from a weak position. Britain did agree to evacuate the forts, as it had already promised to do in 1783, and did open the West Indies to limited trade by small American commercial ships. The two nations also established claims commissions to determine payments of damages and debts on both sides. However, Britain steadfastly refused to recognize the neutral trading rights the United States had proclaimed, and Jay promised not to interfere with British trade for at least ten years.
When the terms of the Jay Treaty reached the United States, Jefferson and his supporters were enraged. They accused Jay of selling out to Britain in order to protect Hamilton's economic programs. France felt betrayed that the United States had negotiated with Britain at all, because this violated an existing treaty between America and France that dated back to the American Revolution. Despite the uproar, President Washington signed the treaty, and the U.S. Senate ratified it. Though highly controversial, the treaty proved very beneficial to the young nation in maintaining financial stability. However, it also transformed the two political factions behind Jefferson and Hamilton into more-organized political parties, the Federalists and the Democratic-Republicans.
While serving as chief justice, Jay was the Federalist candidate for governor of New York in 1792; his opponent was incumbent governor George Clinton (1739–1812; see entry in volume 1). Though Jay appeared to win, an election board threw out a number of ballots, claiming they were invalid. As a result, the election swung to Clinton. Jay continued to serve on the Supreme Court.
Back to Europe
Chief Justice Jay was still regarded as the leading expert in foreign affairs. However, Jay was for the most part reluctant to advise President Washington and Treasury Secretary Hamilton on current issues or to participate in congressional debates over proposed bills. Jay believed in a strict separation of powers between the branches of government. He believed his main role on the Court was to rule on the constitutionality of cases.
Nonetheless, when France and Britain began warring against each other in 1793 and the United States needed to take an official position on the war, Jay drafted an important neutrality proclamation for President Washington. The proclamation asserted that the United States favored neither Britain nor France, and that the United States had a right to continue its profitable international trade as before. However, despite the proclamation, tensions remained with Britain. President Washington sent Jay to Britain to negotiate a settlement of several issues during the summer of 1794 (see box). The resulting agreement, known as the Jay Treaty, attracted severe criticism in the United States and greatly angered France. However, President Washington signed it, and the U.S. Senate ratified it. The treaty maintained peace and economic stability at a critical time in the nation's development.
New York governor
Upon his return from Britain in 1795, Jay discovered he had been nominated and elected governor of New York. He resigned his position on the Supreme Court and assumed his new office. Jay served as governor for six years. During that time, he pushed through key prison reforms and much-needed canal construction projects. He also signed the state law providing for the gradual abolition of slavery. By the 1800 elections, the rising tide of public support for presidential candidate Thomas Jefferson and other candidates of the Democratic-Republican Party strongly suggested that Jay would likely lose if he ran for reelection. Therefore, Jay decided it was time to retire from public service.
In the November 1800 presidential election, President John Adams, the Federalist candidate, lost his bid for reelection to Thomas Jefferson, leader of the Democratic-Republicans. The Democratic-Republicans had also won a majority of seats in Congress. While he was finishing his term in office, President Adams did what he could to keep some Federalist influence in the government, specifically in the federal courts. Adams asked Jay if he would consider returning to the chief justice position on the Supreme Court to replace the retiring Oliver Ellsworth (1745–1807). However, Jay was determined to retire because of health problems, both his own and those of his wife. In addition, he thought the chief justice position held too little power and prestige. Adams next turned to Secretary of State John Marshall (1755–1835; see entry in volume 2), who accepted the job and later built it into a very powerful position.
The Jays settled onto an 800-acre farm in Bedford, New York, a two-day ride from New York City. Jay was greatly saddened by the death of his wife in 1802, so soon after he had left public service. Jay spent twenty-eight years in retirement, staying away from politics. He took an active role in church affairs and was elected president of the Westchester Bible Society in 1818 and the American Bible Society in 1821. Jay died in May 1829.
For More Information
Combs, Jerald A. The Jay Treaty: Political Battleground of the Founding Fathers. Berkeley: University of California Press, 1970.
Morris, Richard B. John Jay, the Nation, and the Court. Boston: Boston University Press, 1967.
Morris, Richard B. Witnesses at the Creation: Hamilton, Madison, Jay, and the Constitution. New York: Holt, Rinehart, and Winston, 1985.
"Jay's Treaty." Archiving Early America.http://earlyamerica.com/earlyamerica/milestones/jaytreaty/ (accessed on August 14, 2005).
First chief justice of the U.S. Supreme Court, lawyer, diplomat
John Jay was a highly respected lawyer who distinguished himself in several different high state and federal offices, before, during, and after the Revolutionary War (1775–83). He helped negotiate two major treaties with foreign nations that were of tremendous benefit to the newly formed United States. As chief justice, his fairness and courage in making unpopular decisions secured the public's respect for the U.S. Supreme Court.
John Jay was born in 1745 in New York City. He was the eighth child of Peter Jay, a merchant, and Mary Van Cortlandt, whose ancestors were some of the original Dutch settlers of New York. Peter Jay was widely known and respected as a man of wealth and good character.
Little has been written about John Jay's early years. He was raised in the Protestant religion on his father's comfortable farm in Rye, New York. He was educated at home before leaving at age fourteen to attend King's College (now Columbia University) in New York City.
After graduating from college in 1764, Jay prepared for a career in the law. He did it in the usual colonial way, by serving as a clerk for an already established lawyer and studying in his free time. In 1768 he opened his own law office in New York City.
In 1769 Jay did legal work for the New York–New Jersey Boundary Commission. In those days, the legal boundaries of the colonies were not clearly defined. Each colony had its own laws, and disputes often arose over which laws had to be obeyed and where. Jay found these issues fascinating, and he learned a lot about how to settle legal squabbles. This early training proved invaluable to him when he later had to settle disputes between the United States and foreign nations.
Jay became well known among his peers for his fine legal mind and his hardworking ways. He was a highly moral young man with a strong religious faith. Those close to him knew Jay as cheerful and possessing a good sense of humor.
Marries into wealth; opposes Revolutionary movement
In John Jay's time, much of the land in the colony of New York was owned by about twenty wealthy and powerful families, whose members were descended from the early Dutch or English settlers. The families were connected by marriage. Jay's mother belonged to one such family. With his marriage to Sarah Livingston in 1774, Jay joined another powerful family: Sarah was the daughter of the governor of New Jersey and had other wealthy connections. Jay and Sarah were devoted to one another. Jay's future seemed bright.
Back in 1686, John Jay's Protestant grandfather, Auguste Jay, had been forced to flee his French homeland because of religious persecution and fear of imprisonment by French Catholics. As a result, the Jay family had no love for either the French or Catholics. Unlike the majority of colonists (who had English backgrounds), Jay's family had no particular affection for England either. Still, when talk of American independence from England began in the 1760s, Jay did not at first support the movement.
As a lawyer, Jay was intimately familiar with the legal issues involved in the dispute between England and the colonies. He believed that many policies adopted by the British between 1765 and 1774 were violations of colonists' rights. Among those policies were British efforts to restrict the power of colonial lawmaking bodies and courts.
Jay saw the struggle between England and America as a fight by the colonists for their rights as Englishmen. He was appalled by colonists who expressed their anger through violence and by persecuting Loyalists (people who remained loyal to England). New York had a great many Loyalists.
Attends First and Second Continental Congresses
In 1774 Great Britain adopted a series of measures called the Intolerable Acts. They were intended to punish the citizens of Boston, Massachusetts, for their violent protests against British taxation. Spurred by sympathy for Boston's sufferings, delegates from twelve of the thirteen colonies met on September 5, 1774, for the First Continental Congress. Jay gave up his law practice and went to Philadelphia, Pennsylvania, as New York's representative in Congress.
At first Jay, like the majority of delegates to the Congress, was not in favor of declaring independence. He feared it would lead to mob rule and chaos. Jay helped prepare several documents sent by Congress to England's King George III (see entry). The documents outlined colonial grievances, urged peace, and threatened to end trade with England. Jay's major contribution was an address "to the oppressed inhabitants of Canada" asking Canadians to join the colonists in opposing British policies (they did not). He became known as a skillful writer and a man who favored reasoning and compromise over hasty action.
When King George did not respond to Congress's peace overtures, Jay decided to support the patriots. He worked hard for the adoption of the Declaration of Independence, which was approved by the Second Continental Congress on July 4, 1776.
With the signing of the Declaration, the former colonies were now states. In 1777 Jay was back in New York to help his state write a new constitution. New York's Loyalists were shocked by Jay's support of the Revolution. But this did not stand in the way of his being named chief justice of the New York Supreme Court. In his short time there, he unhappily presided over many wartime criminal trials, including cases of murder, assault, and theft. Jay was glad to return to Philadelphia in 1778 to become president of the Continental Congress. He assisted in many of the complicated tasks of running the Revolutionary War.
Sent on mission to Spain
By 1779 it was clear that America needed foreign aid in its struggle with Great Britain. America had very few men who had experience in negotiating with foreign governments. Despite his youth and inexperience, thirty-four-year-old John Jay was appointed America's minister to Spain. He resigned his positions as New York's chief justice and president of the Continental Congress, and in late October 1779, he set sail for Spain to begin his new career in diplomacy.
The voyage to Spain was a rough one, and Jay was terribly seasick. His ship stopped at an island in the Caribbean, where Jay saw slaves being cruelly treated. He decided that if he ever had the chance, he would do what he could to end slavery in America.
Jay had little luck with the Spanish. Spain did enter the war against Great Britain as an ally of France, but Spain refused to recognize American independence and would not ally itself with the United States. Nor would Spain accept Jay as a representative of an independent nation. Jay was insulted both for himself and his country. He was upset with France, too, for not helping him out in his dealings with the Spanish.
Negotiates Treaty of Paris ending American Revolution
When the war ended in 1781, Jay was still in Spain. Benjamin Franklin (see entry) was in France, and he asked Jay to join him in negotiating the Treaty of Paris that would officially end the Revolutionary War. Jay went, but he was still angry with both Spain and France. He was further upset when France tried to push its own agenda in the peace negotiations. He believed that France was trying to win favors for itself from Great Britain, favors that would hurt America. When the complicated negotiating was all over, Jay, Franklin, and John Adams (see entry) had gotten very favorable terms for America. American independence was recognized, and America's western borders were extended to the Mississippi River. According to historian Richard B. Morris, "Jay's diplomatic achievements at Paris in 1782 still stand unrivalled in the annals of American diplomacy." Morris called the Treaty of Paris one of "the two most advantageous treaties ever negotiated for the United States."
John Jay returned home to a hero's welcome. While abroad, he had learned enough about the Old World that he wanted to keep America out of its clutches. Upon his arrival, though, he was informed that the Articles of Confederation (the forerunner of the Constitution) had just been adopted, and he was the new U.S. secretary for foreign affairs.
Almost at once Jay realized that he could not carry out his duties under the Articles of Confederation. The articles called for a loose union of all the states. There was no central government with powers to make and enforce treaties with foreign governments. Before Jay could do anything to settle disputes with Great Britain and Spain, he had to get directions from Congress, and the way the government was organized, congressmen could never agree on any directions.
Works to get Constitution ratified; suffers illness, injury
The Constitutional Convention of 1787 drafted a Constitution that proposed the strong central government that Jay favored. He could not attend the convention, but when the document was sent around to the states for ratification (approval), Jay worked hard for its passage in New York. To convince New Yorkers to ratify the Constitution, Jay worked with James Madison and Alexander Hamilton (see entries) to produce eighty-five newspaper articles that described why the Articles of Confederation were inadequate and explained the proposed Constitution in depth. The papers were later published in book form as The Federalist. The Federalist is still the best explanation of the U.S. Constitution.
The word "federalist" referred to the belief in a strong central government. The Constitution was opposed by "anti-federalists," men such as Patrick Henry, George Mason, and Samuel Adams (see entries). They had many objections to a strong central government. According to Richard Morris, Jay was "committed to the ideals of a republic in which the people, directed by a virtuous [moral] and educated elite, would govern, and to a national government with power to act."
While the debate over the Constitution was going on, Jay suffered a crippling bout of arthritis (painful joints). Before he had completely recovered, he was struck on the head when members of a mob that had gathered outside the New York City jail began throwing stones. The mob was protesting the then-new concept of doctors performing autopsies (examinations of dead bodies to determine the cause of death). The doctors had fled into the jailhouse for safety; Jay was struck while on his way to help rescue them. For a time it was feared that he had suffered permanent brain damage, but he finally recovered.
The Constitution went into effect in 1789. George Washington (see entry) was elected the nation's first president and assumed office in New York, then the nation's capital. Washington appointed Jay the first chief justice of the U.S. Supreme Court. When arguments arise, the Supreme Court has the last word on the meaning of all U.S. laws and the Constitution.
Jay used his time on the Supreme Court to stress the importance of the states giving way to the authority of the federal government. He also emphasized the importance of treaties of war, peace, and trade. Jay was still serving as chief justice in 1794 when President Washington asked him to go to England and negotiate a treaty.
For more than ten years, tensions had been building between the British and America over the terms of the 1783 Treaty of Paris (ending the Revolutionary War). The British complained that America had not honored its end of the bargain—it had not paid pre-war debts to British merchants and had not paid Loyalists for property taken from them during the war. Therefore, the British were refusing to withdraw their soldiers from forts on the American frontier. Then the British Navy seized American ships at sea, and the two countries nearly went to war. Jay was sent to try and restore good relations between the two countries.
In the treaty that bears his name, Jay got the British to agree to withdraw their troops, secured some trade agreements, and restored more or less friendly relations. But Jay's Treaty was seen at home as favoring the British, and it was not popular. Effigies of Jay were hanged and burned in his hometown, and he was widely criticized. Still, he managed to keep the young country out of war until the time came when America was better able to defend itself.
On his return from England in 1795, Jay resigned from his position as chief justice of the Supreme Court. He then assumed the governorship of New York; in spite of Jay's Treaty, Jay was so popular in New York that he was elected without having to run for office. He served two three-year terms. To his great satisfaction, he helped pass a law that would gradually do away with slavery in New York.
After Jay left the governorship, then-President John Adams asked him to return to the Supreme Court, but Jay refused, indicating that he was too tired and in poor health. Jay retired to his estate in Bedford, New York, where he lived quietly until his death in 1829.
In an encyclopedia article about Jay, Mark Boatner quoted historian Samuel Flagg Bemis: "Jay was a very able man but not a genius." In personal character "he was second to none of the [founding] Fathers."
For More Information
Boatner, Mark M. "Jay, John" and "Jay's Treaty" in Encyclopedia of the American Revolution. Mechanicsburg, PA: Stackpole Books, 1994, pp. 551–53.
Combs, Jerald A. "Jay, John" in American National Biography. John A. Garraty and Mark C. Carnes, eds. New York: Oxford University Press, 1999, vol. 11, pp. 891–94.
Cooper, James Fenimore. The Spy. London: J. T. Devison, 1821.
Morris, Richard B. John Jay, the Nation and the Court. Boston: Boston University Press, 1967, pp. x, 28–9, 37.
Morris, Richard B. The Peacemakers. New York: Harper & Row, 1965, pp. 2–4, 206, 282.
Morris, Richard B. Witnesses at the Creation: Hamilton, Madison, Jay, and the Constitution. New York: Holt, Rinehart, and Winston, 1985.
"The Indispensable Mr. Jay." [Online] Available http://www.thehistorynet.com/ (accessed on September 29, 1999).
Sally Jay, Toast of Two Continents
Sarah "Sally" Jay, beloved wife of John Jay and mother of his seven children, was beautiful, charming, and lively. She brought to their marriage not only money and family connections but also a pleasing personality. Her husband, whose own personality in public was one of cold formality, was open and loving with his wife; still, she always called him Mr. Jay. In the early days of their marriage, the Jays were often separated as John Jay went about the business of making America an independent nation. While apart, they wrote to each other three times a week. One of John Jay's letters to his wife concluded with these words: "depend upon it, nothing but actual imprisonment will be able to keep me from you."
When John Jay began the diplomatic phase of his career, Sally Jay often traveled with him. By then she was known and liked by many prominent Revolutionary figures. When she accompanied Jay on his failed trip to Spain (1779–82), General George Washington sent her a lock of his hair as a going-away gift. In Madrid, Spain, the Jays suffered the tragedy of the death of their first daughter at the age of four weeks. Their young son had remained at home.
When Jay went to France in 1782 to help negotiate a peace treaty, Sally followed soon after, bringing their newborn daughter, Maria. In 1783, she gave birth to Ann in Paris. Sally was the only wife of an American peace commissioner to be present at the peace talks in Paris. She loved Paris, and Parisians loved her. She was soon involved in the active social life at the court of King Louis XVI (see entry). Sally is said to have strongly resembled the king's wife, Marie-Antoinette, and her public appearances caused quite a commotion.
When the Jays returned to the United States in 1783, they built a three-story stone house on Broadway in New York City. Sally was a popular hostess, and invitations to her dinner parties were sought after. The Jays had a second home on over nine hundred acres of farming land in Bedford, New York, then a two-day ride from New York City. Jay retired there in 1801; the next year Sally died. Jay then occupied himself with "conversation, books, and recollections" until his death. Jay's two sons, Peter Augustus and William, had distinguished careers and carried on the family name. Five generations of the Jay family lived in the home, which is preserved today as the John Jay Homestead.
Historian Richard Morris wrote that Sally Jay brought a "light touch" to her marriage that balanced "Jay's deadly earnestness and strong sense of responsibility." Morris noted that John Jay was a loving parent to his own children and also took care of four siblings who were either mentally or physically handicapped as well as many other members of his large family.
Born on 12 December 1745, John Jay was an active leader of the Revolution and a key figure in the founding of the nation. During the period of the early Republic he served in Congress, as a diplomat, as chief justice of the United States, and as governor of New York. He was also a co-author of the Federalist Papers and president of the New York Society for the Manumission of Slaves.
Jay's grandfather was a Huguenot who had been imprisoned in France before escaping to America. His father, Peter Jay, was a successful merchant; his mother, Mary Van Cortlandt, came from a Dutch patroon family in the Hudson Valley, one of the most aristocratic families in the American colonies. Jay graduated from King's College (now Columbia University) in 1764 and was admitted to the bar four years later. By the eve of the Revolution, he was a prosperous and effective lawyer, who, unlike most New York attorneys, and most members of the wealthy landed gentry, was a committed Whig. In 1774 he increased his status and access to power by marrying Sarah Livingston, daughter of one of the leading families in New Jersey, whose father, William, would be a signer of the Constitution and a governor of his state. The couple would have seven children, including William Jay, a future judge and abolitionist.
In 1774 Jay was elected to New York City's Committee of Correspondence and later as one of the colony's five delegates to the First Continental Congress. Jay was relatively conservative within the Congress, but went along with, and supported, the more radical members who denounced acts of Parliament as "unconstitutional" and urged local militias to arm themselves. Jay drafted the Address to the People of Great Britain, which Congress used to justify its radical moves. Here he rejected the idea that Parliament could tax the colonists or subordinate them within the imperial economy. Americans, he asserted, would never become the "hewers of wood or drawers of water" for their English cousins.
Jay had returned to New York by early 1776 and was a member of the colonial legislature. In that position he opposed declaring independence but after July 1776 was fully committed to the Revolution and independence. He helped obtain munitions for the troops, investigate traitors, and organize spies. More important, in 1777 he helped write New York's first constitution. Like many others in the founding generation, Jay had experience with constitution-making well before the United States wrote its constitution in 1787. The New York document of 1777 was the only constitution of the period to have no religious tests for officeholding, reflecting his French Huguenot background and his respect for religious freedom. On the other hand, the constitution also required that foreigners seeking naturalization as citizens of New York renounce allegiance to any foreign "prince or potentate," an anti-Catholic measure that reflected his Huguenot ancestry and his family's memory of Catholic persecution.
With the adoption of the New York Constitution, Jay became chief justice of the state's Supreme Court while at the same time serving as a delegate to the Continental Congress. He was elected president of the Congress in 1778 and helped negotiate the treaty that led to the French alliance. In 1779 Congress made him minister plenipotentiary to Spain, where he arrived with his wife in 1780. This first diplomatic mission for Jay was mostly a failure. Spain refused to give him diplomatic status, recognize the new American nation, or acknowledge Americans' navigation rights on the Mississippi. The government in Madrid feared—correctly, as it would turn out—that American independence would be the first step leading to the destruction of Spain's New World empire.
In the spring of 1782 Benjamin Franklin asked Jay to come to Paris to help negotiate the treaty of peace with England. Jay declined to formally meet with the English envoys, however, because their credentials directed them to meet with representatives of the American "colonies" and not with the United States. Franklin ultimately joined Jay in taking this position, and the British acquiesced, getting new instructions from London. This position put him at odds with America's French allies, who urged a more speedy negotiation. Jay soon came to suspect that France was attempting to negotiate a separate peace with England, and on his own, without consulting Franklin, contacted an official in Britain to derail this possibility. Ultimately, Jay, Franklin, and John Adams, who had just arrived from the United States, negotiated a separate peace with England that recognized American nationhood and secured rights to all British possessions on the continent south of Canada, including all territory bordering the Mississippi River. The skillful negotiations of Jay, Adams, and Franklin led in 1783 to the comprehensive Treaty of Paris signed by Britain, France, Spain, and the world's newest nation, the United States of America.
Jay triumphantly returned to his homeland and was immediately appointed secretary for foreign affairs in the government under the Articles of Confederation, which had been ratified in his absence. This made the American ministers to France (Thomas Jefferson) and England (John Adams) his subordinates. Despite the weakness of the Confederation government, in 1786 Jay negotiated a trade agreement with Spain, known as the Jay-Gardoqui Treaty, in which the United States agreed to give up any navigational rights on the Mississippi for thirty years. This was perhaps Jay's greatest mistake in this period, because it infuriated Southerners, who believed the New Yorker had sacrificed their vital interest in access to the Mississippi in return for trading rights that helped only the Northeast. Congress did not ratify the treaty, but Southerners continued to mistrust Jay for the rest of his career.
Throughout the convention period Jay remained frustrated by the weakness of the national government. Thus he enthusiastically supported the Constitutional Convention of 1787, although he was not a delegate. After the convention he joined James Madison and Alexander Hamilton in writing essays to gain support for the new Constitution in New York. These became The Federalist Papers. Jay became ill shortly after the project began and wrote only five of the essays. When the Constitution was ratified, the new president, George Washington, nominated Jay to be the first chief justice of the United States. He held that post until 1795, but his legacy was minimal. His most important decision came in Chisholm v. Georgia (1793), in which he interpreted the Constitution to allow a citizen of one state to sue another state; the ruling outraged almost all the states and led to the Eleventh Amendment (1798), which reversed it.
More significant than his jurisprudence was Jay's diplomacy. In 1793 he drafted Washington's Proclamation of Neutrality as war broke out in Europe. In 1794 he went to England at Washington's request and successfully negotiated what became known as Jay's Treaty. Under this treaty England finally vacated forts on the American side of the Great Lakes; the treaty also helped the United States obtain British support for access to the Mississippi. However, the settlement signaled a tilt toward Britain in its emerging conflict with France, and supporters of Jefferson attacked it as pro-British and pro-North. Ultimately, however, the Senate ratified most of the treaty.
While in England Jay had been elected governor of New York, and when he returned to the United States he resigned from the Supreme Court to become chief executive of his home state. He held this position for two terms, retiring in 1801. While governor he signed into law a gradual abolition act (1799) that led to the end of slavery in the state. In 1800 he refused to follow Hamilton's suggestion that he alter the way the state chose its presidential electors, in order to secure the electors for Adams. The end result was that New York, and the election, went to Jefferson. The lame duck Adams offered the chief justiceship to Jay, but he declined. Adams then gave the position to John Marshall. Jay then retired to his home in Westchester County, after more than twenty-five years of public service at home and abroad. He died 17 May 1829.
See also Abolition of Slavery in the North; Adams, John; Articles of Confederation; Chisholm v. Georgia; Constitution: Ratification of; Constitutional Convention; Constitutionalism: State Constitution Making; Emancipation and Manumission; Federalist Papers; Founding Fathers; French; Hamilton, Alexander; Jefferson, Thomas; Jay's Treaty; Madison, James; Supreme Court; Treaty of Paris.
Combs, Jerald A. The Jay Treaty: Political Battleground of the Founding Fathers. Berkeley: University of California Press, 1970.
Monaghan, Frank. John Jay: Defender of Liberty. New York: Bobbs-Merrill, 1935; New York: AMS Press, 1972.
Morris, Richard B., Floyd M. Shumway, Ene Sirvet, and Elaine G. Brown, eds. John Jay. Vol. 1: The Making of a Revolutionary: Unpublished Papers, 1745–1780. Vol. 2: The Winning of the Peace: Unpublished Papers, 1780–1784. New York: Harper and Row, 1975–1980.
John Jay (1745-1829), American diplomat and politician, guided American foreign policy from the end of the Revolution until George Washington's first administration was under way. Jay headed the U.S. Supreme Court during its formative years.
Long accustomed to a colonial status, Americans were ill-prepared to negotiate with foreign powers after the Revolution. The handful of men with diplomatic skill who emerged worked from a difficult position as the new nation experienced crises of credit and unity. John Jay's tenacity helped him survive the sectional battles and placed him in the inner councils of the Federalist party. Inclined to favor northern interests, he worked in a trying atmosphere until the Constitutional Convention of 1787 set a firmer tone for both domestic and diplomatic concerns. Jay's treaty with England, though highly controversial, probably avoided war. As chief justice, he gave the Supreme Court a national approach under the new Constitution.
John Jay was born on Dec. 12, 1745, in New York; he was the eighth child in a wealthy merchant family. Descended from French-Dutch stock and reared in the Huguenot tradition, Jay had few of the sentimental ties with England that made some Americans ambivalent in their allegiance after 1765. He graduated from King's College (later Columbia University) and trained in the law by a 5-year apprenticeship.
Admitted to the bar in 1768, Jay was briefly in partnership with Robert R. Livingston. Before 1774 Jay served on a royal commission formed to settle a boundary dispute between New York and the neighboring colony of New Jersey, thus gaining his first experience as a negotiator. As a member of the "Moot Club" in New York, he associated with the lawyers who led the resistance movement against England a few years later. He married the beautiful and ambitious Sarah Livingston, daughter of William Livingston, on April 28, 1774.
Coming of Revolution
Almost before his honeymoon was over, Jay was serving on the New York Committee of Fifty-one, organized to control local anti-British measures. The committee's manifesto, reportedly drafted by Jay, urged a convocation of deputies from all the Colonies to aid Boston and seek a "security of our common rights"; it helped lead to the First Continental Congress. The cautious tone of the manifesto, however, brought some criticism from more militant groups that favored an immediate boycott of British goods.
The Congress began Sept. 5, 1774; as Jay saw it, the Colonies were bound to try negotiations, to suspend commerce with Great Britain if these failed, and to go to war only when all other methods proved futile. Prudent to the point of timidity, Jay favored the narrowly defeated Galloway Plan of reconciliation. In Congress, Jay won a reputation as a skillful writer and moderate Whig, qualities that bore him into the New York Convention of 1775 and back to the Second Continental Congress. Meanwhile, the first battles of the Revolution at Lexington-Concord made discussion of a peaceful solution academic.
Jay's capacity for hard work brought him into the vortex of the congressional struggle. He served on the committee that drafted the July 6 declaration justifying armed resistance against England, but he also worked for one last attempt at reconciliation. By November 1775 he was on a secret congressional committee charged with engendering friendship abroad.
In May 1776, upon his return to New York, Jay cautiously supported a motion that disavowed any declaration favoring independence from Great Britain. However, the votes of his colleagues back in Philadelphia compelled Jay to submerge his views and work for independence.
President of the Continental Congress
In 1777 Jay took a leading part in drafting the New York constitution, an essentially conservative document peppered with Jay's concept of justice and blended with the mercantile spirit of the Dutch-Huguenot merchants. Jay himself became chief justice of New York in the transition government, but because of wartime circumstances the court functioned in desultory fashion. In 1778 he was chosen president of the Continental Congress. While Congress tottered on the verge of bankruptcy, many private citizens made paper fortunes in land dealings and mercantile speculations. Jay wrote Washington that there was "as much intrigue in this State House as in the Vatican, but as little secrecy as in a boarding-school." On Aug. 10, 1779, Jay resigned as chief justice of New York, and on Oct. 1 he left the Congress to resume his law practice.
Instead of returning to private life, however, Jay was appointed minister to Spain in October 1779. He was instructed to seek a commercial treaty with Charles III which would establish American rights to Mississippi navigation and to secure a sizable loan. The Spanish court withheld formal recognition (possibly because of its own colonial interests), and Jay ended his mission in May 1782 on a note of failure.
Sectional jealousy had made the negotiations with Spain difficult, for New England congressmen were eager to trade away navigation rights on the Mississippi provided their fisheries gained a Spanish market. Jay showed little sympathy for the Kentuckians, who insisted that they needed a waterway to market their products, and ultimately their anger brought into focus the conflict of interests between the North and South. Jay found the Spanish ministry too arrogant to negotiate anyway, and he journeyed to Paris in June 1782 for the preliminary peace negotiations then in motion. Suspicious of French motives, Jay led the American commissioners in Paris to sign a separate agreement with England, in violation of their instructions from Congress. The French were not pleased.
Secretary of Foreign Affairs
Jay declined posts as minister to both France and Great Britain, but Congress would not permit him to retire from public service. In July 1784 he was appointed secretary of foreign affairs, although New York had also elected him to serve in Congress. Jay resigned the congressional seat and took the foreign affairs assignment.
Jay's immediate concerns as foreign secretary were the British occupation of western posts (in defiance of a treaty) and the festering Mississippi problem. Jay made indiscreet remarks that lent weight to the British claim that they would hold the forts until prewar debts were paid, and the Spanish emissary, Diego de Gardoqui, reported that Jay was "a very self-centered man" with a vain and domineering wife. Gardoqui's instructions permitted negotiation of a treaty that would have pleased the North, because it promised hard cash for fish, but would have kept the gateway to the West closed. The gift of a prized stallion from Charles III to Jay may have been only incidental; at any rate, Jay decided to recommend concessions which the Spaniards believed would restrict America's western expansion.
Jay explained the commercial treaty to Congress in August but did not mention the military alliance Gardoqui also sought. Congress, voting along sectional lines, approved the pact, but by less than the required two-thirds majority. Tempers on both sides were heated, and the matter was unresolved when the Constitution was sent to the states for ratification.
Though not a delegate to the Constitutional Convention, Jay was to be an outspoken supporter of its handiwork. He joined Alexander Hamilton and James Madison in supplying articles for New York newspapers in support of the Constitution under the pen name "Publius." Of these Federalist papers, Jay wrote Publius 2, 3, 4, 5, and 64. He might have contributed more but for an injury received in the "Doctor's Riot" of April 1788.
Jay recovered in time to write An Address to the People of New York, which pointed out the unique dangers inherent in New York's failure to ratify the Constitution. Such a prospect was likely, as a 2-to-1 Antifederalist majority had been elected to go to the state ratifying convention scheduled for June. Jay himself was a delegate from New York and, with Hamilton, worked a political miracle: the convention voted for ratification by a slender majority. The Federalist victory was tempered by instructions to Jay to prepare a circular letter to all the states seeking a second constitutional convention. Though some Federalists feared that this device would create trouble, its effect was dissipated by the general goodwill apparent in the winter of 1788/1789.
In the interim period Jay continued to serve as foreign secretary to the expiring Continental Congress, more as a caretaker than a policy maker. American relations with France had remained generally on an excellent footing, but Jay's policy toward the Barbary pirates was ineffective. Jay served as acting secretary of state until Thomas Jefferson returned from France and assumed the office in March 1790. Meanwhile, George Washington had prevailed on Jay to accept the position of chief justice of the Supreme Court. Jay held this office until 1796 and presided over several fundamental cases.
While still chief justice, Jay undertook negotiations to end Anglo-American differences stemming from irritating events that had followed their 1783 peace treaty. Known to history as Jay's Treaty, the new document bore Jay's signature, but it was chiefly the work of Alexander Hamilton, whose advice and information leaks allowed the British diplomats to move confidently. Jay became a special envoy at Washington's request. He left for England in 1794 and signed a treaty with Lord Grenville that gained a British promise to evacuate western posts and negotiate boundaries but made considerable concessions to British creditors and to the British concept of neutrality. France interpreted the treaty as a direct rebuff, and its hostile reception in America strengthened the rising opposition to Washington's government by followers of Thomas Jefferson. The treaty was ratified by the Senate after a stormy debate.
Meanwhile, Jay had been elected governor of New York. Four years earlier Jay had won the popular vote for governor, but a legislative board had nullified his election. His victory in 1795 was clear-cut, however, and Jay gave up his Court position to serve in his last public office. His administration (1795-1801) was conservative and consolidating, marked by a refusal in 1800 to rig an election at Hamilton's suggestion. After two terms Jay announced his retirement and declined the offer to resume his old place on the Supreme Court. Within a year after his long-delayed return to Bedford, N.Y., Jay's wife, the mother of his seven children, died. But for this, Jay's long retreat from public life bore out his repeated expectations of a pleasant "domestic life in rural leisure passed." He died on May 17, 1829, at Bedford.
Frank Monaghan, John Jay (1935), is readable but uncritical. A good short account is in Samuel Flagg Bemis, ed., The American Secretaries of State, vol. 1 (1927). Also valuable is Bemis's Jay's Treaty (1923; rev. ed. 1962). See also Henry P. Johnston, ed., Correspondence and Public Papers of John Jay (4 vols., 1890-1893).
Johnson, Herbert Alan, John Jay, colonial lawyer, New York: Garland Pub., 1989.
McLean, Jennifer P., The Jays of Bedford: the story of five generations of the Jay family who lived in the John Jay Homestead, Katonah, N.Y.: Friends of John Jay Homestead, 1984.
Pellew, George, John Jay, New York: Chelsea House, 1980.
Jay, John (1745-1829)
John Jay (1745-1829)
First chief justice of the Supreme Court
Early Years. John Jay was born on 12 December 1745 in New York City. The son of a prosperous merchant family and nephew of a judge, Jay benefited from a solid and well-rounded education. He graduated from King's College (now Columbia University) in 1764, fluent in French, Greek, and Latin. Jay began his apprenticeship in the law in 1764, serving as clerk to Benjamin Kissam, and soon became known for his quickness of mind and the strength of his reasoning. After being licensed to practice law on 26 October 1768, he began a partnership with Robert Livingston, a friend since their college days. Jay and Livingston built a preeminent New York law firm, taking on all manner of cases and earning important reputations.
Public Service. Jay’s public career began in 1774 as a delegate to the First Continental Congress. There followed afterward a virtual explosion of public service. In 1775 he attended the Second Continental Congress and served on the New York Provincial Congress. The following year Jay collaborated with Gouverneur Morris and William Duer to draft a new state constitution for New York. Jay served as chief justice of New York’s Supreme Court from 1777 to 1779 and in 1778 served as president of the Continental Congress. He was sent to Paris in 1782, along with John Adams and Benjamin Franklin, to negotiate a peace treaty with England. Upon his return to America in 1784 he was named Secretary of Foreign Affairs for the United States under the Articles of Confederation.
Staunch Federalist. The significant question of those first years of independence was whether the former colonies, now loosely connected by the unsatisfactory Articles of Confederation, should adopt a new constitution in order to “form a more perfect union.” Jay joined with James Madison and Alexander Hamilton to write The Federalist (1788), a series of newspaper essays that addressed the question. Overcome by illness in the fall of 1787, Jay wrote only five of the eighty-five papers—numbers 2 through 5 and 64. Nevertheless, he penned one of the most memorable lines of the series. In essay number 2 he wrote “This country and this people seem to have been made for each other.”
Supreme Court. Jay’s contributions to the formation and development of the new nation, and his renown as a lawyer, made him a clear candidate for selection to the Supreme Court. President George Washington, whom Livingston and others had lobbied for the post of chief justice, turned instead to Jay for this high honor. In his letter of appointment to Jay, Washington wrote: “In nominating you for the important station which you now fill, I not only acted in conformity to my best judgment, but I trust I did a grateful thing to the good citizens of these United States.” Jay accepted Washington’s appointment and was quickly confirmed by the Senate in late 1789. He joined with Associate Justices William Cushing and James Wilson for the first meeting of the Supreme Court in New York City on 1 February 1790.
Tenure. Jay’s service as America’s first chief justice was largely unremarkable. Few cases of any importance came before the Supreme Court during his tenure. Much time and energy went into the grueling requirement that the justices “ride the circuit,” that is, travel throughout a designated region to hold court in places not easily accessible. Jay’s circuit assignment required him to travel throughout New York and New England, a challenging task in a time when roads were either poor or nonexistent. Perhaps Jay’s most important contribution as chief justice was his firm but polite refusal to advise President Washington and Treasury Secretary Alexander Hamilton on questions of public policy. Jay’s refusal affirmed the separation of powers.
Test Case. The most significant decision to come before the Jay Court, and the first great case to be decided by that body, occurred in 1793. Chisholm v. Georgia raised important issues of state sovereignty. The question to be decided by the Court was whether a citizen of another state could sue the State of Georgia in federal court. Jay and the Court (with the exception of Justice James Iredell) said yes, that Georgia had abandoned its sovereignty when it joined the Union and thus could be sued. This first expression of federal primacy caused a stir throughout the states and prompted congressional reversal through the adoption of the Eleventh Amendment. In a less celebrated case Jay wrote the Court’s opinion in Glass v. The Sloop Betsey (1794), where the question was whether foreign consuls or U.S. courts had authority over captured vessels brought to American ports by foreign ships. Jay struck an important blow for American sovereignty when he held that foreign consuls had no admiralty jurisdiction in the United States.
Treaty. Washington sent Jay to England in 1794 to negotiate several matters still outstanding between the new nation and the old mother country. Antagonism was particularly strong over British trade restrictions in the Caribbean and boundary lines in the Northwest. Jay’s Treaty was roundly criticized by many Americans who believed he had given too much away. The most notorious item was Jay’s agreement that American molasses, sugar, cotton, and coffee would not be shipped to Europe. The Senate adopted the treaty in the summer of 1795, but without the offending trade restrictions. That same year Jay resigned as chief justice to become governor of New York, a post he held until 1801.
Reappointment. On 18 December 1800 President John Adams offered Jay reappointment as chief justice to replace Oliver Ellsworth. In his letter to Jay, President Adams urged him to accept the position for a second time in order to maintain a Federalist point of view at the highest levels of government. The “firmest security we can have against the effects of visionary schemes … will be in a solid judiciary,” wrote Adams, “and nothing will cheer the hopes of the best men so much as your acceptance of this appointment.” Jay declined the honor. The rigors of riding the circuit and the relative lack of consequence of court proceedings up to that time made the post singularly unattractive, and Jay retired from public service. He died in New York on 17 May 1829.
Leon Friedman and Fred L. Israel, The Justices of the United States Supreme Court: Their Lives and Major Opinions (New York: Chelsea House, 1969);
Frank Monaghan, John Jay (New York: AMS Press, 1935).
JAY, JOHN. (1745–1829). Statesman, diplomat. New York. Born in New York City on 12 December 1745, Jay graduated from King's College (now Columbia) in 1764, was admitted four years later to the bar, and became a successful New York City lawyer. Marriage in 1774 to Sarah, daughter of William Livingston of New Jersey, further extended his family connections. When the Revolution started he supported the Patriot cause, although with moderation. He became a member of the New York City Committee of Correspondence and served in the First and Second Continental Congresses. Although he was opposed to independence in the beginning, and had returned to serve in the New York provincial congress when the Declaration of Independence came up for a vote, he nevertheless became ardent in his dedication to the new United States. He helped to get cannon for General George Washington's army, set up a spy ring, and chaired the committee dedicated to battling Loyalists in New York. He guided the formulation of the 1777 state constitution, and served as Chief Justice of New York from 3 May of that year until 1779. Re-elected to Congress in December 1778, he became president of that body on the 10th and held this post until he was named minister to Spain, on 27 September 1779. Meanwhile, he had been elected colonel of the state militia in 1775, but had no military service in the field.
Spain's attitude toward the American Revolution was such that Jay had no chance of getting that country's recognition of the United States, even though it had declared war on Britain. Arriving at Cadiz with his wife on 22 January 1780 and remaining in the country two years, Jay accomplished little more than raising a small loan and getting the Spanish to keep up their secret assistance in war supplies. On 23 June 1782 Jay reached Paris to take part in the Peace Negotiations. He shared John Adams's suspicion of Charles Gravier, Comte de Vergennes, and helped Adams convince Benjamin Franklin to sign preliminary articles of peace with the British without awaiting French concurrence.
On 24 July 1784 Jay reached New York, having declined the post of minister to London, and found he had been drafted for the post of Secretary of Foreign Affairs. Jay held this post until Thomas Jefferson became the first Secretary of State, on 22 March 1790. His most vexatious problems during this period stemmed from British and Spanish refusal to withdraw their garrisons from territory claimed by the United States. The impotence of the American Confederation weakened Jay's hand, and he became one of the strongest advocates of a strong federal government. He wrote five of the Federalist Papers, blaming ill health for keeping him from contributing more.
Becoming the first Chief Justice of the United States in late 1789 (but serving as ad interim Secretary of State until Jefferson arrived to be sworn in on 22 March 1790), he sat for the first five years, during which the Supreme Court's procedures were formed. While Chief Justice he was sent in the summer of 1794 to arrange a peaceful settlement of controversies with Great Britain that threatened war, leading to the politically divisive Jay's Treaty.
Jay had been defeated by George Clinton in 1792 for the governorship of New York, even though Jay got more votes. He returned from England in 1795 to find himself elected, and he served six years (two terms). His administration was conservative and upright, but no great issues arose to challenge it.
Rising Republican strength made Jay's prospects for re-election as governor doubtful, and he declined to run for another term. His mind set on retirement, he also refused Adams's offer of reappointment as Chief Justice. Jay spent his last twenty-eight years in complete retirement on his 800-acre property at Bedford, Westchester County, New York, where he died on 17 May 1829.
Johnston, Henry P., ed. Correspondence and Public Papers of John Jay. 4 vols. New York: G.P. Putnam's Sons, 1890–1893.
Monaghan, Frank. John Jay: Defender of Liberty. New York, Indianapolis, Ind.: Bobbs-Merrill, 1935.
Morris, Richard B., ed. John Jay. The Making of a Revolutionary: Unpublished Papers, 1745–1780. New York: Harper & Row, 1975.
――――――, ed. John Jay, The Winning of the Peace: Unpublished Papers, 1780–1784. New York: Harper & Row, 1980.
――――――. The Peacemakers: The Great Powers and American Independence. New York: Harper & Row, 1965.
revised by Michael Bellesiles
John Jay was a politician, statesman, and the first chief justice of the Supreme Court. He was one of the authors of The Federalist, a collection of influential papers written with James Madison and Alexander Hamilton prior to the ratification of the Constitution.
Jay was born in New York City on December 12, 1745. Unlike most of the colonists in the New World, who were English, Jay traced his ancestry to the French Huguenots. His grandfather, August Jay, immigrated to New York in the late seventeenth century to escape the persecution of non-Catholics under Louis XIV. Jay graduated from King's College, now known as Columbia University, in 1764. He was admitted to the bar in New York City in 1768.
One of Jay's earliest achievements was his participation in the settlement of the boundary line between New York and New Jersey in 1773. During the time preceding the Revolutionary War, Jay actively protested against British treatment of the colonies but did not fully advocate independence until 1776, when the Declaration of Independence was created. Jay then supported independence wholeheartedly. He was a member of the Continental Congress from 1774 to 1779, acting as its president from 1778 to 1779.
In 1776, Jay was a member of the Provincial Congress of New York and was instrumental in the formation of the constitution of that state. From 1776 to 1778, he performed the duties of New York chief justice.
"A distinctive character of the National Government, the mark of its legitimacy, is that it owes its existence to the act of the whole people who created it."
Jay next embarked on a foreign service career. His first appointment, in 1779, was as minister plenipotentiary to Spain, where he succeeded in gaining financial assistance for the colonies.
In 1782, Jay joined Benjamin Franklin in Paris for a series of peace negotiations with Great Britain. In 1784, Jay became secretary of foreign affairs and performed these duties until 1789. During his term, Jay participated in the arbitration of various international disputes.
Jay recognized the limitations of his powers in foreign service under the existing government of the Articles of Confederation, and this made him a strong supporter of the Constitution. He publicly displayed his views in the five papers he composed for The Federalist in 1787 and 1788. Jay argued for ratification of the Constitution and the creation of a strong federal government.
In 1789, Jay earned the distinction of becoming the first chief justice of the United States. During his term, which lasted until 1795, Jay rendered a decision in Chisholm v. Georgia, 2 U.S. (2 Dall.) 419, 1 L.Ed. 440 (1793), which subsequently led to the enactment of the Eleventh Amendment to the Constitution. This 1793 case involved the ability of inhabitants of one state to sue another state. The Supreme Court recognized this right but, in response, Congress passed the Eleventh Amendment, which bars suits against a state in federal court by citizens of another state.
During Jay's tenure on the Supreme Court, he was again called upon to act in foreign service. In 1794 he negotiated a treaty with Great Britain known as Jay's Treaty. This agreement regulated commerce and navigation and settled many outstanding disputes between the United States and Great Britain. The treaty, under which disputes were resolved before an international commission, was the origin of modern international arbitration.
In 1795 Jay was elected governor of New York. He served two terms, until 1801, at which time he retired.
He died May 17, 1829.
Bernstein, R.B. 1996. "Documentary Editing and the Jay Court: Opening New Lines of Inquiry." Journal of Supreme Court History (annual): 17–22.
——. 1996. "John Jay, Judicial Independence, and Advising Coordinate Branches." Journal of Supreme Court History (annual): 23–29.
Jay, William. 1833. The Life of John Jay. New York: Harper.
Monaghan, Frank. 1935. John Jay: Defender of Liberty. New York: Bobbs-Merrill.
Morris, Richard B., ed. 1985. Witnesses at the Creation: Hamilton, Madison, Jay, and the Constitution. New York: Holt, Rinehart & Winston.
——. 1975. John Jay: The Making of a Revolutionary. New York: Harper & Row.
Pellew, George. 1997. John Jay. Broomall, Pa.: Chelsea House.
Rossiter, Clinton Lawrence. 1964. Alexander Hamilton and the Constitution. New York: Harcourt, Brace & World.
Air quality index
An air quality index (AQI) is a number used by government agencies to communicate to the public how polluted the air currently is or how polluted it is forecast to become. As the AQI increases, an increasingly large percentage of the population is likely to experience increasingly severe adverse health effects. Different countries have their own air quality indices, corresponding to different national air quality standards. Some of these are the Air Quality Health Index (Canada), the Air Pollution Index (Malaysia), and the Pollutant Standards Index (Singapore).
Definition and usage
Computation of the AQI requires an air pollutant concentration over a specified averaging period, obtained from an air monitor or model. Taken together, concentration and time represent the dose of the air pollutant. Health effects corresponding to a given dose are established by epidemiological research. Air pollutants vary in potency, and the function used to convert from air pollutant concentration to AQI varies by pollutant. Air quality index values are typically grouped into ranges. Each range is assigned a descriptor, a color code, and a standardized public health advisory.
The AQI can increase due to an increase of air emissions (for example, during rush hour traffic or when there is an upwind forest fire) or from a lack of dilution of air pollutants. Stagnant air, often caused by an anticyclone, temperature inversion, or low wind speeds, lets air pollution remain in a local area, leading to high concentrations of pollutants, chemical reactions between air contaminants, and hazy conditions.
On a day when the AQI is predicted to be elevated due to fine particle pollution, an agency or public health organization might:
- advise sensitive groups, such as the elderly, children, and those with respiratory or cardiovascular problems, to avoid outdoor exertion.
- declare an "action day" to encourage voluntary measures to reduce air emissions, such as using public transportation.
- recommend the use of masks to keep fine particles from entering the lungs.
During a period of very poor air quality, such as an air pollution episode, when the AQI indicates that acute exposure may cause significant harm to the public health, agencies may invoke emergency plans that allow them to order major emitters (such as coal burning industries) to curtail emissions until the hazardous conditions abate.
Most air contaminants do not have an associated AQI. Many countries monitor ground-level ozone, particulates, sulfur dioxide, carbon monoxide and nitrogen dioxide, and calculate air quality indices for these pollutants.
The definition of the AQI in a particular nation reflects the discourse surrounding the development of national air quality standards in that nation. A website allowing government agencies anywhere in the world to submit their real-time air monitoring data for display using a common definition of the air quality index has recently become available.
Indices by locationEdit
Air quality in Canada has been reported for many years with provincial Air Quality Indices (AQIs). Significantly, AQI values reflect air quality management objectives, which are based on the lowest achievable emissions rate, and not exclusively concern for human health. The Air Quality Health Index (AQHI) is a scale designed to help understand the impact of air quality on health. It is a health protection tool used to make decisions to reduce short-term exposure to air pollution by adjusting activity levels during increased levels of air pollution. The Air Quality Health Index also provides advice on how to improve air quality by proposing behavioural change to reduce the environmental footprint. This index pays particular attention to people who are sensitive to air pollution. It provides them with advice on how to protect their health during air quality levels associated with low, moderate, high and very high health risks.
The Air Quality Health Index provides a number from 1 to 10+ to indicate the level of health risk associated with local air quality. On occasion, when the amount of air pollution is abnormally high, the number may exceed 10. The AQHI provides a current local air quality value as well as a forecast of local air quality maximums for today, tonight, and tomorrow, and provides associated health advice.
|Risk:||Low (1–3)||Moderate (4–6)||High (7–10)||Very high (above 10)|
|Health Risk||Air Quality Health Index||Health Messages for At-Risk Population||Health Messages for General Population|
|Low||1–3||Enjoy your usual outdoor activities.||Ideal air quality for outdoor activities|
|Moderate||4–6||Consider reducing or rescheduling strenuous activities outdoors if you are experiencing symptoms.||No need to modify your usual outdoor activities unless you experience symptoms such as coughing and throat irritation.|
|High||7–10||Reduce or reschedule strenuous activities outdoors. Children and the elderly should also take it easy.||Consider reducing or rescheduling strenuous activities outdoors if you experience symptoms such as coughing and throat irritation.|
|Very high||Above 10||Avoid strenuous activities outdoors. Children and the elderly should also avoid outdoor physical exertion.||Reduce or reschedule strenuous activities outdoors, especially if you experience symptoms such as coughing and throat irritation.|
On 30 December 2013, Hong Kong replaced the Air Pollution Index with a new index called the Air Quality Health Index. This index is on a scale of 1 to 10+ and considers four air pollutants: ozone, nitrogen dioxide, sulphur dioxide, and particulate matter (including PM10 and PM2.5). For any given hour the AQHI is calculated from the sum of the percentage excess risk of daily hospital admissions attributable to the 3-hour moving average concentrations of these four pollutants; a simplified numerical sketch of this calculation is given below. The AQHIs are grouped into five AQHI health risk categories with health advice provided:
|Health risk category||AQHI|
Each of the health risk categories has advice associated with it. At the low and moderate levels the public are advised that they can continue normal activities. For the high category, children, the elderly and people with heart or respiratory illnesses are advised to reduce outdoor physical exertion. Above this (very high or serious) the general public are also advised to reduce or avoid outdoor physical exertion.
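The structure of this calculation can be sketched in a few lines of code. The risk coefficients and band cut-points below are round placeholder numbers chosen for illustration, not the values published by the Hong Kong Environmental Protection Department, and particulate matter is treated as a single pollutant for simplicity:

```python
import math

# Hypothetical excess-risk coefficients (per ug/m3 of 3-hour average concentration).
# These are placeholders, NOT the published Hong Kong values.
BETA = {"NO2": 0.0004, "SO2": 0.0003, "O3": 0.0005, "PM": 0.0006}

# Hypothetical cut-points mapping total percentage excess risk (%AR) to bands 1-10.
CUTPOINTS = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]

def aqhi(conc_3h_avg):
    """Sum each pollutant's percentage excess risk of hospital admissions,
    then map the total to an AQHI band of 1 to 10+."""
    total_ar = sum(100.0 * (math.exp(BETA[p] * c) - 1.0)
                   for p, c in conc_3h_avg.items())
    for band, cut in enumerate(CUTPOINTS, start=1):
        if total_ar <= cut:
            return band
    return "10+"

# 3-hour moving average concentrations in ug/m3 (illustrative values).
print(aqhi({"NO2": 80.0, "SO2": 15.0, "O3": 90.0, "PM": 60.0}))
```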
China's Ministry of Environmental Protection (MEP) is responsible for measuring the level of air pollution in China. As of 1 January 2013, MEP monitors daily pollution levels in 163 of its major cities. The Air Quality Index (AQI) level is based on the levels of six atmospheric pollutants, namely sulfur dioxide (SO2), nitrogen dioxide (NO2), suspended particulates smaller than 10 μm in aerodynamic diameter (PM10), suspended particulates smaller than 2.5 μm in aerodynamic diameter (PM2.5), carbon monoxide (CO), and ozone (O3), measured at the monitoring stations throughout each city.
An individual score (IAQI) is assigned to the level of each pollutant and the final AQI is the highest of those six scores. The pollutants are measured over different averaging periods: PM2.5 and PM10 concentrations are measured as 24-hour averages, while SO2, NO2, O3, and CO are measured as hourly averages. The final AQI value is calculated per hour according to a formula published by the MEP.
The scale for each pollutant is non-linear, as is the final AQI score. Thus an AQI of 100 does not mean twice the pollution of an AQI of 50, nor does it mean twice as harmful. While an AQI of 50 from day 1 to 182 and an AQI of 100 from day 183 to 365 does give an annual average of 75, it does not mean the pollution is acceptable even if the benchmark of 100 is deemed safe, because the benchmark is a 24-hour target. The annual average must be compared against the annual target. It is entirely possible to have safe air every day of the year but still fail the annual pollution benchmark.
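As a minimal sketch of this composition rule (not the MEP's official implementation), the snippet below takes invented individual sub-index (IAQI) values for the six pollutants and reports the highest one as the AQI, along with the pollutant responsible for it:

```python
# Invented individual air quality indices (IAQI) for a single reporting period.
iaqi = {"SO2": 23, "NO2": 51, "PM10": 88, "PM2.5": 134, "CO": 12, "O3": 60}

# The reported AQI is the worst (highest) sub-index; the pollutant producing it
# is reported as the primary pollutant.
primary_pollutant, aqi = max(iaqi.items(), key=lambda item: item[1])
print(f"AQI = {aqi}, primary pollutant = {primary_pollutant}")
# -> AQI = 134, primary pollutant = PM2.5
```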
AQI and Health Implications (HJ 663-2012)
|0–50||Excellent||No health implications.|
|51–100||Good||Few hypersensitive individuals should reduce outdoor exercise.|
|101–150||Lightly Polluted||Slight irritations may occur, individuals with breathing or heart problems should reduce outdoor exercise.|
|151–200||Moderately Polluted||Slight irritations may occur, individuals with breathing or heart problems should reduce outdoor exercise.|
|201–300||Heavily Polluted||Healthy people will be noticeably affected. People with breathing or heart problems will experience reduced endurance in activities. These individuals and elders should remain indoors and restrict activities.|
|300+||Severely Polluted||Healthy people will experience reduced endurance in activities. There may be strong irritations and symptoms and may trigger other illnesses. Elders and the sick should remain indoors and avoid exercise. Healthy individuals should avoid outdoor activities.|
The Minister for Environment, Forests & Climate Change, Shri Prakash Javadekar, launched the National Air Quality Index (AQI) in New Delhi on 17 September 2014 under the Swachh Bharat Abhiyan. It is outlined as ‘One Number - One Colour - One Description’ so that the common man can judge the air quality in his vicinity. The index forms part of the Government’s mission to introduce a culture of cleanliness. Institutional and infrastructural measures are being undertaken to ensure that the mandate of cleanliness is fulfilled across the country, and the Ministry of Environment, Forests & Climate Change has proposed to discuss air quality issues with the Ministry of Human Resource Development so that they can be included in the sensitisation programme in the course curriculum.
While the earlier index was limited to three indicators, the current index has been made more comprehensive by the addition of five additional parameters, for a total of eight. The Ministry's recent initiatives aim to balance environmental conservation and development, as air pollution has been a matter of environmental and health concern, particularly in urban areas.
The Central Pollution Control Board, along with State Pollution Control Boards, has been operating the National Air Monitoring Program (NAMP), covering 240 cities of the country with more than 342 monitoring stations. In addition, continuous monitoring systems that provide data on a near real-time basis are installed in a few cities. They provide information on air quality in the public domain in simple terms that are easily understood by a common person. The Air Quality Index (AQI) is one such tool for effective dissemination of air quality information to people. An Expert Group comprising medical professionals, air quality experts, academia, advocacy groups, and SPCBs was therefore constituted, and a technical study was awarded to IIT Kanpur. IIT Kanpur and the Expert Group recommended an AQI scheme in 2014.
There are six AQI categories, namely Good, Satisfactory, Moderately polluted, Poor, Very Poor, and Severe. The proposed AQI considers eight pollutants (PM10, PM2.5, NO2, SO2, CO, O3, NH3, and Pb) for which short-term (up to 24-hourly averaging period) National Ambient Air Quality Standards are prescribed. Based on the measured ambient concentrations, the corresponding standards and the likely health impact, a sub-index is calculated for each of these pollutants, and the worst sub-index determines the overall AQI. Likely health impacts for the different AQI categories and pollutants have also been suggested, with primary inputs from the medical expert members of the group. The AQI values and corresponding ambient concentrations (health breakpoints), as well as the associated likely health impacts for the identified eight pollutants, are as follows:
|AQI Category (Range)||PM10 (µg/m³, 24hr)||PM2.5 (µg/m³, 24hr)||NO2 (µg/m³, 24hr)||O3 (µg/m³, 8hr)||CO (mg/m³, 8hr)||SO2 (µg/m³, 24hr)||NH3 (µg/m³, 24hr)||Pb (µg/m³, 24hr)|
|Moderately polluted (101-200)||101-250||61-90||81-180||101-168||2.1-10||81-380||401-800||1.1-2.0|
|Very poor (301-400)||351-430||121-250||281-400||209-748||17-34||801-1600||1200-1800||3.1-3.5|
|AQI||Associated Health Impacts|
|Good (0-50)||Minimal impact|
|Satisfactory (51-100)||May cause minor breathing discomfort to sensitive people.|
|Moderately polluted (101–200)||May cause breathing discomfort to people with lung disease such as asthma, and discomfort to people with heart disease, children and older adults.|
|Poor (201-300)||May cause breathing discomfort to people on prolonged exposure, and discomfort to people with heart disease.|
|Very poor (301-400)||May cause respiratory illness to the people on prolonged exposure. Effect may be more pronounced in people with lung and heart diseases.|
|Severe (401-500)||May cause respiratory impact even on healthy people, and serious health impacts on people with lung/heart disease. The health impacts may be experienced even during light physical activity.|
The air quality in Mexico City is reported in IMECAs. The IMECA is calculated using time-averaged measurements of ozone (O3), sulphur dioxide (SO2), nitrogen dioxide (NO2), carbon monoxide (CO), particles smaller than 2.5 micrometers (PM2.5), and particles smaller than 10 micrometers (PM10).
Singapore uses the Pollutant Standards Index to report on its air quality, with details of the calculation similar but not identical to those used in Malaysia and Hong Kong. The PSI chart below is grouped by index values and descriptors, according to the National Environment Agency.
|PSI||Descriptor||General Health Effects|
|51–100||Moderate||Few or none for the general population|
|101–200||Unhealthy||Mild aggravation of symptoms among susceptible persons i.e. those with underlying conditions such as chronic heart or lung ailments; transient symptoms of irritation e.g. eye irritation, sneezing or coughing in some of the healthy population.|
|201–300||Very Unhealthy||Moderate aggravation of symptoms and decreased tolerance in persons with heart or lung disease; more widespread symptoms of transient irritation in the healthy population.|
|301–400||Hazardous||Early onset of certain diseases in addition to significant aggravation of symptoms in susceptible persons; and decreased exercise tolerance in healthy persons.|
|Above 400||Hazardous||PSI levels above 400 may be life-threatening to ill and elderly persons. Healthy people may experience adverse symptoms that affect normal activity.|
The Ministry of Environment of South Korea uses the Comprehensive Air-quality Index (CAI) to describe the ambient air quality based on the health risks of air pollution. The index aims to help the public easily understand the air quality and protect people's health. The CAI is on a scale from 0 to 500, which is divided into six categories. The higher the CAI value, the greater the level of air pollution. The CAI value is the highest of the sub-index values of the five air pollutants. The index also has associated health effects and a colour representation of the categories, as shown below.
|0–50||Good||A level that will not impact patients suffering from diseases related to air pollution.|
|51–100||Moderate||A level that may have a meager impact on patients in case of chronic exposure.|
|101–150||Unhealthy for sensitive groups||A level that may have harmful impacts on patients and members of sensitive groups.|
|151–250||Unhealthy||A level that may have harmful impacts on patients and members of sensitive groups (children, aged or weak people), and also cause the general public unpleasant feelings.|
|251–500||Very unhealthy||A level that may have a serious impact on patients and members of sensitive groups in case of acute exposure.|
The N Seoul Tower on Namsan Mountain in central Seoul, South Korea, is illuminated in blue from sunset to 23:00 (22:00 in winter) on days when the air quality index in Seoul is 45 or less. During the spring of 2012, the Tower was lit up for 52 days, four days more than in 2011.
The most commonly used air quality index in the UK is the Daily Air Quality Index recommended by the Committee on Medical Effects of Air Pollutants (COMEAP). This index has ten points, which are further grouped into 4 bands: low, moderate, high and very high. Each of the bands comes with advice for at-risk groups and the general population.
|Air pollution banding||Value||Health messages for At-risk individuals||Health messages for General population|
|Low||1–3||Enjoy your usual outdoor activities.||Enjoy your usual outdoor activities.|
|Moderate||4–6||Adults and children with lung problems, and adults with heart problems, who experience symptoms, should consider reducing strenuous physical activity, particularly outdoors.||Enjoy your usual outdoor activities.|
|High||7–9||Adults and children with lung problems, and adults with heart problems, should reduce strenuous physical exertion, particularly outdoors, and particularly if they experience symptoms. People with asthma may find they need to use their reliever inhaler more often. Older people should also reduce physical exertion.||Anyone experiencing discomfort such as sore eyes, cough or sore throat should consider reducing activity, particularly outdoors.|
|Very High||10||Adults and children with lung problems, adults with heart problems, and older people, should avoid strenuous physical activity. People with asthma may find they need to use their reliever inhaler more often.||Reduce physical exertion, particularly outdoors, especially if you experience symptoms such as cough or sore throat.|
The index is based on the concentrations of five pollutants: ozone, nitrogen dioxide, sulphur dioxide, PM2.5 (particles with an aerodynamic diameter less than 2.5 μm), and PM10. The breakpoints between index values are defined for each pollutant separately, and the overall index is the maximum of the individual pollutant indices. Different averaging periods are used for different pollutants.
|Index||Ozone, Running 8 hourly mean (μg/m3)||Nitrogen Dioxide, Hourly mean (μg/m3)||Sulphur Dioxide, 15 minute mean (μg/m3)||PM2.5 Particles, 24 hour mean (μg/m3)||PM10 Particles, 24 hour mean (μg/m3)|
|10||≥ 241||≥ 601||≥ 1065||≥ 71||≥ 101|
To present the air quality situation in European cities in a comparable and easily understandable way, all detailed measurements are transformed into a single relative figure: the Common Air Quality Index (CAQI). Three different indices have been developed by Citeair to enable comparison across three different time scales:
- An hourly index, which describes the air quality today, based on hourly values and updated every hour;
- A daily index, which stands for the general air quality situation of yesterday, based on daily values and updated once a day;
- An annual index, which represents the city's general air quality conditions throughout the year compared to European air quality norms. This index is based on the pollutants' year averages compared to annual limit values, and is updated once a year.
However, the proposed indices and the supporting common web site www.airqualitynow.eu are designed to give a dynamic picture of the air quality situation in each city, not to support compliance checking.
The hourly and daily common indices
These indices have 5 levels, on a scale from 0 (very low) to > 100 (very high); they are a relative measure of the amount of air pollution. They are based on 3 pollutants of major concern in Europe (PM10, NO2, O3) and can take into account up to 3 additional pollutants (CO, PM2.5 and SO2) where data are also available.
The calculation of the index is based on a review of a number of existing air quality indices, and it reflects EU alert threshold levels or daily limit values as much as possible. In order to make cities more comparable, independent of the nature of their monitoring networks, two situations are defined:
- Background, representing the general situation of the given agglomeration (based on urban background monitoring sites),
- Roadside, representative of city streets with a lot of traffic (based on roadside monitoring stations).
The index values are updated hourly (for those cities that supply hourly data) and yesterday's daily indices are presented.
Common air quality index legend:
The common annual air quality index
The common annual air quality index provides a general overview of the air quality situation in a given city throughout the year, with regard to the European norms.
It is also calculated for both background and traffic conditions, but its principle of calculation is different from that of the hourly and daily indices. It is presented as a distance to a target index, the target being derived from the EU directives (annual air quality standards and objectives):
- If the index is higher than 1: for one or more pollutants the limit values are not met.
- If the index is below 1: on average the limit values are met.
The annual index is intended to better account for long-term exposure to air pollution, expressed as a distance to the target set by the EU annual norms; those norms are in most cases linked to the health-protection recommendations of the World Health Organisation.
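The distance-to-target idea can be illustrated as follows. The annual limit values used here (40 µg/m³ for NO2 and PM10, 25 µg/m³ for PM2.5) are taken from EU Directive 2008/50/EC, but the aggregation of the per-pollutant ratios into a single city figure is simplified to a plain maximum, which may differ in detail from the published CAQI:

```python
# EU annual limit values in ug/m3 (Directive 2008/50/EC) for a few pollutants.
ANNUAL_LIMIT = {"NO2": 40.0, "PM10": 40.0, "PM2.5": 25.0}

def annual_caqi(annual_means):
    """Express each pollutant's annual mean as a distance to its EU target:
    a ratio above 1 means the annual limit value is not met."""
    ratios = {p: round(annual_means[p] / ANNUAL_LIMIT[p], 2) for p in annual_means}
    # Simplification: report the worst ratio as the city's annual index.
    return max(ratios.values()), ratios

index, per_pollutant = annual_caqi({"NO2": 44.0, "PM10": 28.0, "PM2.5": 20.0})
print(index, per_pollutant)  # 1.1 {'NO2': 1.1, 'PM10': 0.7, 'PM2.5': 0.8} -> NO2 exceeds its limit
```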
The United States Environmental Protection Agency (EPA) has developed an Air Quality Index that is used to report air quality. This AQI is divided into six categories indicating increasing levels of health concern. An AQI value over 300 represents hazardous air quality, while a value of 50 or below represents good air quality.
The AQI is based on the five "criteria" pollutants regulated under the Clean Air Act: ground-level ozone, particulate matter, carbon monoxide, sulfur dioxide, and nitrogen dioxide. The EPA has established National Ambient Air Quality Standards (NAAQS) for each of these pollutants in order to protect public health. An AQI value of 100 generally corresponds to the level of the NAAQS for the pollutant. The Clean Air Act (USA) (1990) requires EPA to review its National Ambient Air Quality Standards every five years to reflect evolving health effects information. The Air Quality Index is adjusted periodically to reflect these changes.
Computing the AQI
The air quality index is a piecewise linear function of the pollutant concentration. At the boundary between AQI categories, there is a discontinuous jump of one AQI unit. To convert from concentration to AQI this equation is used:
$$ I = \frac{I_{high} - I_{low}}{C_{high} - C_{low}} \left( C - C_{low} \right) + I_{low} $$
where:
- $I$ = the (Air Quality) index,
- $C$ = the pollutant concentration,
- $C_{low}$ = the concentration breakpoint that is ≤ $C$,
- $C_{high}$ = the concentration breakpoint that is ≥ $C$,
- $I_{low}$ = the index breakpoint corresponding to $C_{low}$,
- $I_{high}$ = the index breakpoint corresponding to $C_{high}$.
|O3 (ppb)||O3 (ppb)||PM2.5 (µg/m3)||PM10 (µg/m3)||CO (ppm)||SO2 (ppb)||NO2 (ppb)||AQI||AQI|
|Clow - Chigh (avg)||Clow - Chigh (avg)||Clow- Chigh (avg)||Clow - Chigh (avg)||Clow - Chigh (avg)||Clow - Chigh (avg)||Clow - Chigh (avg)||Ilow - Ihigh||Category|
|0-54 (8-hr)||-||0.0-12.0 (24-hr)||0-54 (24-hr)||0.0-4.4 (8-hr)||0-35 (1-hr)||0-53 (1-hr)||0-50||Good|
|55-70 (8-hr)||-||12.1-35.4 (24-hr)||55-154 (24-hr)||4.5-9.4 (8-hr)||36-75 (1-hr)||54-100 (1-hr)||51-100||Moderate|
|71-85 (8-hr)||125-164 (1-hr)||35.5-55.4 (24-hr)||155-254 (24-hr)||9.5-12.4 (8-hr)||76-185 (1-hr)||101-360 (1-hr)||101-150||Unhealthy for Sensitive Groups|
|86-105 (8-hr)||165-204 (1-hr)||55.5-150.4 (24-hr)||255-354 (24-hr)||12.5-15.4 (8-hr)||186-304 (1-hr)||361-649 (1-hr)||151-200||Unhealthy|
|106-200 (8-hr)||205-404 (1-hr)||150.5-250.4 (24-hr)||355-424 (24-hr)||15.5-30.4 (8-hr)||305-604 (24-hr)||650-1249 (1-hr)||201-300||Very Unhealthy|
|-||405-504 (1-hr)||250.5-350.4 (24-hr)||425-504 (24-hr)||30.5-40.4 (8-hr)||605-804 (24-hr)||1250-1649 (1-hr)||301-400||Hazardous|
|-||505-604 (1-hr)||350.5-500.4 (24-hr)||505-604 (24-hr)||40.5-50.4 (8-hr)||805-1004 (24-hr)||1650-2049 (1-hr)||401-500||Hazardous|
Suppose a monitor records a 24-hour average fine particle (PM2.5) concentration of 12.0 micrograms per cubic meter. The bracketing breakpoints are C_low = 0.0, C_high = 12.0, I_low = 0 and I_high = 50, so the equation above results in an AQI of (50 − 0)/(12.0 − 0.0) × (12.0 − 0.0) + 0 = 50, corresponding to air quality at the top of the "Good" range. To convert an air pollutant concentration to an AQI, EPA has developed a calculator.
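The same interpolation can be coded directly from the PM2.5 breakpoints in the table above. This is a minimal sketch rather than EPA's reference implementation; among other details, it omits the truncation of reported concentrations that the official guidance applies before the lookup:

```python
# PM2.5 breakpoints (24-hour average, ug/m3) from the table above:
# (C_low, C_high, I_low, I_high)
PM25_BREAKPOINTS = [
    (0.0,    12.0,    0,  50),   # Good
    (12.1,   35.4,   51, 100),   # Moderate
    (35.5,   55.4,  101, 150),   # Unhealthy for Sensitive Groups
    (55.5,  150.4,  151, 200),   # Unhealthy
    (150.5, 250.4,  201, 300),   # Very Unhealthy
    (250.5, 350.4,  301, 400),   # Hazardous
    (350.5, 500.4,  401, 500),   # Hazardous
]

def pm25_aqi(concentration):
    """Linearly interpolate within the bracketing breakpoint, per the AQI equation."""
    for c_low, c_high, i_low, i_high in PM25_BREAKPOINTS:
        if c_low <= concentration <= c_high:
            return round((i_high - i_low) / (c_high - c_low)
                         * (concentration - c_low) + i_low)
    raise ValueError("concentration outside the defined AQI range")

print(pm25_aqi(12.0))   # 50, matching the worked example above
print(pm25_aqi(35.9))   # 102, in the "Unhealthy for Sensitive Groups" range
```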
If multiple pollutants are measured at a monitoring site, then the largest or "dominant" AQI value is reported for the location. The ozone AQI between 100 and 300 is computed by selecting the larger of the AQI calculated with a 1-hour ozone value and the AQI computed with the 8-hour ozone value.
8-hour ozone averages do not define AQI values greater than 300; AQI values of 301 or greater are calculated with 1-hour ozone concentrations. Similarly, 1-hour SO2 values do not define AQI values greater than 200; AQI values of 201 or greater are calculated with 24-hour SO2 concentrations.
Real time monitoring data from continuous monitors are typically available as 1-hour averages. However, computation of the AQI for some pollutants requires averaging over multiple hours of data. (For example, calculation of the ozone AQI requires computation of an 8-hour average and computation of the PM2.5 or PM10 AQI requires a 24-hour average.) To accurately reflect the current air quality, the multi-hour average used for the AQI computation should be centered on the current time, but as concentrations of future hours are unknown and are difficult to estimate accurately, EPA uses surrogate concentrations to estimate these multi-hour averages. For reporting the PM2.5, PM10 and ozone air quality indices, this surrogate concentration is called the NowCast. The Nowcast is a particular type of weighted average that provides more weight to the most recent air quality data when air pollution levels are changing.
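The sketch below illustrates this kind of recency-weighted average. It follows the commonly described PM NowCast recipe, in which the weight factor is the ratio of the minimum to the maximum concentration over the last 12 hours, floored at 0.5; it is an illustration of the idea, not EPA's authoritative algorithm, which also includes rules for missing hours and data completeness:

```python
def nowcast_pm(hourly_concentrations, floor=0.5):
    """Recency-weighted average of up to 12 hourly PM concentrations.
    hourly_concentrations[0] is the most recent hour. The weight factor is the
    min/max ratio over the window, floored at 0.5, so that rapidly changing
    conditions lean more heavily on the newest hours."""
    window = hourly_concentrations[:12]
    weight = max(min(window) / max(window), floor)
    numerator = sum((weight ** i) * c for i, c in enumerate(window))
    denominator = sum(weight ** i for i in range(len(window)))
    return numerator / denominator

# Example: pollution rising sharply over the last few hours (most recent first).
hours = [80, 75, 60, 40, 30, 25, 20, 18, 15, 14, 12, 10]
print(round(nowcast_pm(hours), 1))  # ~70, much closer to the recent hours than the plain mean (~33)
```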
Public Availability of the AQI
Real time monitoring data and forecasts of air quality that are color-coded in terms of the air quality index are available from EPA's AirNow web site. Historical air monitoring data including AQI charts and maps are available at EPA's AirData website.
History of the AQI
The AQI made its debut in 1968, when the National Air Pollution Control Administration undertook an initiative to develop an air quality index and to apply the methodology to Metropolitan Statistical Areas. The impetus was to draw public attention to the issue of air pollution and indirectly push responsible local public officials to take action to control sources of pollution and enhance air quality within their jurisdictions.
Jack Fensterstock, the head of the National Inventory of Air Pollution Emissions and Control Branch, was tasked to lead the development of the methodology and to compile the air quality and emissions data necessary to test and calibrate resultant indices.
The initial iteration of the air quality index used standardized ambient pollutant concentrations to yield individual pollutant indices. These indices were then weighted and summed to form a single total air quality index. The overall methodology could use concentrations that are taken from ambient monitoring data or are predicted by means of a diffusion model. The concentrations were then converted into a standard statistical distribution with a preset mean and standard deviation. The resultant individual pollutant indices are assumed to be equally weighted, although values other than unity can be used. Likewise, the index can incorporate any number of pollutants although it was only used to combine SOx, CO, and TSP because of a lack of available data for other pollutants.
While the methodology was designed to be robust, the practical application for all metropolitan areas proved to be inconsistent due to the paucity of ambient air quality monitoring data, lack of agreement on weighting factors, and non-uniformity of air quality standards across geographical and political boundaries. Despite these issues, the publication of lists ranking metropolitan areas achieved the public policy objectives and led to the future development of improved indices and their routine application.
- "International Air Quality". Retrieved 20 August 2015.
- National Weather Service Corporate Image Web Team. "NOAA's National Weather Service/Environmental Protection Agency - United States Air Quality Forecast Guidance". Retrieved 20 August 2015.
- "Step 2 - Dose-Response Assessment". Retrieved 20 August 2015.
- Myanmar government (2007). "Haze". Archived from the original on 27 January 2007. Retrieved 2007-02-11.
- "Air Quality Index - American Lung Association". American Lung Association. Archived from the original on 28 August 2015. Retrieved 20 August 2015.
- "Spare the Air - Summer Spare the Air". Retrieved 20 August 2015.
- "FAQ: Use of masks and availability of masks". Retrieved 20 August 2015.
- "Air Quality Index (AQI) - A Guide to Air Quality and Your Health". US EPA. 9 December 2011. Retrieved 8 August 2012.
- Jay Timmons (13 August 2014). "The EPA's Latest Threat to Economic Growth". WSJ. Retrieved 20 August 2015.
- "World Air Quality Index". Retrieved 20 August 2015.
- "Environment Canada - Air - AQHI categories and explanations". Ec.gc.ca. 2008-04-16. Retrieved 2011-11-11.
- Hsu, Angel. "China’s new Air Quality Index: How does it measure up?". Archived from the original on 17 July 2013. Retrieved 8 February 2014.
- "Air Quality Health Index". Government of the Hong Kong Special Administrative Region. Retrieved 9 February 2014.
- "Focus on urban air quality daily". Archived from the original on 2004-10-25.
- "People's Republic of China Ministry of Environmental Protection Standard: Technical Regulation on Ambient Air Quality Index (Chinese PDF)" (PDF).
- Rama Lakshmi (17 October 2014). "India launches its own Air Quality Index. Can its numbers be trusted?". Washington Post. Retrieved 20 August 2015.
- "National Air Quality Index (AQI) launched by the Environment Minister AQI is a huge initiative under ‘Swachh Bharat’". Retrieved 20 August 2015.
- "Ambient Air Quality Monitoring Stations". 2016-08-15. Retrieved 2016-08-16.
- "India launches index to measure air quality". timesofindia-economictimes. Retrieved 20 August 2015.
- "::: Central Pollution Control Board :::". Retrieved 20 August 2015.
- "Dirección de Monitoreo Atmosférico". www.aire.cdmx.gob.mx. Retrieved 2016-06-15.
- "MEWR - Key Environment Statistics - Clean Air". App.mewr.gov.sg. 2011-06-08. Archived from the original on 2011-10-09. Retrieved 2011-11-11.
- ."National Environment Agency - Calculation of PSI" (PDF). Archived from the original (PDF) on 2013-05-15. Retrieved 2012-06-15.
- "National Environment Agency". App2.nea.gov.sg. Archived from the original on 2011-11-25. Retrieved 2011-11-11.
- "What's CAI". Air Korea. Retrieved 25 October 2015.
- "Improved Air Quality Reflected in N Seoul Tower". Chosun Ilbo. 18 May 2012. Retrieved 29 July 2012.
- COMEAP. "Review of the UK Air Quality Index". COMEAP website.
- "Daily Air Quality Index". Air UK Website. Defra.
- Garcia, Javier; Colosio, Joëlle (2002). Air-quality indices : elaboration, uses and international comparisons. Presses des MINES. ISBN 2-911762-36-3.
- "Indices definition". Air quality. Retrieved 9 August 2012.
- David Mintz (February 2009). Technical Assistance Document for the Reporting of Daily Air Quality – the Air Quality Index (AQI) (PDF). North Carolina: US EPA Office of Air Quality Planning and Standards. EPA-454/B-09-001. Retrieved 9 August 2012.
- Revised Air Quality Standards For Particle Pollution And Updates To The Air Quality Index (AQI) (PDF). North Carolina: US EPA Office of Air Quality Planning and Standards. 2013.
- "AQI Calculator: Concentration to AQI". Retrieved 9 August 2012.
- "AirNow API Documentation". Retrieved 20 August 2015.
- "How are your ozone maps calculated?". Retrieved 20 August 2015.
- "AirNow". Retrieved 9 August 2012..
- "AirData - US Environmental Protection Agency". Retrieved 20 August 2015.
- J. C. Fensterstock et al., "The Development and Utilization of an Air Quality Index," Paper No. 69-73, presented at the 62nd Annual Meeting of the Air Pollution Control Association, June 1969.
- World Air Quality Index
- CAQI in Europe- AirqualityNow website
- CAI at Airkorea.or.kr - website of South Korea Environmental Management Corp.
- AQI at airnow.gov - cross-agency U.S. Government site
- New Mexico Air Quality and API data - Example of how New Mexico Environment Department publishes their Air Quality and API data.
- AQI at Meteorological Service of Canada
- The UK Air Quality Archive
- API at JAS (Malaysian Department of Environment)
- API at Hong Kong - Environmental Protection Department of the Government of the Hong Kong Special Administrative Region
- San Francisco Bay Area Spare-the-Air - AQI explanation
- Malaysia Air Pollution Index
- AQI in Thailand
- Unofficial PM2.5 AQI in Hanoi, Vietnam
Proton-exchange membrane fuel cell
Proton-exchange membrane fuel cells (PEMFC), also known as polymer electrolyte membrane (PEM) fuel cells, are a type of fuel cell being developed mainly for transport applications, as well as for stationary fuel-cell applications and portable fuel-cell applications. Their distinguishing features include lower temperature/pressure ranges (50 to 100 °C) and a special proton-conducting polymer electrolyte membrane. PEMFCs generate electricity and operate on the opposite principle to PEM electrolysis, which consumes electricity. They are a leading candidate to replace the aging alkaline fuel-cell technology, which was used in the Space Shuttle.
PEMFCs are built out of membrane electrode assemblies (MEA) which include the electrodes, electrolyte, catalyst, and gas diffusion layers. An ink of catalyst, carbon, and electrolyte is sprayed or painted onto the solid electrolyte, and carbon paper is hot-pressed on either side to protect the inside of the cell and also act as electrodes. The pivotal part of the cell is the triple phase boundary (TPB), where the electrolyte, catalyst, and reactants mix and thus where the cell reactions actually occur. Importantly, the membrane must not be electrically conductive so the half reactions do not mix. Operating temperatures above 100 °C are desired so that the water byproduct becomes steam and water management becomes less critical in cell design.
A proton exchange membrane fuel cell transforms the chemical energy liberated during the electrochemical reaction of hydrogen and oxygen to electrical energy, as opposed to the direct combustion of hydrogen and oxygen gases to produce thermal energy.
A stream of hydrogen is delivered to the anode side of the MEA. At the anode side it is catalytically split into protons and electrons. This oxidation half-cell reaction or hydrogen oxidation reaction (HOR) is represented by:
At the anode: H₂ → 2H⁺ + 2e⁻ (E° = 0 V)
The newly formed protons permeate through the polymer electrolyte membrane to the cathode side. The electrons travel along an external load circuit to the cathode side of the MEA, thus creating the current output of the fuel cell. Meanwhile, a stream of oxygen is delivered to the cathode side of the MEA. At the cathode side oxygen molecules react with the protons permeating through the polymer electrolyte membrane and the electrons arriving through the external circuit to form water molecules. This reduction half-cell reaction or oxygen reduction reaction (ORR) is represented by:
At the cathode: ½O₂ + 2H⁺ + 2e⁻ → H₂O (E° = +1.23 V)
The overall reversible reaction, H₂ + ½O₂ → H₂O, shows the recombination of the hydrogen protons and electrons with the oxygen molecule to form one water molecule, and corresponds to a standard cell potential of about 1.23 V. The potentials in each case are given with respect to the standard hydrogen electrode.
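As a rough illustration of these numbers, the reversible cell voltage and the thermodynamic efficiency limit can be computed from the Gibbs free energy and enthalpy of the water-forming reaction. This is a minimal sketch using standard-state textbook values, not data for any particular cell discussed in this article:

```python
# Sketch: reversible cell voltage and thermodynamic efficiency limit of the
# overall PEMFC reaction H2 + 1/2 O2 -> H2O(l) at 25 degC.
# Standard-state values only; illustrative, not specific to any real cell.

F = 96485.0      # Faraday constant, C per mol of electrons
n = 2            # electrons transferred per H2 molecule
dG = -237.1e3    # Gibbs free energy of formation of liquid water, J/mol
dH = -285.8e3    # enthalpy of formation of liquid water (HHV), J/mol

E_rev = -dG / (n * F)   # reversible (ideal open-circuit) cell voltage
eta_max = dG / dH       # maximum thermodynamic efficiency

print(f"Reversible cell voltage: {E_rev:.2f} V")          # ~1.23 V
print(f"Thermodynamic efficiency limit: {eta_max:.0%}")    # ~83%
```

The roughly 83% thermodynamic ceiling is why the practical 50–60% efficiencies quoted later in the article already represent a substantial fraction of what is physically achievable.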
Polymer electrolyte membrane
To function, the membrane must conduct hydrogen ions (protons) but not electrons, as this would in effect "short circuit" the fuel cell. The membrane must also not allow either gas to pass to the other side of the cell, a problem known as gas crossover. Finally, the membrane must be resistant to the reducing environment at the anode as well as the harsh oxidative environment at the cathode.
Splitting the hydrogen molecule is relatively easy using a platinum catalyst. Splitting the oxygen molecule, however, is more difficult, and this causes significant electric losses. No more suitable catalyst material for this process has been discovered, and platinum remains the best option.
The PEMFC is a prime candidate for vehicle and other mobile applications of all sizes down to mobile phones, because of its compactness.
Fuel cells based on PEMs still face several issues:
1. Water management
Water management is crucial to performance: if water evaporates too slowly, it floods the membrane, and water accumulating inside the flow-field plate impedes the flow of oxygen into the fuel cell; if water evaporates too quickly, the membrane dries and its resistance increases. Both cases damage stability and power output. Water management is a very difficult subject in PEM systems, primarily because water in the membrane is attracted toward the cathode of the cell through polarization.
A wide variety of solutions for managing the water exist including integration of an electroosmotic pump.
Another innovative method to resolve the water recirculation problem is the 3D fine mesh flow field design used in the 2014 Toyota Mirai. Conventional FC stack designs recirculate water from the air outlet to the air inlet through a humidifier, using straight channels and porous metal flow fields. The flow field is a structure made up of ribs and channels. However, the rib partially covers the gas diffusion layer (GDL), and the resulting gas-transport distance is longer than the inter-channel distance. Furthermore, the contact pressure between the GDL and the rib compresses the GDL, making its thickness non-uniform across the rib and channel. The large width and non-uniform thickness of the rib increase the potential for water vapor to accumulate, compromising oxygen transport. As a result, oxygen is impeded from diffusing into the catalyst layer, leading to non-uniform power generation in the FC.
This new design enabled the first FC stack to function without a humidifying system, overcoming water recirculation issues while achieving high power output stability. The 3D micro-lattice allows more pathways for gas flow; it therefore promotes airflow toward the membrane electrode and gas diffusion layer assembly (MEGA) and promotes O2 diffusion to the catalyst layer. Unlike conventional flow fields, the 3D micro-lattices act as baffles and induce frequent micro-scale interfacial flux between the GDL and the flow fields. Due to this repeating micro-scale convective flow, oxygen transport to the catalyst layer (CL) and liquid water removal from the GDL are significantly enhanced. The generated water is quickly drawn out through the flow field, preventing accumulation within the pores. As a result, the power generation from this flow field is uniform across the cross-section and self-humidification is enabled.
2. Vulnerability of the Catalyst
The platinum catalyst on the membrane is easily poisoned by carbon monoxide, which is often present in product gases formed by methane reforming (no more than one part per million is usually acceptable). This generally necessitates the use of the water gas shift reaction to eliminate CO from product gases and form more hydrogen. Additionally, the membrane is sensitive to the presence of metal ions, which may impair proton conduction mechanisms and can be introduced by corrosion of metallic bipolar plates, metallic components in the fuel cell system, or contaminants in the fuel/oxidant.
PEM systems that use reformed methanol have been proposed, as in the DaimlerChrysler Necar 5. Reforming methanol, i.e. reacting it to obtain hydrogen, is however a very complicated process that also requires purification to remove the carbon monoxide the reaction produces. A platinum-ruthenium catalyst is necessary, as some carbon monoxide will unavoidably reach the membrane; the level should not exceed 10 parts per million. Furthermore, the start-up time of such a reformer reactor is about half an hour. Alternatively, methanol and some other biofuels can be fed to a PEM fuel cell directly without being reformed, making a direct methanol fuel cell (DMFC). These devices operate with limited success.
3. Limitation of Operating Temperature
The most commonly used membrane is Nafion by Chemours, which relies on liquid water humidification of the membrane to transport protons. This implies that it is not feasible to use temperatures above 80 to 90 °C, since the membrane would dry. Other, more recent membrane types, based on polybenzimidazole (PBI) or phosphoric acid, can reach up to 220 °C without using any water management (see also High Temperature Proton Exchange Membrane fuel cell, HT-PEMFC): higher temperatures allow for better efficiencies, power densities, ease of cooling (because of larger allowable temperature differences), reduced sensitivity to carbon monoxide poisoning and better controllability (because of the absence of water management issues in the membrane); however, these recent types are not as common. PBI can be doped with phosphoric or sulfuric acid, and the conductivity scales with the amount of doping and with temperature. At high temperatures, it is difficult to keep Nafion hydrated, but this acid-doped material does not use water as a medium for proton conduction. It also exhibits better mechanical properties (higher strength) than Nafion and is cheaper. However, acid leaching is a considerable issue, and processing (mixing with catalyst to form an ink) has proved tricky. Aromatic polymers, such as PEEK, are far cheaper than Teflon (PTFE, the backbone of Nafion), and their polar character leads to hydration that is less temperature dependent than Nafion's. However, PEEK is far less ionically conductive than Nafion and is thus a less favorable electrolyte choice. Recently, protic ionic liquids and protic organic ionic plastic crystals have been shown to be promising alternative electrolyte materials for high-temperature (100–200 °C) PEMFCs.
An electrode typically consists of carbon support, Pt particles, Nafion ionomer, and/or Teflon binder. The carbon support functions as an electrical conductor; the Pt particles are reaction sites; the ionomer provides paths for proton conduction; and the Teflon binder increases the hydrophobicity of the electrode to minimize potential flooding. In order to enable the electrochemical reactions at the electrodes, protons, electrons, and the reactant gases (hydrogen or oxygen) must gain access to the surface of the catalyst in the electrodes, while the product water, which can be in either liquid or gaseous phase, or both phases, must be able to permeate from the catalyst to the gas outlet. These properties are typically realized by porous composites of polymer electrolyte binder (ionomer) and catalyst nanoparticles supported on carbon particles. Typically platinum is used as the catalyst for the electrochemical reactions at the anode and cathode, while nanoparticles realize high surface-to-weight ratios (as further described below), reducing the amount of the costly platinum. The polymer electrolyte binder provides the ionic conductivity, while the carbon support of the catalyst improves the electric conductivity and enables low platinum metal loading. The electric conductivity in the composite electrodes is typically more than 40 times higher than the proton conductivity.
Gas diffusion layer
The GDL electrically connects the catalyst and current collector. It must be porous, electrically conductive, and thin. The reactants must be able to reach the catalyst, but conductivity and porosity can act as opposing forces. Optimally, the GDL should be composed of about one third Nafion or 15% PTFE. The carbon particles used in the GDL can be larger than those employed in the catalyst because surface area is not the most important variable in this layer. The GDL should be around 15–35 µm thick to balance needed porosity with mechanical strength. Often, an intermediate porous layer is added between the GDL and catalyst layer to ease the transition between the large pores in the GDL and the small porosity in the catalyst layer. Since a primary function of the GDL is to help remove water, a product, flooding can occur when water effectively blocks the GDL. This limits the reactants' ability to access the catalyst and significantly decreases performance. Teflon can be coated onto the GDL to limit the possibility of flooding. Several microscopic variables of the GDL, such as porosity, tortuosity, and permeability, are analyzed because they influence the behavior of the fuel cell.
The practical efficiency of a PEMFC is in the range of 50–60%. The main factors that create losses are listed below (an illustrative polarization-curve sketch follows the list):
- Activation losses
- Ohmic losses
- Mass transport losses
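These three loss terms are commonly summarized by an empirical polarization curve, in which the cell voltage falls from its open-circuit value as current density rises. The following sketch uses a generic textbook-style model; the open-circuit voltage, Tafel-like slope, resistance, and limiting current below are made-up illustrative parameters, not values for any cell discussed in this article:

```python
import numpy as np

# Generic polarization-curve sketch: cell voltage vs. current density,
# combining activation, ohmic, and mass-transport losses.
# All parameter values are illustrative assumptions, not measured data.

E_oc  = 1.0     # open-circuit voltage, V (below the ~1.23 V ideal)
A     = 0.06    # slope of the activation (Tafel-like) term, V
i0    = 1e-4    # exchange current density, A/cm^2
R_ohm = 0.15    # area-specific resistance, ohm*cm^2
i_lim = 1.4     # limiting current density, A/cm^2
B     = 0.05    # mass-transport coefficient, V

def cell_voltage(i):
    """Cell voltage at current density i (A/cm^2); valid for i0 < i < i_lim."""
    activation = A * np.log(i / i0)            # dominates at low current
    ohmic = i * R_ohm                          # linear resistive drop
    mass_transport = -B * np.log(1.0 - i / i_lim)  # blows up near i_lim
    return E_oc - activation - ohmic - mass_transport

for ii in np.linspace(0.01, 1.3, 6):
    print(f"i = {ii:5.2f} A/cm^2  ->  V = {cell_voltage(ii):5.2f} V")
```

The steep drop at low current comes from the activation term, the linear mid-range slope from ohmic resistance, and the collapse near the limiting current from mass transport, matching the three bullet points above.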
Metal-organic frameworks (MOFs) are a relatively new class of porous, highly crystalline materials that consist of metal nodes connected by organic linkers. Due to the simplicity of manipulating or substituting the metal centers and ligands, there are a virtually limitless number of possible combinations, which is attractive from a design standpoint. MOFs exhibit many unique properties due to their tunable pore sizes, thermal stability, high volume capacities, large surface areas, and desirable electrochemical characteristics. Among their many diverse uses, MOFs are promising candidates for clean energy applications such as hydrogen storage, gas separations, supercapacitors, Li-ion batteries, solar cells, and fuel cells. Within the field of fuel cell research, MOFs are being studied as potential electrolyte materials and electrode catalysts that could someday replace traditional polymer membranes and Pt catalysts, respectively.
As electrolyte materials, the inclusion of MOFs seems at first counter-intuitive. Fuel cell membranes generally have low porosity to prevent fuel crossover and loss of voltage between the anode and cathode. Additionally, membranes tend to have low crystallinity because the transport of ions is more favorable in disordered materials. On the other hand, pores can be filled with additional ion carriers that ultimately enhance the ionic conductivity of the system and high crystallinity makes the design process less complex.
The general requirements of a good electrolyte for PEMFCs are: high proton conductivity (>10⁻² S/cm for practical applications) to enable proton transport between electrodes, good chemical and thermal stability under fuel cell operating conditions (environmental humidity, variable temperatures, resistance to poisonous species, etc.), low cost, ability to be processed into thin films, and overall compatibility with other cell components. While polymeric materials are currently the preferred choice of proton-conducting membrane, they require humidification for adequate performance and can sometimes physically degrade due to hydration effects, thereby causing losses of efficiency. As mentioned, Nafion is also limited by a dehydration temperature of < 100 °C, which can lead to slower reaction kinetics, poor cost efficiency, and CO poisoning of Pt electrode catalysts. Conversely, MOFs have shown encouraging proton conductivities in both low and high temperature regimes as well as over a wide range of humidity conditions. Below 100 °C and under hydration, the presence of hydrogen bonding and solvent water molecules aids in proton transport, whereas anhydrous conditions are suitable for temperatures above 100 °C. MOFs also have the distinct advantage of exhibiting proton conductivity by the framework itself in addition to the inclusion of charge carriers (i.e., water, acids, etc.) into their pores.
A low temperature example is work by Kitagawa et al., who used a two-dimensional oxalate-bridged anionic layer framework as the host and introduced ammonium cations and adipic acid molecules into the pores to increase proton concentration. The result was one of the first instances of a MOF showing "superprotonic" conductivity (8 × 10⁻³ S/cm) at 25 °C and 98% relative humidity (RH). They later found that increasing the hydrophilic nature of the cations introduced into the pores could enhance proton conductivity even more. In this low temperature regime, which depends on the degree of hydration, it has also been shown that proton conductivity is heavily dependent on humidity levels.
A high temperature anhydrous example is PCMOF2, which consists of sodium ions coordinated to a trisulfonated benzene derivative. To improve performance and allow for higher operating temperatures, water can be replaced as the proton carrier by less volatile imidazole or triazole molecules within the pores. The maximum temperature achieved was 150 °C with an optimum conductivity of 5 × 10⁻⁴ S/cm, which is lower than other current electrolyte membranes. However, this model holds promise for its temperature regime, anhydrous conditions, and ability to control the quantity of guest molecules within the pores, all of which allowed for the tunability of proton conductivity. Additionally, the triazole-loaded PCMOF2 was incorporated into a H2/air membrane-electrode assembly and achieved an open circuit voltage of 1.18 V at 100 °C that was stable for 72 hours and managed to remain gas tight throughout testing. This was the first instance that proved MOFs could actually be implemented into functioning fuel cells, and the moderate potential difference showed that fuel crossover due to porosity was not an issue.
To date, the highest proton conductivity achieved for a MOF electrolyte is 4.2 × 10⁻² S/cm at 25 °C under humid conditions (98% RH), which is competitive with Nafion. Some recent experiments have even successfully produced thin-film MOF membranes instead of the traditional bulk samples or single crystals, which is crucial for their industrial applicability. Once MOFs are able to consistently achieve sufficient conductivity levels, mechanical strength, water stability, and simple processing, they have the potential to play an important role in PEMFCs in the near future.
MOFs have also been targeted as potential replacements of platinum group metal (PGM) materials for electrode catalysts, although this research is still in the early stages of development. In PEMFCs, the oxygen reduction reaction (ORR) at the Pt cathode is significantly slower than the fuel oxidation reaction at the anode, and thus non-PGM and metal-free catalysts are being investigated as alternatives. The high volumetric density, large pore surface areas, and openness of metal-ion sites in MOFs make them ideal candidates for catalyst precursors. Despite promising catalytic abilities, the durability of these proposed MOF-based catalysts is currently less than desirable and the ORR mechanism in this context is still not completely understood.
Much of the current research on catalysts for PEM fuel cells can be classified as having one of the following main objectives:
- to obtain higher catalytic activity than the standard carbon-supported platinum particle catalysts used in current PEM fuel cells
- to reduce the poisoning of PEM fuel cell catalysts by impurity gases
- to reduce the cost of the fuel cell due to use of platinum-based catalysts
- to enhance the ORR activity of platinum group metal-free electrocatalysts
Examples of these approaches are given in the following sections.
Increasing catalytic activity
As mentioned above, platinum is by far the most effective element used for PEM fuel cell catalysts, and nearly all current PEM fuel cells use platinum particles on porous carbon supports to catalyze both hydrogen oxidation and oxygen reduction. However, due to their high cost, current Pt/C catalysts are not feasible for commercialization. The U.S. Department of Energy estimates that platinum-based catalysts will need to use roughly four times less platinum than is used in current PEM fuel cell designs in order to represent a realistic alternative to internal combustion engines. Consequently, one main goal of catalyst design for PEM fuel cells is to increase the catalytic activity of platinum by a factor of four so that only one-fourth as much of the precious metal is necessary to achieve similar performance.
One method of increasing the performance of platinum catalysts is to optimize the size and shape of the platinum particles. Decreasing the particles' size alone increases the total surface area of catalyst available to participate in reactions per volume of platinum used, but recent studies have demonstrated additional ways to make further improvements to catalytic performance. For example, one study reports that high-index facets of platinum nanoparticles (that is, Miller indices with large integers, such as Pt (730)) provide a greater density of reactive sites for oxygen reduction than typical platinum nanoparticles.
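The surface-area argument can be made concrete with simple geometry: for an ideal sphere, surface area per unit mass scales as 3/(ρr), so halving the particle radius doubles the platinum surface exposed per gram. The sketch below is purely illustrative (ideal spheres at bulk platinum density); it is not a model of the faceted nanoparticles in the cited study:

```python
# Illustrative only: specific surface area of ideal spherical Pt particles.
# For a sphere of radius r and density rho, surface area per mass = 3 / (rho * r).

rho_pt = 21.45e3   # density of bulk platinum, kg/m^3

def specific_surface_area(radius_nm):
    """Surface area per gram (m^2/g) of an ideal Pt sphere of the given radius."""
    r = radius_nm * 1e-9              # nm -> m
    return 3.0 / (rho_pt * r) / 1e3   # m^2/kg -> m^2/g

for r in (1, 2, 5, 10):
    print(f"r = {r:2d} nm  ->  {specific_surface_area(r):6.1f} m^2/g")
```

The steep gain at small radii is why catalyst layers use nanometre-scale Pt particles dispersed on carbon rather than larger grains.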
Since the most common and effective catalyst, platinum, is extremely expensive, alternative processing is necessary to maximize surface area and minimize loading. Deposition of nanosized Pt particles onto carbon powder (Pt/C) provides a large Pt surface area while the carbon allows for electrical connection between the catalyst and the rest of the cell. Platinum is so effective because it has high activity and bonds to the hydrogen just strongly enough to facilitate electron transfer but not so strongly as to inhibit the hydrogen from continuing to move around the cell. However, platinum is less active in the cathode oxygen reduction reaction. This necessitates the use of more platinum, increasing the cell's expense and thus reducing its feasibility. Many potential catalyst choices are ruled out because of the extreme acidity of the cell.
The most effective ways of achieving the nanoscale Pt on carbon powder, which is currently the best option, are vacuum deposition, sputtering, and electrodeposition. The platinum particles are deposited onto carbon paper that is permeated with PTFE. However, there is an optimal thinness to this catalyst layer, which sets a lower bound on cost. Below 4 nm, Pt will form islands on the paper, limiting its activity. Above this thickness, the Pt will coat the carbon and be an effective catalyst. To further complicate things, Nafion cannot be infiltrated beyond 10 µm, so using more Pt than this is an unnecessary expense. Thus the amount and shape of the catalyst are limited by the constraints of the other materials.
A second method of increasing the catalytic activity of platinum is to alloy it with other metals. For example, it was recently shown that the Pt3Ni(111) surface has a higher oxygen reduction activity than pure Pt(111) by a factor of ten. The authors attribute this dramatic performance increase to modifications to the electronic structure of the surface, reducing its tendency to bond to oxygen-containing ionic species present in PEM fuel cells and hence increasing the number of available sites for oxygen adsorption and reduction.
Further efficiencies can be realized by using an ultrasonic nozzle to apply the platinum catalyst to the electrolyte layer or to carbon paper under atmospheric conditions, resulting in a high-efficiency spray. Studies have shown that because of the uniform droplet size produced by this type of spray, the high transfer efficiency of the technology, the non-clogging nature of the nozzle, and the fact that the ultrasonic energy de-agglomerates the suspension just before atomization, fuel cell MEAs manufactured this way have greater homogeneity in the final MEA, and the gas flow through the cell is more uniform, maximizing the efficiency of the platinum in the MEA. Recent studies using inkjet printing to deposit the catalyst over the membrane have also shown high catalyst utilization due to the reduced thickness of the deposited catalyst layers.
Very recently, a new class of ORR electrocatalysts has been introduced in the case of Pt-M (M = Fe and Co) systems with an ordered intermetallic core encapsulated within a Pt-rich shell. These intermetallic core-shell (IMCS) nanocatalysts were found to exhibit enhanced activity and, most importantly, extended durability compared to many previous designs. While the observed enhancement in activity is ascribed to a strained lattice, the authors report that their findings on the degradation kinetics establish that the extended catalytic durability is attributable to a sustained atomic order.
The other popular approach to improving catalyst performance is to reduce its sensitivity to impurities in the fuel source, especially carbon monoxide (CO). Pure hydrogen gas is becoming economical to mass-produce by electrolysis; at the moment, however, hydrogen gas is mostly produced by steam reforming light hydrocarbons, a process which yields a mixture of gases that also contains CO (1–3%), CO2 (19–25%), and N2 (25%). Even tens of parts per million of CO can poison a pure platinum catalyst, so increasing platinum's resistance to CO is an active area of research.
For example, one study reported that cube-shaped platinum nanoparticles with (100) facets displayed a fourfold increase in oxygen reduction activity compared to randomly faceted platinum nanoparticles of similar size. The authors concluded that the (111) facets of the randomly shaped nanoparticles bonded more strongly to sulfate ions than the (100) facets, reducing the number of catalytic sites open to oxygen molecules. The nanocubes they synthesized, in contrast, had almost exclusively (100) facets, which are known to interact with sulfate more weakly. As a result, a greater fraction of the surface area of those particles was available for the reduction of oxygen, boosting the catalyst's oxygen reduction activity.
In addition, researchers have been investigating ways of reducing the CO content of hydrogen fuel before it enters a fuel cell as a possible way to avoid poisoning the catalysts. One recent study revealed that ruthenium-platinum core–shell nanoparticles are particularly effective at oxidizing CO to form CO2, a much less harmful fuel contaminant. The mechanism that produces this effect is conceptually similar to that described for Pt3Ni above: the ruthenium core of the particle alters the electronic structure of the platinum surface, rendering it better able to catalyze the oxidation of CO.
The challenge for the viability of PEM fuel cells today remains their cost and stability. The high cost can in large part be attributed to the use of the precious metal platinum in the catalyst layer of PEM cells. The electrocatalyst currently accounts for nearly half of the fuel cell stack cost. Although the Pt loading of PEM fuel cells has been reduced by two orders of magnitude over the past decade, further reduction is necessary to make the technology economically viable for commercialization. Whereas some research efforts aim to address this issue by improving the electrocatalytic activity of Pt-based catalysts, an alternative is to eliminate the use of Pt altogether by developing a non-platinum-group-metal (non-PGM) cathode catalyst whose performance rivals that of Pt-based technologies. The U.S. Department of Energy has been setting milestones for the development of fuel cells, targeting a durability of 5000 hours and a non-PGM catalyst ORR volumetric activity of 300 A cm⁻³.
Promising alternatives to Pt-based catalysts are metal/nitrogen/carbon catalysts (M/N/C catalysts). To achieve high power density, or output of power over surface area of the cell, a volumetric activity of at least 1/10 that of Pt-based catalysts must be met, along with good mass transport properties. While M/N/C catalysts still demonstrate poorer volumetric activities than Pt-based catalysts, the reduced cost of such catalysts allows greater loading to compensate. However, increasing the loading of M/N/C catalysts also renders the catalytic layer thicker, impairing its mass transport properties. In other words, H2, O2, protons, and electrons have greater difficulty migrating through the catalytic layer, decreasing the voltage output of the cell. While high microporosity of the M/N/C catalytic network results in high volumetric activity, improved mass transport properties are instead associated with macroporosity of the network. These M/N/C materials are synthesized using high-temperature pyrolysis and other high-temperature treatments of precursors containing the metal, nitrogen, and carbon.
Recently, researchers have developed a Fe/N/C catalyst derived from iron (II) acetate (FeAc), phenanthroline (Phen), and a metal-organic-framework (MOF) host. The MOF is a Zn(II) zeolitic imidazolate framework (ZIF) called ZIF-8, which demonstrates a high microporous surface area and high nitrogen content conducive to ORR activity. The power density of the FeAc/Phen/ZIF-8 catalyst was found to be 0.75 W cm⁻² at 0.6 V. This value is a significant improvement over the maximal 0.37 W cm⁻² power density of previous M/N/C catalysts and is much closer to matching the typical value of 1.0–1.2 W cm⁻² for Pt-based catalysts with a Pt loading of 0.3 mg cm⁻². The catalyst also demonstrated a volumetric activity of 230 A cm⁻³, the highest value for non-PGM catalysts to date, approaching the U.S. Department of Energy milestone.
While the power density achieved by the novel FeAc/Phen/ZIF-8-catalyst is promising, its durability remains inadequate for commercial application. It is reported that the best durability exhibited by this catalyst still had a 15% drop in current density over 100 hours in H2/air. Hence while the Fe-based non-PGM catalysts rival Pt-based catalysts in their electrocatalytic activity, there is still much work to be done in understanding their degradation mechanisms and improving their durability.
The major application of PEM fuel cells focuses on transportation, primarily because of their potential impact on the environment, e.g. the control of greenhouse gas (GHG) emissions. Other applications include distributed/stationary and portable power generation. Most major motor companies work solely on PEM fuel cells due to their high power density and excellent dynamic characteristics as compared with other types of fuel cells. Due to their light weight, PEMFCs are best suited for transportation applications. PEMFCs for buses, which use compressed hydrogen for fuel, can operate at up to 40% efficiency. Generally PEMFCs are implemented on buses rather than smaller cars because of the available volume to house the system and store the fuel. Technical issues for transportation involve incorporation of PEMs into current vehicle technology and updating energy systems. Full fuel cell vehicles are not advantageous if hydrogen is sourced from fossil fuels; however, they become beneficial when implemented as hybrids. There is potential for PEMFCs to be used for stationary power generation, where they provide 5 kW at 30% efficiency; however, they run into competition with other types of fuel cells, mainly SOFCs and MCFCs. Whereas PEMFCs generally require high-purity hydrogen for operation, other fuel cell types can run on methane and are thus more flexible systems. Therefore, PEMFCs are best for small-scale systems until economically scalable pure hydrogen is available. Furthermore, PEMFCs have the possibility of replacing batteries for portable electronics, though integration of the hydrogen supply is a technical challenge, particularly without a convenient location to store it within the device.
Before the invention of PEM fuel cells, existing fuel cell types such as solid-oxide fuel cells were only applied in extreme conditions. Such fuel cells also required very expensive materials and could only be used for stationary applications due to their size. These issues were addressed by the PEM fuel cell. The PEM fuel cell was invented in the early 1960s by Willard Thomas Grubb and Leonard Niedrach of General Electric. Initially, sulfonated polystyrene membranes were used for electrolytes, but they were replaced in 1966 by Nafion ionomer, which proved to be superior in performance and durability to sulfonated polystyrene.
Parallel with Pratt and Whitney Aircraft, General Electric developed the first proton exchange membrane fuel cells (PEMFCs) for the Gemini space missions in the early 1960s. The first mission to use PEMFCs was Gemini V. However, the Apollo space missions and subsequent Apollo-Soyuz, Skylab and Space Shuttle missions used fuel cells based on Bacon's design, developed by Pratt and Whitney Aircraft.
Extremely expensive materials were used and the fuel cells required very pure hydrogen and oxygen. Early fuel cells tended to require inconveniently high operating temperatures that were a problem in many applications. However, fuel cells were seen to be desirable due to the large amounts of fuel available (hydrogen and oxygen).
Despite their success in space programs, fuel cell systems were limited to space missions and other special applications, where high cost could be tolerated. It was not until the late 1980s and early 1990s that fuel cells became a real option for a wider application base. Several pivotal innovations, such as low platinum catalyst loading and thin-film electrodes, drove the cost of fuel cells down, making development of PEMFC systems more realistic. However, there is significant debate as to whether hydrogen fuel cells will be a realistic technology for use in automobiles or other vehicles. (See hydrogen economy.) A large part of PEMFC production is for the Toyota Mirai. The US Department of Energy estimated a 2016 price of $53/kW if 500,000 units per year were made.
- Dynamic hydrogen electrode
- Gas diffusion electrode
- Glossary of fuel cell terms
- High Temperature Proton Exchange Membrane fuel cell
- Hydrogen sulfide sensor
- Power-to-weight ratio
- Reversible hydrogen electrode
- Timeline of hydrogen technologies
- Loyselle, Patricia; Prokopius, Kevin (August 2011). "Teledyne Energy Systems, Inc., Proton Exchange Member (PEM) Fuel Cell Engineering Model Powerplant. Test Report: Initial Benchmark Tests in the Original Orientation". NASA. Glenn Research Center. hdl:2060/20110014968.
- Millington, Ben; Du, Shangfeng; Pollet, Bruno G. (2011). "The Effect of Materials on Proton Exchange Membrane Fuel Cell Electrode Performance". Journal of Power Sources. 196 (21): 9013–017. Bibcode:2011JPS...196.9013M. doi:10.1016/j.jpowsour.2010.12.043.
- Bratsch, Stephen G. (1989). "Standard Electrode Potentials and Temperature Coefficients in Water at 298.15 K". J. Phys. Chem. Ref. Data. 18 (1): 1–21. Bibcode:1989JPCRD..18....1B. doi:10.1063/1.555839. S2CID 97185915.
- Yin, Xi; Lin, Ling; Chung, Hoon T; Komini Babu, Siddharth; Martinez, Ulises; Purdy, Geraldine M; Zelenay, Piotr (4 August 2017). "Effects of MEA Fabrication and Ionomer Composition on Fuel Cell Performance of PGM-Free ORR Catalyst". ECS Transactions. 77 (11): 1273–1281. Bibcode:2017ECSTr..77k1273Y. doi:10.1149/07711.1273ecst. OSTI 1463547.
- Schalenbach, Maximilian; Hoefner, Tobias; Paciok, Paul; Carmo, Marcelo; Lueke, Wiebke; Stolten, Detlef (2015-10-28). "Gas Permeation through Nafion. Part 1: Measurements". The Journal of Physical Chemistry C. 119 (45): 25145–25155. doi:10.1021/acs.jpcc.5b04155.
- Schalenbach, Maximilian; Hoeh, Michael A.; Gostick, Jeff T.; Lueke, Wiebke; Stolten, Detlef (2015-10-14). "Gas Permeation through Nafion. Part 2: Resistor Network Model". The Journal of Physical Chemistry C. 119 (45): 25156–25169. doi:10.1021/acs.jpcc.5b04157.
- "Wang, Y., & Chen, K. S. (2013). PEM fuel cells: thermal and water management fundamentals. Momentum Press". Cite journal requires
- Coletta, Vitor C., et al. "Cu-Modified SrTiO3 Perovskites Toward Enhanced Water–Gas Shift Catalysis: A Combined Experimental and Computational Study." ACS Applied Energy Materials (2021), 4, 1, 452–461
- Lee, J. S.; et al. (2006). "Polymer electrolyte membranes for fuel cells" (PDF). Journal of Industrial and Engineering Chemistry. 12: 175–183. doi:10.1021/ie050498j.
- Wainright, J. S. (1995). "Acid-Doped Polybenzimidazoles: A New Polymer Electrolyte". Journal of the Electrochemical Society. 142 (7): L121. Bibcode:1995JElS..142L.121W. doi:10.1149/1.2044337.
- O'Hayre, Ryan P. (2006). Fuel Cell Fundamentals. Hoboken, NJ: John Wiley & Sons.
- Jiangshui Luo; Jin Hu; Wolfgang Saak; Rüdiger Beckhaus; Gunther Wittstock; Ivo F. J. Vankelecom; Carsten Agert; Olaf Conrad (2011). "Protic ionic liquid and ionic melts prepared from methanesulfonic acid and 1H-1,2,4-triazole as high temperature PEMFC electrolytes". Journal of Materials Chemistry. 21 (28): 10426–10436. doi:10.1039/C0JM04306K.
- Jiangshui Luo; Annemette H. Jensen; Neil R. Brooks; Jeroen Sniekers; Martin Knipper; David Aili; Qingfeng Li; Bram Vanroy; Michael Wübbenhorst; Feng Yan; Luc Van Meervelt; Zhigang Shao; Jianhua Fang; Zheng-Hong Luo; Dirk E. De Vos; Koen Binnemans; Jan Fransaer (2015). "1,2,4-Triazolium perfluorobutanesulfonate as an archetypal pure protic organic ionic plastic crystal electrolyte for all-solid-state fuel cells". Energy & Environmental Science. 8 (4): 1276–1291. doi:10.1039/C4EE02280G. S2CID 84176511.
- Jiangshui Luo; Olaf Conrad & Ivo F. J. Vankelecom (2013). "Imidazolium methanesulfonate as a high temperature proton conductor". Journal of Materials Chemistry A. 1 (6): 2238–2247. doi:10.1039/C2TA00713D.
- Litster, S.; McLean, G. (2004-05-03). "PEM fuel cell electrodes". Journal of Power Sources. 130 (1–2): 61–76. Bibcode:2004JPS...130...61L. doi:10.1016/j.jpowsour.2003.12.055.
- Gasteiger, H. A.; Panels, J. E.; Yan, S. G. (2004-03-10). "Dependence of PEM fuel cell performance on catalyst loading". Journal of Power Sources. Eighth Ulmer Electrochemische Tage. 127 (1–2): 162–171. Bibcode:2004JPS...127..162G. doi:10.1016/j.jpowsour.2003.09.013.
- Schalenbach, Maximilian; Zillgitt, Marcel; Maier, Wiebke; Stolten, Detlef (2015-07-29). "Parasitic Currents Caused by Different Ionic and Electronic Conductivities in Fuel Cell Anodes". ACS Applied Materials & Interfaces. 7 (29): 15746–15751. doi:10.1021/acsami.5b02182. ISSN 1944-8244. PMID 26154401.
- Litster, S.; Mclean, G. (2004). "PEM Fuel Cell Electrodes". Journal of Power Sources. 130 (1–2): 61–76. Bibcode:2004JPS...130...61L. doi:10.1016/j.jpowsour.2003.12.055.
- Espinoza, Mayken (2015). "Compress effects on porosity, gas-phase tortuosity, and gas permeability in a simulated PEM gas diffusion layer". International Journal of Energy Research. 39 (11): 1528–1536. doi:10.1002/er.3348.
- Ramaswamy, Padmini; Wong, Norman E.; Shimizu, George K. H. (2014). "MOFs as proton conductors – challenges and opportunities". Chem. Soc. Rev. 43 (16): 5913–5932. doi:10.1039/c4cs00093e. PMID 24733639.
- Li, Shun-Li; Xu, Qiang (2013). "Metal–organic frameworks as platforms for clean energy". Energy & Environmental Science. 6 (6): 1656. doi:10.1039/c3ee40507a.
- Kitagawa, Hiroshi (2009). "Metal–organic frameworks: Transported into fuel cells". Nature Chemistry. 1 (9): 689–690. Bibcode:2009NatCh...1..689K. doi:10.1038/nchem.454. PMID 21124353.
- Lux, Lacey; Williams, Kia; Ma, Shengqian (2015). "Heat-treatment of metal–organic frameworks for green energy applications". CrystEngComm. 17 (1): 10–22. doi:10.1039/c4ce01499e.
- "Department of Energy Announces $39 million for Innovative Hydrogen and Fuel Cell Technologies Research and Development". Archived from the original on 2018-06-15.
- Hydrogen, Fuel Cells & Infrastructure Technologies Program Multi-Year Research, Development and Demonstration Plan Archived 2015-09-24 at the Wayback Machine, U.S. Department of Energy, October 2007.
- N. Tian; Z.-Y. Zhou; S.-G. Sun; Y. Ding; Z. L. Wang (2007). "Synthesis of tetrahexahedral platinum nanocrystals with high-index facets and high electro-oxidation activity". Science. 316 (5825): 732–735. Bibcode:2007Sci...316..732T. doi:10.1126/science.1140484. PMID 17478717. S2CID 939992.
- Stamenkovic, V. R.; Fowler, B.; Mun, B. S.; Wang, G.; Ross, P. N.; Lucas, C. A.; Marković, N. M. (2007). "Improved Oxygen Reduction Activity on Pt3Ni(111) via Increased Surface Site Availability". Science. 315 (5811): 493–497. Bibcode:2007Sci...315..493S. doi:10.1126/science.1135941. PMID 17218494. S2CID 39722200.
- Koraishy, Babar (2009). "Manufacturing of membrane electrode assemblies for fuel cells" (PDF). Singapore University of Technology and Design. p. 9.
- Engle, Robb (2011-08-08). Maximizing the Use of Platinum Catalyst by Ultrasonic Spray Application (PDF). Proceedings of Asme 2011 5Th International Conference on Energy Sustainability & 9Th Fuel Cell Science, Engineering and Technology Conference. ESFUELCELL2011-54369. pp. 637–644. doi:10.1115/FuelCell2011-54369. ISBN 978-0-7918-5469-3.
- Shukla, S (2015). "Analysis of Low Platinum Loading Thin Polymer Electrolyte Fuel Cell Electrodes Prepared by Inkjet Printing". Electrochimica Acta. 156: 289–300. doi:10.1016/j.electacta.2015.01.028.
- Shukla, S (2016). "Analysis of Inkjet Printed PEFC Electrodes with Varying Platinum Loading". Journal of the Electrochemical Society. 163 (7): F677–F687. doi:10.1149/2.1111607jes.
- Sagar Prabhudev; Matthieu Bugnet; Christina Bock; Gianluigi Botton (2013). "Strained Lattice with Persistent Atomic Order in Pt3Fe2 Intermetallic Core–Shell Nanocatalysts". ACS Nano. 7 (7): 6103–6110. doi:10.1021/nn4019009. PMID 23773037.
- Minna Cao, Dongshuang Wu & Rong Cao (2014). "Recent Advances in the Stabilization of Platinum Electrocatalysts for Fuel-Cell Reactions". ChemCatChem. 6 (1): 26–45. doi:10.1002/cctc.201300647. S2CID 97620646.
- G. Hoogers (2003). Fuel Cell Technology Handbook. Boca Raton, FL: CRC Press. pp. 6–3. ISBN 978-0-8493-0877-2.
- Wang, C.; Daimon, H.; Onodera, T.; Koda, T.; Sun, S. (2008). "A General Approach to the Size- and Shape-Controlled Synthesis of Platinum Nanoparticles and Their Catalytic Reduction of Oxygen". Angewandte Chemie International Edition. 47 (19): 3588–3591. doi:10.1002/anie.200800073. PMID 18399516.
- Alayoglu, S.; Nilekar, A. U.; Mavrikakis, M.; Eichhorn, B. (2008). "Ru–Pt core–shell nanoparticles for preferential oxidation of carbon monoxide in hydrogen". Nature Materials. 7 (4): 333–338. Bibcode:2008NatMa...7..333A. doi:10.1038/nmat2156. PMID 18345004.
- Proietti, E.; Jaouen, F.; Lefevre, M.; Larouche, N.; Tian, J.; Herranz, J.; Dodelet, J.-P. (2011). "Iron-based cathode catalyst with enhanced power density in polymer electrolyte membrane fuel cells". Nature Communications. 2 (1).
- Litster, S.; McLean, G. (2004). "PEM fuel cell electrodes". Journal of Power Sources. 130 (1–2): 61–76. Bibcode:2004JPS...130...61L. doi:10.1016/j.jpowsour.2003.12.055.
- "Y. Wang, Daniela Fernanda Ruiz Diaz, Ken S. Chen, Zhe Wang, and Xavier Cordobes Adroher. "Materials, technological status, and fundamentals of PEM fuel cells–A review." Materials Today, 32 (2020) 178-203" (PDF). doi:10.1016/j.mattod.2019.06.005. S2CID 201288395. Cite journal requires
- Serov, A.; Artyushkova, K.; Atanassov, P. (2014). "Fe-N-C Oxygen Reduction Fuel Cell Catalyst Derived from Carbendazim: Synthesis, Structure, and Reactivity". Adv. Energy Mater. 4 (10): 1301735. doi:10.1002/aenm.201301735.
- Yin, Xi; Zelenay, Piotr (13 July 2018). "Kinetic Models for the Degradation Mechanisms of PGM-Free ORR Catalysts". ECS Transactions. 85 (13): 1239–1250. doi:10.1149/08513.1239ecst. OSTI 1471365.
- Martinez, Ulises; Babu, Siddharth Komini; Holby, Edward F.; Zelenay, Piotr (April 2018). "Durability challenges and perspective in the development of PGM-free electrocatalysts for the oxygen reduction reaction". Current Opinion in Electrochemistry. 9: 224–232. doi:10.1016/j.coelec.2018.04.010. OSTI 1459825.
- Y. Wang, Ken S. Chen, Jeffrey Mishler, Sung Chan Cho, Xavier Cordobes Adroher, A Review of Polymer Electrolyte Membrane Fuel Cells: Technology, Applications, and Needs on Fundamental Research, Applied Energy 88 (2011) 981-1007.
- Wee, Jung-Ho (2007). "Applications of Proton Exchange Membrane Fuel Cell Systems". Renewable and Sustainable Energy Reviews. 11 (8): 1720–1738.
- PEM Fuel Cells. Americanhistory.si.edu. Retrieved on 2013-04-19.
- Eberle, Ulrich; Mueller, Bernd; von Helmolt, Rittmar (2012-07-15). "Fuel cell electric vehicles and hydrogen infrastructure: status 2012". Royal Society of Chemistry. Retrieved 2013-01-08.
- Klippenstein, Matthew (24 April 2017). "Is Toyota's hydrogen fuel-cell fervor foolish, or foresighted? (with charts)". Retrieved 13 May 2017. "Toyota's 2,000 or so Mirai sales in 2016 represented more than three times the megawattage of PEMFCs produced worldwide in 2014."
While Earth is only the fifth largest planet in the solar system, it is the only world in our solar system with liquid water on the surface. Just slightly larger than nearby Venus, Earth is the biggest of the four planets closest to the Sun, all of which are made of rock and metal.
Earth is the only planet in the solar system whose English name does not come from Greek or Roman mythology. The name was taken from Old English and Germanic. It simply means "the ground." There are, of course, many names for our planet in the thousands of languages spoken by the people of the third planet from the Sun.
The name Earth is at least 1,000 years old. All of the planets, except for Earth, were named after Greek and Roman gods and goddesses. However, the name Earth is a Germanic word, which simply means “the ground.”
Earth has a very hospitable temperature and mix of chemicals that have made life abundant here. Most notably, Earth is unique in that most of our planet is covered in liquid water, since the temperature allows liquid water to exist for extended periods of time. Earth's vast oceans provided a convenient place for life to begin about 3.8 billion years ago.
With an equatorial diameter of 7926 miles (12,760 kilometers), Earth is the biggest of the terrestrial planets and the fifth largest planet in our solar system.
From an average distance of 93 million miles (150 million kilometers), Earth is exactly one astronomical unit away from the Sun, because one astronomical unit (abbreviated AU) is defined as the distance from the Sun to Earth. This unit provides an easy way to quickly compare planets' distances from the Sun.
It takes about eight minutes for light from the Sun to reach our planet.
As Earth orbits the Sun, it completes one rotation every 23.9 hours. It takes 365.25 days to complete one trip around the Sun. That extra quarter of a day presents a challenge to our calendar system, which counts one year as 365 days. To keep our yearly calendars consistent with our orbit around the Sun, every four years we add one day. That day is called a leap day, and the year it's added to is called a leap year.
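Because 365.25 days is itself only an approximation of the orbital period, the Gregorian calendar refines the simple every-fourth-year rule slightly: century years are leap years only when divisible by 400. A minimal sketch of that standard rule:

```python
def is_leap_year(year: int) -> bool:
    """Gregorian rule: every 4th year, except century years not divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print([y for y in (1900, 2000, 2023, 2024) if is_leap_year(y)])  # [2000, 2024]
```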
Earth's axis of rotation is tilted 23.4 degrees with respect to the plane of Earth's orbit around the Sun. This tilt causes our yearly cycle of seasons. During part of the year, the northern hemisphere is tilted toward the Sun, and the southern hemisphere is tilted away. With the Sun higher in the sky, solar heating is greater in the north producing summer there. Less direct solar heating produces winter in the south. Six months later, the situation is reversed. When spring and fall begin, both hemispheres receive roughly equal amounts of heat from the Sun.
Earth is the only planet that has a single moon. Our Moon is the brightest and most familiar object in the night sky. In many ways, the Moon is responsible for making Earth such a great home. It stabilizes our planet's wobble, which has made the climate less variable over thousands of years.
Earth sometimes temporarily hosts orbiting asteroids or large rocks. They are typically trapped by Earth's gravity for a few months or years before returning to an orbit around the Sun. Some asteroids will be in a long “dance” with Earth as both orbit the Sun.
Some moons are bits of rock that were captured by a planet's gravity, but our Moon is likely the result of a collision billions of years ago. When Earth was a young planet, a large chunk of rock smashed into it, displacing a portion of Earth's interior. The resulting chunks clumped together and formed our Moon. With a radius of 1,080 miles (1,738 kilometers), the Moon is the fifth largest moon in our solar system (after Ganymede, Titan, Callisto, and Io).
The Moon is an average of 238,855 miles (384,400 kilometers) away from Earth. That means 30 Earth-sized planets could fit in between Earth and its Moon.
Earth has no rings.
When the solar system settled into its current layout about 4.5 billion years ago, Earth formed when gravity pulled swirling gas and dust in to become the third planet from the Sun. Like its fellow terrestrial planets, Earth has a central core, a rocky mantle, and a solid crust.
Earth is composed of four main layers, starting with an inner core at the planet's center, enveloped by the outer core, mantle, and crust.
The inner core is a solid sphere made of iron and nickel metals about 759 miles (1,221 kilometers) in radius. There the temperature is as high as 9,800 degrees Fahrenheit (5,400 degrees Celsius). Surrounding the inner core is the outer core. This layer is about 1,400 miles (2,300 kilometers) thick, made of iron and nickel fluids.
In between the outer core and crust is the mantle, the thickest layer. This hot, viscous mixture of molten rock is about 1,800 miles (2,900 kilometers) thick and has the consistency of caramel. The outermost layer, Earth's crust, goes about 19 miles (30 kilometers) deep on average on land. At the bottom of the ocean, the crust is thinner and extends about 3 miles (5 kilometers) from the seafloor to the top of the mantle.
Like Mars and Venus, Earth has volcanoes, mountains, and valleys. Earth's lithosphere, which includes the crust (both continental and oceanic) and the upper mantle, is divided into huge plates that are constantly moving. For example, the North American plate moves west over the Pacific Ocean basin, roughly at a rate equal to the growth of our fingernails. Earthquakes result when plates grind past one another, ride up over one another, collide to make mountains, or split and separate.
Earth's global ocean, which covers nearly 70% of the planet's surface, has an average depth of about 2.5 miles (4 kilometers) and contains 97% of Earth's water. Almost all of Earth's volcanoes are hidden under these oceans. Hawaii's Mauna Kea volcano is taller from base to summit than Mount Everest, but most of it is underwater. Earth's longest mountain range is also underwater, at the bottom of the Arctic and Atlantic oceans. It is four times longer than the Andes, Rockies and Himalayas combined.
Near the surface, Earth has an atmosphere that consists of 78% nitrogen, 21% oxygen, and 1% other gases such as argon, carbon dioxide, and neon. The atmosphere affects Earth's long-term climate and short-term local weather and shields us from much of the harmful radiation coming from the Sun. It also protects us from meteoroids, most of which burn up in the atmosphere, seen as meteors in the night sky, before they can strike the surface as meteorites.
Our planet's rapid rotation and molten nickel-iron core give rise to a magnetic field, which the solar wind distorts into a teardrop shape in space. (The solar wind is a stream of charged particles continuously ejected from the Sun.) When charged particles from the solar wind become trapped in Earth's magnetic field, they collide with air molecules above our planet's magnetic poles. These air molecules then begin to glow and cause aurorae, or the northern and southern lights.
The magnetic field is what causes compass needles to point to the North Pole regardless of which way you turn. But the magnetic polarity of Earth can change, flipping the direction of the magnetic field. The geologic record tells scientists that a magnetic reversal takes place about every 400,000 years on average, but the timing is very irregular. As far as we know, such a magnetic reversal doesn't cause any harm to life on Earth, and a reversal is very unlikely to happen for at least another thousand years. But when it does happen, compass needles are likely to point in many different directions for a few centuries while the switch is being made. And after the switch is completed, they will all point south instead of north.
- Measuring Up - If the Sun were as tall as a typical front door, Earth would be the size of a nickel.
- We're On It - Earth is a rocky planet with a solid and dynamic surface of mountains, canyons, plains and more. Most of our planet is covered in water.
- Breathe Easy - Earth's atmosphere is 78 percent nitrogen, 21 percent oxygen and 1 percent other ingredients—the perfect balance to breathe and live.
- Our Cosmic Companion - Earth has one moon.
- Ringless - Earth has no rings.
- Orbital Science - Many orbiting spacecraft study the Earth from above as a whole system—observing the atmosphere, ocean, glaciers, and the solid earth.
- Home, Sweet Home - Earth is the perfect place for life as we know it.
- Protective Shield - Our atmosphere protects us from incoming meteoroids, most of which break up in our atmosphere before they can strike the surface.
To map our home planet, Google Earth depends mostly on satellite imagery for land surfaces and sonar imagery for the sea floor. Maps of the Universe likewise depend on different kinds of detectors for different kinds of features. Maps of the cosmic microwave background (CMB), for example, depend on measuring minute differences in the temperature of the sky.
When astrophysicist Julian Borrill came to Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC) in 1997, his first project was designing computational tools for future CMB experiments, a toolbox capable of handling an expected flood of cosmic data. He and his colleagues Radek Stompor and Andrew Jaffe devised the Microwave Anisotropy Dataset Computational Analysis Package, or MADCAP. An essential part of the kit was a module for making maps.
Signal versus noise
Mapping the CMB requires accurately accounting for noise in the data. Each pixel begins as part noise and part signal. “White” noise has the property that each measurement is independent of all the others and can accurately be averaged, so it’s easy to account for the noise and estimate the signal’s contribution to the mix.
“Colored” or correlated noise is more challenging: here pixel noise varies across the sky, and its values are interrelated according to the particular path that the telescope has scanned during an exposure.
“You can’t account for correlated noise by just averaging it,” says Borrill, now with the Computational Cosmology Center (C3) in Berkeley Lab’s Computational Research Division. “To make a map it takes a special code to weigh and account for the noise in each pixel at each point in time.”
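To make the distinction concrete, the toy sketch below (not MADCAP or MADmap code) generates a white-noise stream and a simple correlated stream modeled as a first-order autoregressive process, a stand-in for the slow drifts described later; averaging beats the white noise down as expected, while the mean of the correlated stream can stay far from zero:

```python
import numpy as np

# Toy illustration only: white vs. correlated ("colored") noise streams.
rng = np.random.default_rng(0)
n = 100_000

white = rng.normal(0.0, 1.0, n)

# Correlated noise: an AR(1) process with a long correlation time.
alpha = 0.999
colored = np.empty(n)
colored[0] = rng.normal()
for i in range(1, n):
    colored[i] = alpha * colored[i - 1] + rng.normal(0.0, np.sqrt(1 - alpha**2))

print("mean of white noise:   ", white.mean())    # very close to 0
print("mean of colored noise: ", colored.mean())  # can remain far from 0
```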
The detectors used to measure the temperature of the cosmic microwave background are particularly susceptible to colored noise, so the MADCAP bundle of codes included one specially designed to make maps from data where the noise is not white. Programmed by C3 member Christopher Cantalupo, the code is named MADmap.
The best detectors for measuring radiation at wavelengths between a millimeter and a fifth of a millimeter, where much of the CMB radiation lies, are bolometers. (CMB radiation at lower frequencies is measured with radiometers.) A bolometer gauges how much an incoming photon heats up a very cold detector, whose temperature is kept at a tiny fraction of a degree above absolute zero. Correlated or colored noise is a known characteristic of bolometers.
“Because a bolometer’s temperature can never be at absolute zero, it will always have some thermal noise,” Borrill says. This noise level varies as the bolometer’s temperature changes. “Another source of noise is that when a photon hits a bolometer, it ‘rings’ for a while.”
As for the first kind of noise, Cantalupo says, “because the refrigerator is not perfect there are long-term drifts in temperature; the noise changes slowly with time.”
He compares the colored-noise problem to the situation of a traffic patrolman using a radar gun to determine the expected speed of passing cars. “If there is very little traffic, the speed of one car will be largely independent of the speed of the others. But if the traffic becomes dense, cars traveling near each other are likely to be traveling at similar speeds,” he explains. “There will still be some variation in speed, and this scatter in the measurements is colored noise.”
To determine the expected speed of a passing car based on the speeds of cars previously measured, the correlation among cars in dense traffic must be taken into account – especially the scatter of measurements in heavy traffic – so that those measurements are not given too much significance in the final estimate.
“If we give more significance to measurements taken farther apart in time, we can make a better estimate of the underlying signal,” Cantalupo says.
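In statistical terms, this kind of weighting is a generalized least-squares estimate: each measurement is weighted by the inverse of the noise covariance rather than simply averaged. The sketch below uses made-up speeds and a made-up covariance matrix purely to illustrate the idea; none of the numbers come from the patrol-car example:

```python
import numpy as np

# Minimal generalized-least-squares sketch: estimating one underlying value
# from measurements with correlated noise, weighting by the inverse noise
# covariance instead of taking a plain average. Illustrative values only.

d = np.array([61.0, 62.0, 61.5, 70.0])      # measurements (e.g., speeds)
N = np.array([[4.0, 3.5, 3.0, 0.0],          # first three measurements are
              [3.5, 4.0, 3.5, 0.0],          # strongly correlated ("dense
              [3.0, 3.5, 4.0, 0.0],          # traffic"); the last one is
              [0.0, 0.0, 0.0, 4.0]])         # independent
A = np.ones((4, 1))                          # every measurement sees the same value

Ninv = np.linalg.inv(N)
estimate = np.linalg.solve(A.T @ Ninv @ A, A.T @ Ninv @ d)
print("plain average:", d.mean())            # treats all four points equally
print("GLS estimate: ", estimate[0])         # down-weights the correlated trio
```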
Cantalupo describes the MADmap process as first collecting the basic data, a widely varying curve with fine structure imposed on large excursions, including information on where the instrument is pointed in the sky and the time during which the data was collected. The data is filtered to remove “average” noise — “but of course we haven’t just filtered noise but signal too,” says Cantalupo.
The math that determines how the noise is correlated from time to time within each pixel is performed on this smoothed-out, filtered data. Then the filtering is undone to restore the signal – which for CMB data is the temperature of the sky for each pixel in the map.
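The procedure Cantalupo describes corresponds to the standard maximum-likelihood map-making estimate, m̂ = (AᵀN⁻¹A)⁻¹ AᵀN⁻¹d, where d is the time-ordered data, A is the pointing matrix that assigns each time sample to a sky pixel, and N is the time-domain noise covariance. The toy sketch below forms these matrices explicitly on a tiny invented problem; production codes such as MADmap never build them densely and instead solve the system iteratively for enormous data volumes:

```python
import numpy as np

# Toy dense-matrix illustration of maximum-likelihood map-making:
#   m_hat = (A^T N^-1 A)^-1 A^T N^-1 d
# All numbers below are invented for illustration.

rng = np.random.default_rng(1)
n_samples, n_pixels = 200, 5

# Pointing matrix: each time sample looks at one sky pixel.
pointing = rng.integers(0, n_pixels, n_samples)
A = np.zeros((n_samples, n_pixels))
A[np.arange(n_samples), pointing] = 1.0

true_map = np.array([1.0, -0.5, 0.3, 2.0, 0.0])   # invented "sky" temperatures

# Correlated (AR(1)) noise in the time stream.
alpha, noise = 0.95, np.empty(n_samples)
noise[0] = rng.normal()
for i in range(1, n_samples):
    noise[i] = alpha * noise[i - 1] + rng.normal(0.0, np.sqrt(1 - alpha**2))

d = A @ true_map + 0.3 * noise                     # time-ordered data

# Time-domain noise covariance of the AR(1) process, then the ML map estimate.
lags = np.abs(np.subtract.outer(np.arange(n_samples), np.arange(n_samples)))
N = (0.3 ** 2) * alpha ** lags
Ninv = np.linalg.inv(N)
m_hat = np.linalg.solve(A.T @ Ninv @ A, A.T @ Ninv @ d)

print("true map:     ", true_map)
print("estimated map:", np.round(m_hat, 2))
```

The pointing matrix plays the role of the "where the instrument is pointed" information in the text, and N⁻¹ encodes how strongly measurements taken close together in time are down-weighted relative to well-separated ones.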
MADmap spreads its wings
Borrill says that although MADmap was designed with CMB data in mind, “it was always intended to be independent of the specifics of any one experiment.”
MADmap has been used for CMB experiments from the balloon-borne MAXIMA, which mapped a portion of the northern sky in 1998, and BOOMERANG, which circled the South Pole in 1999, on up to the European Space Agency’s Planck satellite, launched on an Ariane rocket from French Guiana in May 2009; all these experiments and others record data in different formats, so MADmap’s flexibility is essential.
MADmap is so flexible, in fact, that it is applicable to any kind of experiment whose data is similar to the model it was built for. From the beginning, it has been posted on the Internet as open-source software.
Enter Herschel, a satellite that by coincidence was launched on the same Ariane rocket as Planck. Unlike Planck, Herschel is an infrared observatory. It carries a 3.5-meter telescope, the most powerful infrared telescope ever flown in space. The principal detectors for one of its three instruments, the Photoconductor Array Camera and Spectrometer (PACS), are two arrays of highly sensitive bolometers. In 2007, long before Herschel and Planck were launched, Cantalupo got a call from Pierre Chanial, a PACS scientist who was developing the instrument’s mapping software.
“He wanted to know if it was okay with us if he used MADmap as the core map-making software for PACS,” Cantalupo says. “He said it was suggested to him by Andrew Jaffe, who had designed the original MADCAP with Julian.”
Cantalupo and Borrill and their colleagues were delighted that MADmap promised to be useful in unanticipated ways. The PACS bolometers are photometers designed to collect far-infrared light, mapping galaxies and other objects whose internal structures are obscured, such as clouds of gas and dust where new stars are being born or disks in which solar systems may be forming. But the novel application of MADmap to the infrared data introduced some challenges.
“The PACS data-transfer pipeline needs to use Java, which had not been contemplated when MADmap was written,” Cantalupo says. “So we were able to be of some use in helping with the rewrite.”
Different questions arose when Herschel began making images in July after reaching orbit at Lagrangian Point 2, where the combined gravity of Earth and Sun maintains the satellite mostly in the Earth’s shadow — thus an excellent place for an observatory. C3’s Theodore Kisner became involved in the effort to help the PACS team make the best of MADmap.
“There was some trouble with the real data relating to the character of the noise,” Kisner says. “Since I’ve been working with noise estimation, I was able to contribute to this aspect.”
The way PACS makes an image is different from the way a CMB instrument maps the sky: a CMB experiment essentially scans across the sky in one smooth stroke after another, whereas, Kisner says, “Herschel kind of wobbles around, looking at the same region sort of like looking through a hole in a fence.”
Noise estimation is easier for Herschel in some ways, not only because its bolometers are very stable but because they map specific regions; unlike the ubiquitous cosmic microwave background, “parts of the sky in a Herschel image are actually dark,” Kisner says. “No signal at all is a perfect baseline for accounting for noise.”
Nevertheless, what the two kinds of observations have in common is that both depend on time-streams of data, which is where correlated noise resides.
“Our major hope now is that we can persuade the PACS folks away from using Java and instead toward using our latest version of MADmap,” Cantalupo says. “Some of their observations require very long exposures, where our new Version 2 will be very helpful. On the smaller observations, the Java version is okay.”
Already the C3 team has devised a new secondary format for Herschel that will be able to handle various kinds of data. MADmap 2 readily reads data in different formats and would be easier to use and more flexible than the present version.
“It’s completely their decision,” says Cantalupo. “We’re just happy to be useful.” He and Kisner have attended data processing workshops at the NASA Herschel Science Center at Caltech to show scientists who are using Herschel good ways of using MADmap.
For his part, Julian Borrill is delighted that a program initially developed with support from NASA’s Applied Information Systems Research program and Berkeley Lab’s Laboratory Directed Research and Development program has spread its wings and has already proved its merit for analyzing very different kinds of astronomical data.
More about tackling CMB data with MADCAP
Wikipedia’s article on bolometers
More about the launch of Planck (and Herschel!)
More about Herschel and PACS
More about NASA Applied Information Systems Research
Functions on a Cartesian Plane
We represent functions graphically by plotting points on a coordinate plane (also sometimes called the Cartesian plane). The coordinate plane is a grid formed by a horizontal number line and a vertical number line that cross at a point called the origin. The origin has this name because it is the “starting” location; every other point on the grid is described in terms of how far it is from the origin.
The horizontal number line is called the x-axis and the vertical line is called the y-axis. We can represent each value of a function as a point on the plane by treating the input value as a distance along the x-axis and the output value as a distance along the y-axis. For example, if the value of a function is 2 when the input value is 4, we can represent this pair of values with a point that is 4 units to the right of the origin (that is, 4 units along the x-axis) and 2 units up (2 units in the y direction).
We write the location of this point as (4, 2).
Plotting Points on a Cartesian Plane
Plot the following coordinate points on the Cartesian plane.
a) (5, 3)
b) (-2, 6)
c) (3, -4)
d) (-5, -7)
Here are all the coordinate points on the same plot.
Notice that we move to the right for a positive x-value and to the left for a negative one, just as we would on a single number line. Similarly, we move up for a positive y-value and down for a negative one.
The x- and y-axes divide the coordinate plane into four quadrants. The quadrants are numbered counter-clockwise starting from the upper right, so the plotted point for (a) is in the first quadrant, (b) is in the second quadrant, (c) is in the fourth quadrant, and (d) is in the third quadrant.
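For readers who want to experiment, here is a short matplotlib sketch (not part of the lesson's printed materials) that plots the four example points and draws the axes through the origin so the quadrants are easy to see.

```python
import matplotlib.pyplot as plt

# The four points from the example above, one from each quadrant.
points = [(5, 3), (-2, 6), (3, -4), (-5, -7)]
labels = ["a", "b", "c", "d"]

fig, ax = plt.subplots()
for (x, y), label in zip(points, labels):
    ax.plot(x, y, "o")
    ax.annotate(f"{label}) ({x}, {y})", (x, y), textcoords="offset points", xytext=(5, 5))

# Draw the x- and y-axes through the origin so the quadrants are visible.
ax.axhline(0, color="gray", linewidth=0.8)
ax.axvline(0, color="gray", linewidth=0.8)
ax.set_xlabel("x")
ax.set_ylabel("y")
plt.show()
```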
Graph a Function From a Table
If we know a rule or have a table of values that describes a function, we can draw a graph of the function. A table of values gives us coordinate points that we can plot on the Cartesian plane.
Graphing a Function Given a Table of Values
1. Graph the function that has the following table of values.
The table gives us five sets of coordinate points: (-2, 6), (-1, 8), (0, 10), (1, 12), (2, 14).
To graph the function, we plot all the coordinate points. Since we are not told the domain of the function or given a real-world context, we can just assume that the domain is the set of all real numbers. To show that the function holds for all values in the domain, we connect the points with a smooth line (which, we understand, continues infinitely in both directions).
2. Graph the function that has the following table of values.
The table gives us five sets of coordinate points: (0, 0), (1, 1), (2, 4), (3, 9), and (4, 16).
To graph the function, we plot all the coordinate points. Since we are not told the domain of the function, we can assume that the domain is the set of all non-negative real numbers. To show that the function holds for all values in the domain, we connect the points with a smooth curve. The curve does not make sense for negative values of the independent variable, so it stops at x = 0, but it continues infinitely in the positive direction.
Graph the function that has the following table of values.
This function represents the total cost of the balloons delivered to your house. Each balloon is $3 and the store delivers if you buy a dozen balloons or more. The delivery charge is a $5 flat fee.
The table gives us five sets of coordinate points: (12, 41), (13, 44), (14, 47), (15, 50), and (16, 53).
To graph the function, we plot all the coordinate points. Since the values represent the number of balloons for 12 balloons or more, the domain of this function is all integers greater than or equal to 12. In this problem, the points are not connected by a line or curve because it doesn’t make sense to have non-integer values of balloons.
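As a quick illustration of the difference between the last two examples, the sketch below (using matplotlib, with the same tables of values) connects the points that follow the rule y = x² with a smooth curve over a continuous domain, while the balloon-cost points are left as isolated dots because only whole numbers of balloons make sense.

```python
import numpy as np
import matplotlib.pyplot as plt

fig, (left, right) = plt.subplots(1, 2, figsize=(9, 4))

# Example 2: the table (0,0), (1,1), (2,4), (3,9), (4,16) follows the rule y = x**2,
# so we connect the plotted points with a smooth curve over the non-negative reals.
x_table = np.array([0, 1, 2, 3, 4])
left.plot(x_table, x_table**2, "o")
x_smooth = np.linspace(0, 4, 200)
left.plot(x_smooth, x_smooth**2)
left.set_title("continuous domain: connect with a curve")

# Example 3: cost of n balloons at $3 each plus a $5 delivery fee, for n >= 12.
# The domain is whole numbers only, so the points are left unconnected.
n = np.arange(12, 17)
right.plot(n, 3 * n + 5, "o")
right.set_title("integer domain: points only")

plt.show()
```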
In order to draw a graph of a function given the function rule, we must first make a table of values to give us a set of points to plot. Choosing good values for the table is a skill you will develop throughout this course. When you pick values, here are some of the things you should keep in mind.
- Pick only values from the domain of the function.
- If the domain is the set of real numbers or a subset of the real numbers, the graph will be a continuous curve.
- If the domain is the set of integers or a subset of the integers, the graph will be a set of points not connected by a curve.
- Picking integer values is best because it makes calculations easier, but sometimes we need to pick other values to capture all the details of the function.
- Often we start with one set of values. Then after drawing the graph, we realize that we need to pick different values and redraw the graph.
For 1-5, plot the coordinate points on the Cartesian plane.
- (4, -4)
- (2, 7)
- (-3, -5)
- (6, 3)
- (-4, 3)
- Give the coordinates for each point in this Cartesian plane.
For 7-10, graph the function that has the following table of values.
To view the Review answers, open this PDF file and look for section 1.12.
A government bond or sovereign bond is an instrument of indebtedness (a bond) issued by a national government to support government spending. It generally includes a commitment to pay periodic interest, called coupon payments, and to repay the face value on the maturity date. For example, a bondholder invests $20,000 (called face value) into a 10-year government bond with a 10% annual coupon; the government would pay the bondholder 10% of the $20,000 each year. At the maturity date the government would give back the original $20,000.
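As a quick check of the arithmetic in that example, the following Python sketch lists the bond's cash flows year by year; the numbers come straight from the figures above.

```python
# Cash flows for the example above: a 10-year bond with $20,000 face value and a 10% annual coupon.
face_value = 20_000
coupon_rate = 0.10
years = 10

for year in range(1, years + 1):
    payment = face_value * coupon_rate          # $2,000 coupon each year
    if year == years:
        payment += face_value                   # face value repaid at maturity
    print(f"year {year:2d}: ${payment:,.0f}")

print("total paid over the life of the bond:",
      f"${face_value * coupon_rate * years + face_value:,.0f}")   # $40,000 in all
```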
Government bonds can be denominated in a foreign currency or the government's domestic currency. Countries with less stable economies tend to denominate their bonds in the currency of a country with a more stable economy (i.e. a hard currency). When governments with less stable economies issue bonds, there is a possibility they will be unable to make the interest payments and may default. All bonds carry a default risk. International credit rating agencies provide ratings for each country's bonds. Bondholders generally demand higher yields from riskier bonds. For instance, on May 24, 2016, 10-year government bonds issued by the Canadian government offered a yield of 1.34%, while 10-year government bonds issued by the Brazilian government offered a yield of 12.84%.
The Dutch Republic became the first state to finance its debt through bonds when it assumed bonds issued by the city of Amsterdam in 1517. The average interest rate at that time fluctuated around 20%.
The first official government bond issued by a national government was issued by the Bank of England in 1694 to raise money to fund a war against France. The form of these bonds was both lottery and annuity. The Bank of England and government bonds were introduced in England by William III of England (also called William of Orange), who financed England's war efforts by copying the approach of issuing bonds and raising government debt from the Seven Dutch Provinces, where he ruled as a Stadtholder.
Later, governments in Europe started following the trend and issuing perpetual bonds (bonds with no maturity date) to fund wars and other government spending. The use of perpetual bonds ceased in the 20th century, and currently governments issue bonds of limited term to maturity.
During the American Revolution, the U.S. government started to issue bonds in order to raise money; these bonds were called loan certificates. The total amount generated by bonds was $27 million and helped finance the war.
A government bond in a country's own currency is strictly speaking a risk-free bond, because the government can if necessary create additional currency in order to redeem the bond at maturity. There have however been instances where a government has chosen to default on its domestic currency debt rather than create additional currency, such as Russia in 1998 (the "ruble crisis") (see national bankruptcy).
Investors may use rating agencies to assess credit risk. The Securities and Exchange Commission (SEC) has designated ten rating agencies as nationally recognized statistical rating organizations.
Currency risk is the risk that the value of the currency a bond pays out will decline compared to the holder's reference currency. For example, a German investor would consider United States bonds to have more currency risk than German bonds (since the dollar may go down relative to the euro); similarly, a United States investor would consider German bonds to have more currency risk than United States bonds (since the euro may go down relative to the dollar). A bond paying in a currency that does not have a history of keeping its value may not be a good deal even if a high interest rate is offered. The currency risk is determined by the fluctuation of exchange rates.
Inflation risk is the risk that the value of the currency a bond pays out will decline over time. Investors expect some amount of inflation, so the risk is that the inflation rate will be higher than expected. Many governments issue inflation-indexed bonds, which protect investors against inflation risk by linking both interest payments and maturity payments to a consumer price index. In the UK these bonds are called Index-linked bonds.
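A minimal sketch of how such index-linking works, with a made-up CPI path purely for illustration (real index-linked bonds follow specific indexation lags and market conventions that are not modeled here):

```python
# Toy inflation-indexed bond: the principal is scaled by the ratio of the current
# CPI to the CPI at issue, and each coupon is paid on the adjusted principal.
principal_at_issue = 1_000
coupon_rate = 0.02
cpi_at_issue = 100.0
cpi_path = [103.0, 106.1, 109.3]        # hypothetical index levels in years 1-3

for year, cpi in enumerate(cpi_path, start=1):
    adjusted_principal = principal_at_issue * cpi / cpi_at_issue
    coupon = coupon_rate * adjusted_principal
    print(f"year {year}: adjusted principal ${adjusted_principal:,.2f}, coupon ${coupon:,.2f}")
```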
Interest rate risk
Also referred to as market risk, all bonds are subject to interest rate risk. Interest rate changes affect the value of a bond: if interest rates fall, bond prices rise, and if interest rates rise, bond prices fall. When interest rates rise, existing bonds become less attractive because investors can earn a higher coupon rate on new issues, so holders of existing bonds face holding-period risk. Interest rates and bond prices are negatively correlated. Lower fixed-rate coupon rates mean higher interest rate risk, and higher fixed-rate coupon rates mean lower interest rate risk. The maturity of a bond also affects its interest rate risk: longer maturities mean higher interest rate risk and shorter maturities mean lower interest rate risk.
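The inverse price-yield relationship and the effect of maturity can be seen with a small discounted-cash-flow sketch; the bonds and yields below are hypothetical, and real market conventions (day counts, semi-annual compounding, and so on) are ignored:

```python
# Price of a fixed-coupon bond as the sum of its discounted cash flows.
def bond_price(face, coupon_rate, years, yield_rate):
    coupon = face * coupon_rate
    price = sum(coupon / (1 + yield_rate) ** t for t in range(1, years + 1))
    return price + face / (1 + yield_rate) ** years

# Prices fall when yields rise, and the swing is larger for longer maturities.
for years in (2, 10, 30):
    low, high = bond_price(1000, 0.05, years, 0.04), bond_price(1000, 0.05, years, 0.06)
    print(f"{years:2d}-year 5% bond: price at 4% yield = {low:7.2f}, at 6% yield = {high:7.2f}")
```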
If a central bank purchases a government security, such as a bond or treasury bill, it increases the money supply because the central bank injects liquidity (cash) into the economy. Doing this lowers the government bond's yield. Conversely, when a central bank is fighting inflation, it decreases the money supply.
These actions of increasing or decreasing the amount of money in the banking system are called monetary policy.
In the UK, government bonds are called gilts. Older issues have names such as "Treasury Stock" and newer issues are called "Treasury Gilt". Inflation-indexed gilts are called index-linked gilts, which means the value of the gilt rises with inflation. They are fixed-interest securities issued by the British government in order to raise money.
UK gilts have maturities stretching much further into the future than other European government bonds, which has influenced the development of pension and life insurance markets in the respective countries.
A conventional UK gilt might look like this – "Treasury stock 3% 2020". On 27 April 2019 the United Kingdom 10Y Government Bond had a 1.145% yield. The Central Bank Rate is 0.10% and the United Kingdom rating is AA, according to Standard & Poor's.
- Savings bonds: they are considered one of the safest investments.
- Treasury notes (T-notes): the maturity of these bonds is two, three, five, or ten years; they provide fixed coupon payments every six months and have a face value of $1,000.
- Treasury bonds (T-bonds or long bonds): are the treasury bonds with the longest maturity, from twenty years to thirty years. They also have a coupon payment every six months.
- Treasury Inflation-Protected Securities (TIPS): are the inflation-indexed bonds issued by the U.S. Treasury. The principal of these bonds is adjusted to the Consumer Price Index. In other words, the principal increases with inflation and decreases with deflation.
The principal argument for investors to hold U.S. government bonds is that the bonds are exempt from state and local taxes.
The bonds are sold through an auction system by the government. They are bought and sold on the secondary market, the financial market in which financial instruments such as stocks, bonds, options, and futures are traded. The secondary market can be separated into two categories: the over-the-counter market and the exchange market.
TreasuryDirect is the official website where investors can purchase treasury securities directly from the U.S. government. This online system allows investors to save money on commissions and fees charged through traditional channels. Investors can also use banks or brokers to hold a bond.
- "Sovereign Bond Definition". investopedia.com. 2011. Retrieved 15 December 2011.
- "Sovereign Bond Definition". investopedia.com. 2011. Retrieved 15 December 2011.
- "What is Sovereign Debt".
- "Portugal sovereign debt crisis". Archived from the original on 2014-08-10. Retrieved 2014-08-02.
- "Brief history of bond investing".
- "Analysis: Counting the cost of currency risk in emerging bond markets". Reuters. 22 November 2013.
- "Daily Prices and Yields". UK Debt Management Office. Retrieved 19 August 2020.
- "Gilt Market: About gilts". UK Debt Management Office. Archived from the original on 2016-11-10. Retrieved 2011-06-13.
- "Gilt Market: Index-linked gilts". UK Debt Management Office. Archived from the original on 2011-07-18. Retrieved 2011-06-13.
- "Government bonds in the UK: the facts". Currency.com. Retrieved April 20, 2020.
- "Gilts and corporate bonds explained". 2 August 2016.
- "World government bonds UK".
- "Example of U.S. Government Bonds". |
Cyberbullying is the use of social networks to repeatedly harm or harass other people in a deliberate manner. According to U.S. Legal Definitions, "cyber-bullying could be limited to posting rumors or gossips about a person in the internet bringing about hatred in other’s minds; or it may go to the extent of personally identifying victims and publishing materials severely defaming and humiliating them".
With the increase in use of these technologies, cyberbullying has become increasingly common, especially among teenagers. Awareness has also risen, due in part to high-profile cases like the suicide of Tyler Clementi.
Cyberbullying is defined in legal glossaries as
- actions that use information and communication technologies to support deliberate, repeated, and hostile behavior by an individual or group, that is intended to harm another or others.
- use of communication technologies for the intention of harming another person
- use of internet service and mobile technologies such as web pages and discussion groups as well as instant messaging or SMS text messaging with the intention of harming another person.
Examples of what constitutes cyberbullying include communications that seek to intimidate, control, manipulate, put down, falsely discredit, or humiliate the recipient. The actions are deliberate, repeated, and hostile behavior intended to harm another. Cyberbullying has been defined by The National Crime Prevention Council: “When the Internet, cell phones or other devices are used to send or post text or images intended to hurt or embarrass another person."
A cyberbully may be a person whom the target knows or an online stranger. A cyberbully may be anonymous and may solicit involvement of other people online who do not even know the target. This is known as a "digital pile-on."
Cyberbullying has been defined as "when the Internet, cell phones or other devices are used to send or post text or images intended to hurt or embarrass another person". Other researchers use similar language to describe the phenomenon.
Cyberbullying vs. cyberstalking
The practice of cyberbullying is not limited to children and, while the behavior is identified by the same definition when practiced by adults, the distinction in age groups sometimes refers to the abuse as cyberstalking or cyberharassment when perpetrated by adults toward adults. Common tactics used by cyberstalkers are performed in public forums, social media or online information sites and are intended to threaten a victim's earnings, employment, reputation, or safety. Behaviors may include encouraging others to harass the victim and trying to affect a victim's online participation. Many cyberstalkers try to damage the reputation of their victim and turn other people against them.
Cyberstalking may include false accusations, monitoring, making threats, identity theft, damage to data or equipment, the solicitation of minors for sex, or gathering information in order to harass. A repeated pattern of such actions and harassment against a target by an adult constitutes cyberstalking. Cyberstalking often features linked patterns of online and offline behavior. There are consequences of law in offline stalking and online stalking, and cyberstalkers can be put in jail. Cyberstalking is a form of cyberbullying.
Certain characteristics inherent in online technologies increase the likelihood that they will be exploited for deviant purposes. Unlike physical bullying, electronic bullies can remain virtually anonymous using temporary email accounts, pseudonyms in chat rooms, instant messaging programs, cell-phone text messaging, and other Internet venues to mask their identity; this perhaps frees them from normative and social constraints on their behavior.
Additionally, electronic forums often lack supervision. While chat hosts regularly observe the dialog in some chat rooms in an effort to police conversations and evict offensive individuals, personal messages sent between users (such as electronic mail or text messages) are viewable only by the sender and the recipient, thereby falling outside the regulatory reach of such authorities. In addition, when teenagers know more about computers and cellular phones than their parents or guardians, they are therefore able to operate the technologies without concern that a parent will discover their experience with bullying (whether as a victim or offender).
Another factor is the inseparability of a cellular phone from its owner, making that person a perpetual target for victimization. Users often need to keep their phone turned on for legitimate purposes, which provides the opportunity for those with malicious intentions to engage in persistent unwelcome behavior such as harassing telephone calls or threatening and insulting statements via the cellular phone’s text messaging capabilities. Cyberbullying thus penetrates the walls of a home, traditionally a place where victims could seek refuge from other forms of bullying. Compounding this infiltration into the home life of the cyberbully victim is the unique way in which the internet can "create simultaneous sensations of exposure (the whole world is watching) and alienation (no one understands)." For youth who experience shame or self-hatred, this effect is dangerous because it can lead to extreme self-isolation.
One possible advantage for victims of cyberbullying over traditional bullying is that they may sometimes be able to avoid it simply by avoiding the site/chat room in question. Email addresses and phone numbers can be changed; in addition, most email accounts now offer services that will automatically filter out messages from certain senders before they even reach the inbox, and phones offer similar caller ID functions.
However, this does not protect against all forms of cyberbullying. Publishing of defamatory material about a person on the internet is extremely difficult to prevent and once it is posted, many people or archiving services can potentially download and copy it, at which point it is almost impossible to remove from the Internet. Some perpetrators may post victims' photos, or victims' edited photos featuring defaming captions or pasting victims' faces on nude bodies. Examples of famous forums for disclosing personal data or photos to "punish" the "enemies" include the Hong Kong Golden Forum, Livejournal, and more recently JuicyCampus. Despite policies that describe cyberbullying as a violation of the terms of service, many social networking Web sites have been used to that end.
Cyberbullying is sometimes used by the targets of bullying to retaliate against their bullies, since factors such as anonymity, absence of the bully's supporting friends, and irrelevancy of physical strength in the online environment, make it safer to counterattack the bully by that means. Nancy E. Willard notes in Cyberbullying and Cyberthreats, "Unfortunately, students who retaliate against bullies online can be mistakenly perceived as the source of the problem. This can be especially true under circumstances where the original victimization left no tangible evidence, but the cyberbullying did."
Manuals to educate the public, teachers and parents summarize, "Cyberbullying is being cruel to others by sending or posting harmful material using a cell phone or the internet." Research, legislation and education in the field are ongoing. Basic definitions and guidelines to help recognize and cope with what is regarded as abuse of electronic communications have been identified.
- Cyberbullying involves repeated behavior with intent to harm.
- Cyberbullying is perpetrated through harassment, cyberstalking, denigration (sending or posting cruel rumors and falsehoods to damage reputation and friendships), impersonation, and exclusion (intentionally and cruelly excluding someone from an online group)
Cyberbullying can be as simple as continuing to send emails or text messages harassing someone who has said they want no further contact with the sender. It may also include public actions such as repeated threats, sexual remarks, pejorative labels (i.e., hate speech), or defamatory false accusations; ganging up on a victim by making the person the subject of ridicule in online forums; hacking into or vandalizing sites about a person; and posting false statements as fact aimed at discrediting or humiliating a targeted person. Cyberbullying could be limited to posting rumors about a person on the internet with the intention of bringing about hatred in others' minds or convincing others to dislike or participate in online denigration of a target. It may go to the extent of personally identifying victims of crime and publishing materials severely defaming or humiliating them.
Cyberbullies may disclose victims' personal data (e.g. real name, home address, or workplace/schools) at websites or forums or may use impersonation, creating fake accounts, comments or sites posing as their target for the purpose of publishing material in their name that defames, discredits or ridicules them. This can leave the cyberbully anonymous which can make it difficult for the offender to be caught or punished for their behavior, although not all cyberbullies maintain their anonymity. Text or instant messages and emails between friends can also constitute cyberbullying if what is said or displayed is hurtful to the participants.
The recent use of mobile applications and the rise of smartphones have given rise to a more accessible form of cyberbullying. It is expected that cyberbullying via these platforms will be associated with bullying via mobile phones to a greater extent than exclusively through other, more stationary internet platforms. In addition, the combination of cameras and Internet access and the instant availability of these modern smartphone technologies lend themselves to specific types of cyberbullying not found on other platforms. It is likely that those cyberbullied via mobile devices will experience a wider range of types of cyberbullying than those exclusively bullied elsewhere.
Cyberbullying can take place on social media sites such as Facebook, Myspace, and Twitter. “By 2008, 93% of young people between the ages of 12 and 17 were online. In fact, youth spend more time with media than any single other activity besides sleeping.” There are many risks attached to social media sites, and cyberbullying is one of the larger risks. One million children were harassed, threatened or subjected to other forms of cyberbullying on Facebook during the past year, while 90 percent of social-media-using teens who have witnessed online cruelty say they have ignored mean behavior on social media, and 35 percent have done this frequently. 95 percent of social-media-using teens who have witnessed cruel behavior on social networking sites say they have seen others ignoring the mean behavior, and 55 percent witness this frequently. “The most recent case of cyber-bullying and illegal activity on Facebook involved a memorial page for the young boys who lost their lives to suicide due to anti-gay bullying. The page quickly turned into a virtual grave desecration and platform condoning gay teen suicide and the murdering of homosexuals. Photos were posted of executed homosexuals, desecrated photos of the boys who died and supposed snuff photos of gays who have been murdered. Along with this were thousands of comments encouraging murder sprees against gays, encouragement of gay teen suicide, death threats etc. In addition, the page continually exhibited pornography to minors.”
Sexual harassment as a form of cyberbullying is common in video game culture. A study by the Journal of Experimental Social Psychology suggests that this harassment is due in part to the portrayal of women in video games. This harassment generally involves slurs directed towards women, sex role stereotyping, and overaggressive language.
In one case, in which Capcom sponsored an internet-streamed reality show pitting fighting game experts against each other for a prize of $25,000, one female gamer forfeited a match due to intense harassment. The coach of the opposing team, Aris Bakhtanians, stated, "The sexual harassment is part of the culture. If you remove that from the fighting game community, it's not the fighting game community… it doesn't make sense to have that attitude. These things have been established for years."
A study from National Sun Yat-sen University observed that children who enjoyed violent video games were far more likely to both experience and perpetrate cyberbullying.
Most law enforcement agencies have cyber-crime units and often Internet stalking is treated with more seriousness than reports of physical stalking. Help and resources can be searched by state or area.
The safety of schools is increasingly becoming a focus of state legislative action. There was an increase in cyberbullying legislation enacted between 2006 and 2010. Initiatives and curriculum requirements also exist in the UK (the Ofsted eSafety guidance) and Australia (Overarching Learning Outcome 13). In 2012, a group of teens in New Haven, Connecticut developed an app to help fight bullying. Called "Back Off Bully" (BOB), the web app is an anonymous resource for computer, smartphone or iPad. When someone witnesses or is the victim of bullying, they can immediately report the incident. The app asks questions about time, location and how the bullying is happening. As well as providing positive action and empowerment over an incident, the reported information helps by going to a database where administrators study it. Common threads are spotted so others can intervene and break the bully's pattern. BOB, the brainchild of fourteen teens in a design class, is being considered as standard operating procedure at schools across the state.
There are laws that only address online harassment of children or focus on child predators as well as laws that protect adult cyberstalking victims, or victims of any age. Currently, there are 45 cyberstalking (and related) laws on the books.
While some sites specialize in laws that protect victims age 18 and under, Working to Halt Online Abuse is a help resource containing a list of current and pending cyberstalking-related United States federal and state laws. It also lists those states that do not have laws yet and related laws from other countries. The Global Cyber Law Database (GCLD) aims to become the most comprehensive and authoritative source of cyber laws for all countries.
Children report being mean to each other online beginning as young as 2nd grade. According to research, boys initiate mean online activity earlier than girls do. However, by middle school, girls are more likely to engage in cyberbullying than boys. Whether the bully is male or female, his or her purpose is to intentionally embarrass, harass, intimidate, or threaten others online. This bullying occurs via email, text messaging, posts to blogs, and web sites.
The National Crime Prevention Association lists tactics often used by teen cyberbullies.
- Pretend they are other people online to trick others
- Spread lies and rumors about victims
- Trick people into revealing personal information
- Send or forward mean text messages
- Post pictures of victims without their consent
Studies in the psychosocial effects of cyberspace have begun to monitor the impacts cyberbullying may have on the victims, and the consequences it may lead to. Consequences of cyberbullying are multi-faceted, and affect online and offline behavior. Research on adolescents reported that changes in the victims' behavior as a result of cyberbullying could be positive. Victims "created a cognitive pattern of bullies, which consequently helped them to recognize aggressive people." However, the Journal of Psychosocial Research on Cyberspace abstract reports critical impacts in almost all of the respondents, taking the form of lower self-esteem, loneliness, disillusionment, and distrust of people. More extreme impacts included self-harm. Children have killed each other and committed suicide after having been involved in a cyberbullying incident.
The most current research in the field defines cyberbullying as "an aggressive, intentional act or behaviour that is carried out by a group or an individual repeatedly and over time against a victim who cannot easily defend him or herself" (Smith & Slonje, 2007, p. 249). Though the use of sexual remarks and threats are sometimes present in cyberbullying, it is not the same as sexual harassment, typically occurs among peers, and does not necessarily involve sexual predators.
Stalking online has criminal consequences just as physical stalking does. A target's understanding of why the cyberstalking is happening is helpful in remedying the situation and taking protective action. Cyberstalking is an extension of physical stalking. Among the factors that motivate stalkers are envy; pathological obsession (professional or sexual); unemployment or failure with one's own job or life; the intention to intimidate and cause others to feel inferior; delusion, where the stalker believes he or she "knows" the target; a desire to instill fear in a person to justify the stalker's status; and the belief that they can get away with it (anonymity). The UK National Workplace Bullying Advice Line theorizes that bullies harass victims in order to make up for inadequacies in their own lives.
The US federal cyberstalking law is designed to prosecute people for using electronic means to repeatedly harass or threaten someone online. There are resources dedicated to assisting adult victims deal with cyberbullies legally and effectively. One of the steps recommended is to record everything and contact police.
The nation-wide Australian Covert Bullying Prevalence Survey (Cross et al., 2009) assessed cyber-bullying experiences among 7,418 students. Rates of cyber-bullying increased with age, with 4.9% of students in Year 4 reporting cyberbullying compared to 7.9% in Year 9. Cross et al. (2009) reported that rates of bullying and harassing others were lower, but also increased with age. Only 1.2% of Year 4 students reported cyber-bullying others compared to 5.6% of Year 9 students.
Similarly, a Canadian study found:
- 23% of middle-schoolers surveyed had been bullied by e-mail
- 35% in chat rooms
- 41% by text messages on their cell phones
- Fully 41% did not know the identity of the perpetrators.
Across Europe, an average of 6% of children (9–16 years old) have been bullied online and only 3% of them admitted to being a bully (Hasebrink et al., 2011). However, in an earlier publication by Hasebrink et al. (2009), reporting the results of a meta-analysis from European Union countries, the authors estimated (via median results) that approximately 18% of European young people had been "bullied/harassed/stalked" via the internet and mobile phones. Cyber-harassment rates for young people across the EU member states ranged from 10% to 52%. The lower numbers may be the result of increasingly specific methods that divide the behavior into different variables.
In addition to the current research, Sourander et al. (2010) conducted a population-based cross-sectional study that took place in Finland. The authors of this study took the self-reports of 2,215 Finnish adolescents between the ages of 13 and 16 about cyberbullying and cybervictimization during the previous 6 months. It was found that, amongst the total sample, 4.8% were cybervictims only, 7.4% were cyberbullies only, and 5.4% were cyberbully-victims. Cybervictim-only status was associated with a variety of factors, including emotional and peer problems, sleeping difficulties, and feeling unsafe in school. Cyberbully-only status was associated with factors such as hyperactivity and low prosocial behavior, as well as conduct problems. Cyberbully-victim status was associated with all of the risk factors that were associated with both cybervictim-only status and cyberbully-only status. The authors of this study were able to conclude that cyberbullying as well as cybervictimization is associated not only with psychiatric issues, but psychosomatic issues. Many adolescents in the study reported headaches or difficulty sleeping. The authors believe that their results indicate a greater need for new ideas on how to prevent cyberbullying and what to do when it occurs. It is clearly a world-wide problem that needs to be taken seriously.
According to recent research, in Japan, 17 percent (compared with a 25-country average of 37 percent) of youth between the ages of 8 and 17 have been victims of online bullying activities. The number shows that online bullying is a serious concern in Japan. Teenagers who spend more than 10 hours a week on the Internet are more likely to become the target of online bullying. Only 28 percent of the survey participants understood what cyberbullying is. However, they do notice the severity of the issue, since 63 percent of those surveyed worry about being targeted as victims of cyberbullying.
With the advance of Internet technology, everyone can access the internet. Since teenagers find themselves congregating socially on the internet via social media, they become easy targets for cyberbullying. Forms of social media where cyberbullying occurs include but are not limited to email, text, chat rooms, mobile phones, mobile phone cameras and social websites (Facebook, Twitter). The ways a cyberbully potentially attacks a target include sending threatening email, posting the target's private contact information online, sending numerous anonymous emails to harass the target, or talking about the target in chat rooms or texts. Some cyberbullies even set up websites or blogs to post the target's images, publicize their personal information, gossip about the target, express why they hate the target, ask people to agree with the bully's view, and send links to the target to make sure they are watching the activity.
Much cyberbullying is an act of relational aggression, which involves alienating the victim from his or her peers through gossip or ostracism. This kind of attack can be easily launched via texting or other online activities. Here is an example of a 19-year-old teenager sharing his real experience of cyberbullying. When he was in high school, his classmates posted his photo online, insulted him constantly, and asked him to die. Because of the constant harassment, he did attempt suicide twice. Even when he quit school, the attacks did not stop.
Cyberbullying can cause serious psychological harm to its victims. They often feel anxious, nervous, tired, and depressed. Other examples of negative psychological trauma include losing confidence as a result of being socially isolated from their schoolmates or friends. Psychological problems can also show up in physical form, such as headaches, skin problems, abdominal pain, sleep problems, bed-wetting, and crying. It may also lead victims to commit suicide to end the bullying.
A study at Waikato University in New Zealand (Spacey, 2015) was the first to examine the effect of cyberbullying on student performance. The study examined the hypothesis that in a country where Tall Poppy Syndrome and cyberbullying are accepted social problems, students will try not to stand out in course rankings where their position could be identified by others who may wish to bully them and this will result in lower average performance and more collusion on assessed course work.
The paper's results indicate that over 70% of modern students fear cyberbullying and those fears reduce average student performance by almost 20%.
A survey by the Crimes Against Children Research Center at the University of New Hampshire in 2000 found that 6% of the young people in the survey had experienced some form of harassment including threats and negative rumours and 2% had suffered distressing harassment.
The 2004 I-Safe.org survey of 1,500 students between grades 4 and 8 found:
- 42% of children had been bullied while online. One in four have had it happen more than once.
- 35% had been threatened online. Nearly one in five had had it happen more than once.
- 21% had received mean or threatening e-mails or other messages.
- 58% admitted that someone had said mean or hurtful things to them online. More than four out of ten said it had happened more than once.
- 58% had not told their parents or an adult about something mean or hurtful that had happened to them online.
The Youth Internet Safety Survey-2, conducted by the Crimes Against Children Research Center at the University of New Hampshire in 2005, found that 9% of the young people in the survey had experienced some form of harassment. The survey was a nationally representative telephone survey of 1,500 youth 10–17 years old. One third reported feeling distressed by the incident, with distress being more likely for younger respondents and those who were the victims of aggressive harassment (including being telephoned, sent gifts, or visited at home by the harasser). Compared to youth not harassed online, victims are more likely to have social problems. On the other hand, youth who harass others are more likely to have problems with rule breaking and aggression. Significant overlap is seen — youth who are harassed are significantly more likely to also harass others.
Hinduja and Patchin completed a study in the summer of 2005 of approximately 1,500 Internet-using adolescents and found that over one-third of youth reported being victimized online, and over 16% of respondents admitted to cyber-bullying others. While most of the instances of cyber-bullying involved relatively minor behavior (41% were disrespected, 19% were called names), over 12% were physically threatened and about 5% were scared for their safety. Notably, fewer than 15% of victims told an adult about the incident.
Additional research by Hinduja and Patchin in 2007 found that youth who report being victims of cyber-bullying also experience stress or strain that is related to offline problem behaviors such as running away from home, cheating on a school test, skipping school, or using alcohol or marijuana. The authors acknowledge that both of these studies provide only preliminary information about the nature and consequences of online bullying, due to the methodological challenges associated with an online survey.
According to a 2005 survey by the National Children's Home charity and Tesco Mobile of 770 youth between the ages of 11 and 19, 20% of respondents revealed that they had been bullied via electronic means. Almost three-quarters (73%) stated that they knew the bully, while 26% stated that the offender was a stranger. 10% of responders indicated that another person has taken a picture and/or video of them via a cellular phone camera, consequently making them feel uncomfortable, embarrassed, or threatened. Many youths are not comfortable telling an authority figure about their cyber-bullying victimization for fear their access to technology will be taken from them; while 24% and 14% told a parent or teacher respectively, 28% did not tell anyone while 41% told a friend.
According to the 2006 Harris Interactive Cyberbullying Research Report, commissioned by the National Crime Prevention Council, cyber-bullying is a problem that “affects almost half of all American teens”.
In 2007, Debbie Heimowitz, a Stanford University master's student, created Adina's Deck, a film based on Stanford accredited research. She worked in focus groups for ten weeks in three schools to learn about the problem of cyber-bullying in Northern California. The findings determined that over 60% of students had been cyber-bullied and were victims of cyber-bullying. The film is now being used in classrooms nationwide as it was designed around learning goals pertaining to problems that students had understanding the topic. The middle school of Megan Meier is reportedly using the film as a solution to the crisis in their town.
In the summer of 2008, researchers Sameer Hinduja (Florida Atlantic University) and Justin Patchin (University of Wisconsin-Eau Claire) published a book on cyber-bullying that summarized the current state of cyber-bullying research. (Bullying Beyond the Schoolyard: Preventing and Responding to Cyberbullying). Their research documents that cyber-bullying instances have been increasing over the last several years. They also report findings from the most recent study of cyber-bullying among middle-school students. Using a random sample of approximately 2000 middle-school students from a large school district in the southern United States, about 10% of respondents had been cyber-bullied in the previous 30 days while over 17% reported being cyber-bullied at least once in their lifetime. While these rates are slightly lower than some of the findings from their previous research, Hinduja and Patchin point out that the earlier studies were predominantly conducted among older adolescents and Internet samples. That is, older youth use the Internet more frequently and are more likely to experience cyber-bullying than younger children.
According to the 2011 National Crime Victimization Survey, conducted by the U.S. Department of Justice, Bureau of Justice Statistics, School Crime Supplement (SCS), 9% of students of ages 12–18 admittedly experienced cyberbullying during that school year (with a coefficient of variation between 30% and 50%).
In the Youth Risk Behavior Survey 2013, the Center for Surveillance, Epidemiology, and Laboratory Services of the Centers for Disease Control and Prevention published results of its survey as part of the Youth Risk Behavior Surveillance System (YRBSS) in June 2014, indicating in table 17 the percentage of school children being bullied through e-mail, chat rooms, instant messaging, Web sites, or texting (“electronically bullied”) during the course of the year 2013.
| Race/Ethnicity | Female | 95% confidence interval | Male | 95% confidence interval | Total | 95% confidence interval |
|---|---|---|---|---|---|---|
| White, non-Hispanic | 25.2% | 22.6%–28.0% | 8.7% | 7.5%–10.1% | 16.9% | 15.3%–18.7% |
| Black, non-Hispanic | 10.5% | 8.7%–12.6% | 6.9% | 5.2%–9.0% | 8.7% | 7.3%–10.4% |

| Grade | Female | 95% confidence interval | Male | 95% confidence interval | Total | 95% confidence interval |
|---|---|---|---|---|---|---|
In 2014, Mehari, Farrell, and Le published a study that focused on the literature on cyberbullying among adolescents. They found that researchers have generally assumed that cyberbullying is distinct from aggression perpetrated in person. They suggest that the medium through which aggression is perpetrated may be best conceptualized as a new dimension on which aggression can be classified, rather than treating cyberbullying as a distinct counterpart to existing forms of aggression, and that future research on cyberbullying should be considered within the context of theoretical and empirical knowledge of aggression in adolescence. Mary Howlett-Brandon's doctoral dissertation analyzed the National Crime Victimization Survey: Student Crime Supplement, 2009, to focus on the cyberbullying victimization of Black students and White students in specific conditions.
Legislation geared at penalizing cyberbullying has been introduced in a number of U.S. states including New York, Missouri, Rhode Island and Maryland. At least forty-five states have passed laws against digital harassment. Dardenne Prairie of Springfield, Missouri, passed a city ordinance making online harassment a misdemeanor. The city of St. Charles, Missouri has passed a similar ordinance. Missouri is among other states where lawmakers are pursuing state legislation, with a task force expected to have "cyberbullying" laws drafted and implemented. In June 2008, Rep. Linda Sanchez (D-Calif.) and Rep. Kenny Hulshof (R-Mo.) proposed a federal law that would criminalize acts of cyberbullying.
Lawmakers are seeking to address cyberbullying with new legislation because there's currently no specific law on the books that deals with it. A fairly new federal cyberstalking law might address such acts, according to Parry Aftab, but no one has been prosecuted under it yet. The proposed federal law would make it illegal to use electronic means to "coerce, intimidate, harass or cause other substantial emotional distress."
In August 2008, the California state legislature passed one of the first laws in the country to deal directly with cyberbullying. The legislation, Assembly Bill 86 2008, gives school administrators the authority to discipline students for bullying others offline or online. This law took effect, January 1, 2009. A law in New York's Albany County that criminalized cyberbullying was recently struck down as unconstitutional by the New York Court of Appeals in People v. Marquan M..
A recent ruling first seen in the UK determined that it is possible for an Internet Service Provider (ISP) to be liable for the content of sites which it hosts, setting a precedent that any ISP should treat a notice of complaint seriously and investigate it immediately.
criminalizes the making of threats via Internet.
Research on preventative legislation
Researchers suggest that programs be put in place for prevention of cyberbullying. These programs would be incorporated into school curricula and would include online safety and instruction on how to use the Internet properly. This could teach the victim proper methods of potentially avoiding the cyberbully, such as blocking messages or increasing the security on their computer.
Within this suggested school prevention model, even in a perfect world, not every crime can be stopped fully. That is why it is suggested that, within this prevention method, effective coping strategies should be introduced and adopted. As with any crime, people learn to cope with what has happened, and the same goes for cyberbullying. People can adopt coping strategies to combat future cyberbullying events. An example of a coping strategy would be a social support group composed of various victims of cyberbullying, who could come together and share experiences, with a formal speaker leading the discussion. A support group like this can allow students to share their stories and helps remove the feeling of being alone.
Teachers should be involved in all prevention educational models, as they are essentially the "police" of the classroom. Most cyberbullying often goes unreported as the victim feels nothing can be done to help in their current situation. However, if given the proper tools with preventative measures and more power in the classroom, teachers can be of assistance to the problem of cyber-bullying. If the parent, teacher, and victim can work together, a possible solution or remedy can be found.
There have been many legislative attempts to facilitate the control of bullying and cyberbullying. The problem is that some existing legislation (covering terms such as libel and slander) is incorrectly thought to apply to bullying and cyberbullying, when in fact it neither directly applies to them nor defines them as criminal behavior in their own right. Anti-cyberbullying advocates have even expressed concern about the broad scope of applicability of some of the bills that legislators have attempted to pass.
In the United States, attempts have been made to pass legislation against cyberbullying. A few states attempted to pass broad sanctions in an effort to prohibit cyberbullying. Problems include how to define cyberbullying and cyberstalking and, if charges are pressed, whether the laws violate the bully's freedom of speech. B. Walther has said that "Illinois is the only state to criminalize 'electronic communication(s) sent for the purpose of harassing another person' when the activity takes place outside a public school setting." Again this came under fire for infringing on freedom of speech.
Research has demonstrated a number of serious consequences of cyberbullying victimization. For example, victims have lower self-esteem, increased suicidal ideation, and a variety of emotional responses, including retaliating, being scared, frustrated, angry, and depressed. People have reported that cyberbullying can be more harmful than traditional bullying because there is no escaping it.
One of the most damaging effects is that a victim begins to avoid friends and activities, often the very intention of the cyberbully.
Cyberbullying campaigns are sometimes so damaging that victims have committed suicide. There are at least four examples in the United States where cyberbullying has been linked to the suicide of a teenager. The suicide of Megan Meier is a recent example that led to the conviction of the adult perpetrator of the attacks.
According to Lucie Russell, director of campaigns, policy and participation at youth mental health charity Young Minds, young people who suffer from mental disorder are vulnerable to cyberbullying as they are sometimes unable to shrug it off:
When someone says nasty things healthy people can filter that out, they're able to put a block between that and their self-esteem. But mentally unwell people don't have the strength and the self-esteem to do that, to separate it, and so it gets compiled with everything else. To them, it becomes the absolute truth – there's no filter, there's no block. That person will take that on, take it as fact.
Social media has allowed bullies to disconnect from the impact they may be having on others.
Intimidation, emotional damage, suicide
According to the Cyberbullying Research Center, "there have been several high‐profile cases involving teenagers taking their own lives in part because of being harassed and mistreated over the Internet, a phenomenon we have termed cyberbullicide – suicide indirectly or directly influenced by experiences with online aggression."
Cyberbullying is an intense form of psychological abuse, whose victims are more than twice as likely to suffer from mental disorders compared to traditional bullying.
The reluctance youth have in telling an authority figure about instances of cyberbullying has led to fatal outcomes. At least three children between the ages of 12 and 13 have committed suicide due to depression brought on by cyberbullying, according to reports by USA Today and the Baltimore Examiner. These would include the suicide of Ryan Halligan and the suicide of Megan Meier, the latter of which resulted in United States v. Lori Drew.
More recently, teenage suicides tied to cyberbullying have become more prevalent. The latest victim of cyberbullying through the use of mobile applications was Rebecca Ann Sedwick, who committed suicide after being terrorized through mobile applications such as Ask.fm, Kik Messenger and Voxer.
Adults and the workplace
Cyberbullying is not limited to personal attacks or children. Cyberharassment, referred to as cyberstalking when involving adults, takes place in the workplace or on company web sites, blogs or product reviews.
Cyberbullying can occur in product reviews and other consumer-generated content, which are being more closely monitored and flagged for material deemed malicious or biased, because these sites have become tools for cyberbullying by way of malicious requests for deletion of articles, vandalism, abuse of administrative positions, and ganging up on products to post false reviews and vote products down.
Cyberstalkers use posts, forums, journals and other online means to present a victim in a false and unflattering light. The question of liability for harassment and character assassination is particularly salient to legislative protection since the original authors of the offending material are, more often than not, not only anonymous, but untraceable. Nevertheless, abuse should be consistently brought to company staffers' attention.
Common tactics used by cyberstalkers include vandalizing a search engine or encyclopedia entry and threatening a victim's earnings, employment, reputation, or safety. Cases of cyberstalking involving adults follow the pattern of repeated actions against a target. While motives vary, whether romantic, a business conflict of interest, or personal dislike, the target is commonly someone in whose life the stalker sees or senses something lacking in his or her own. Various companies provide Web-based products or services that can be used against cyberstalkers who harass or defame their victims.
The source of the defamation tends to come from several types of online information purveyors: weblogs, industry forums or boards, and commercial Web sites. Studies reveal that while some motives are personal dislike, there is often a direct economic motivation for the cyberstalker, including a conflict of interest, and investigations often reveal that the responsible party is an affiliate or supplier of a competitor, or the competitor itself.
Cyberbullying is not necessarily confined to a particular location or even to a region. Climate scientists and climate activists, for example, may be confronted with abusive emails from any location in the world. These emails may be responses to public statements that merely report the widely accepted findings of climate science and their implications for the future production of greenhouse gases by humans and for the survival of future generations. Such emails may be sent in response to suggestions posted on climate denial websites, which are effectively requests to engage in cyberbullying. Climate scientists and climate activists may also be confronted with libelous Internet reports that aim to silence them or destroy their reputations.
The Cybersmile Foundation is a multi award-winning cyberbullying charity committed to tackling all forms of online bullying, abuse and hate campaigns. The charity was founded in 2010 in response to the increasing number of cyberbullying related incidents of depression, eating disorders, social isolation, self-harm and suicides devastating lives around the world. Cybersmile provides support to victims and their friends and families through social media interaction, email and helpline support. They also run an annual event, Stop Cyberbullying Day, to draw attention to the issue on the third Friday of June. The next Stop Cyberbullying Day falls on 20 June 2014.
There are multiple non-profit organizations that fight cyberbullying and cyberstalking. They advise victims, provide awareness campaigns, and report offenses to the police. These NGOs include the Protégeles, PantallasAmigas, Foundation Alia2, the non-profit initiative Actúa Contra el Ciberacoso, the National Communications Technology Institute (INTECO), the Agency of Internet quality, the Agencia Española de Protección de Datos, the Oficina de Seguridad del Internauta, the Spanish Internet users' Association, the Internauts' Association, and the Spanish Association of Mothers and Parents Internauts. The Government of Castile and León has also created a Plan de Prevención del Ciberacoso y Promoción de la Navegación Segura en Centro Escolares, and the Government of the Canary Islands has created a portal on the phenomenon called Viveinternet.
In March 2007, the Advertising Council in the United States, in partnership with the National Crime Prevention Council, U.S. Department of Justice, and Crime Prevention Coalition of America, joined to announce the launch of a new public service advertising campaign designed to educate preteens and teens about how they can play a role in ending cyber-bullying.
A Pew Internet and American Life survey found that 33% of teens were subject to some sort of cyber-bullying.
January 20, 2008 – the Boy Scouts of America's 2008 edition of The Boy Scout Handbook addresses how to deal with online bullying. A new First Class rank requirement adds: "Describe the three things you should avoid doing related to use of the Internet. Describe a cyberbully and how you should respond to one."
January 31, 2008 – KTTV Fox 11 News, based in Los Angeles, put out a report about organized cyber-bullying on sites like Stickam by people who call themselves "/b/rothas". The station had put out a report on July 26, 2007, about a subject that partly featured cyberbullying, titled "hackers on steroids".
June 2, 2008 – Parents, teens, teachers, and Internet executives came together at Wired Safety's International Stop Cyberbullying Conference, a two-day gathering in White Plains, New York and New York City. Executives from Facebook, Verizon, MySpace, Microsoft, and many others talked with hundreds about how to better protect themselves, personal reputations, children and businesses online from harassment. Sponsors of the conference included McAfee, AOL, Disney, Procter & Gamble, Girl Scouts of the USA, WiredTrust, Children’s Safety Research and Innovation Centre, KidZui.com and others. The conference was delivered in conjunction with, and with the support of, Pace University. Topics addressed included cyberbullying and the law, with discussions about laws governing cyberbullying and how to distinguish between rudeness and criminal harassment. Additional forums addressed parents’ legal responsibilities, the need for more laws, how violent postings of videos should be handled, and the differentiation between free speech and hate speech. Cyberharassment vs. cyberbullying was a forefront topic: age makes a difference, and abusive Internet behavior by adults with repeated clear intent to harm, ridicule, or damage a person or business was classified as stalking and harassment, as opposed to bullying by teens and young adults.
August 2012 – A new organized movement to make revenge porn illegal began in August 2012. It is known as End Revenge Porn. Currently revenge porn is only illegal in two states, but demand for its criminalization is rising as digital technology has spread over the past few generations. The organization seeks to provide support for victims, educate the public, and gain activist support to bring new legislation before the United States Government.
Originating in Canada, Anti-Bullying Day is a day of celebration for those who choose to participate by wearing a symbol of colours (pink, blue, or purple) as a stance against bullying. A B.C. teacher founded the Stop A Bully movement, which uses pink wristbands to represent the wearer's stance against bullying.
Pink Shirt Day was inspired by David Shepherd and Travis Price. Their high school friends organized a protest in sympathy for a Grade 9 boy who was bullied for wearing a pink shirt. Their stance of wearing pink has been a huge inspiration in the Greater Vancouver Mainland. "We know that victims of bullying, witnesses of bullying and bullies themselves all experience the very real and long-term negative impacts of bullying regardless of its forms - physical, verbal, written, or on-line (cyberbullying)".
ERASE (Expect Respect and A Safe Education) is an initiative started by the province of British Columbia to foster safe schools and prevent bullying. It builds on already-effective programs set up by the provincial government to ensure consistent policies and practices regarding the prevention of bullying.
A number of organizations are in coalition to provide awareness, protection, and recourse for this escalating problem. Some aim to inform and provide measures to avoid, as well as effectively terminate, cyberbullying and cyberharassment. The anti-bullying charity Act Against Bullying launched the CyberKind campaign in August 2009 to promote positive internet usage.
In 2007, YouTube introduced the first Anti-Bullying Channel for youth (BeatBullying), engaging the assistance of celebrities to tackle the problem.
In March 2010, a 17-year old girl named Alexis Skye Pilkington was found dead in her room by her parents. Her parents claimed that after repeated cyberbullying, she was driven to suicide. Shortly after her death, attacks resumed. Members of eBaums World began to troll teens' memorial pages on Facebook, with the comments including expressions of pleasure over the death, with pictures of what seemed to be a banana as their profile pictures. Family and friends of the deceased teen responded by creating Facebook groups denouncing cyberbullying and trolling, with logos of bananas behind a red circle with a diagonal line through it.
In response to and in partnership with the 2011 film Bully, a grassroots effort to stop cyberbullying called The Bully Project was created. Its stated goal is a "national movement to stop bullying that is transforming children's lives and changing a culture of bullying into one of empathy and action."
In media and pop culture
- Adina's Deck— a film about three 8th-graders who help their friend who's been cyberbullied.
- Let's Fight It Together— a film produced by Childnet International to be used in schools to support discussion and awareness-raising around cyberbullying.
- Odd Girl Out— a film about a girl who is bullied at school and online.
- At a Distance— a short film produced by NetSafe for the 8-12-year-old audience. It highlights forms and effects of cyberbullying and the importance of bystanders.
- Cyberbully— a TV movie broadcast July 17, 2011 on ABC Family. It depicts a teenage girl who is subjected to a campaign of bullying through a social networking site.
- The Casual Vacancy – a young girl is subjected to harassing images repeatedly posted on her Facebook page.
- The Truth about Truman School, a 2008 children's book about a middle school girl who is cyberbullied by one of her classmates
- Chatroom, a 2010 British thriller film directed by Hideo Nakata about five teenagers who meet on the internet and encourage each other's bad behaviour.
- Star Wars: Jedi Academy: Return of the Padawan, a 2014 book by Jeffrey Brown features cyberbullying on "Holobook," a fictionalized Star Wars version of Facebook.
- "URL, Interrupted," an episode of CSI: Cyber, featured a storyline about a girl named Zoe Tan who was watched through her computer via malware and cyberbullied with a website called "Kill Yourself Zoe Tan."
- "What is Cyberbullying". U.S. Department of Health & Human Services.
- "Cyber Bullying Law and Legal Definition". U.S. Legal Definitions.
- "Cyberbullying: its nature and impact in secondary school pupils". The Journal of Child Psychology and Psychiatry.
- "Legal Debate Swirls Over Charges in a Student's Suicide". New York Times.
- Cyberbullying – Law and Legal Definitions US Legal
- Cyber-bullying Definition Legal Definitions
- "Cyberslammed". Retrieved 22 October 2012.
- National Crime Prevention Council. Ncpc.org. Retrieved on July 6, 2011.
- Patchin, J. W. & Hinduja, S. (2006). Bullies move beyond the schoolyard: A preliminary look at cyberbullying Youth Violence and Juvenile Justice, 4(2), 148–169.
- Bullying Beyond the Schoolyard: Preventing and Responding to Cyberbullying, by J.W. Patchin and S. Hinuja; Sage Publications, (Corwin Press, 2009)
- "Cyber stalking". THE TIMES OF INDIA. Jun 6, 2011.
- "Stalking and Cyber Stalking".
- "UCF Cyber Stalker’s Sentence Not Harsh Enough, Victim Says". ABC News. January 23, 2012. Retrieved 2012-10-21.
- "DIFFERENT FORMS OF CYBER BULLYING" (PDF).
- Puar, Jasbir. "Ecologies of Sex, Sensation, and Slow Death". Periscope. Social Text. Retrieved December 15, 2011.
- Cyberbullying Common, More So At Facebook And MySpace by Thomas Claburn, Information week; June 27, 2007
- Willard, Nancy E. (18 January 2007). Cyberbullying and Cyberthreats (2nd ed.). Research Press. ISBN 0878225374.
- An Educator's Guide to Cyberbullying Brown Senate.gov
- "Defining a Cyberbully". The National Science Foundation. Retrieved November 8, 2011.
- Görzig, Anke; Lara A. Frumkin (2013). "Cyberbullying experiences on-the-go: When social media can become distressing". Cyberpsychology: Journal of Psychosocial Research on Cyberspace.
- "Teen and Young Adult Internet Use". Pew Internet Project. Retrieved 5 January 2015.
- "Cyberbullying Statistics". Internet Safety 101. Retrieved 5 January 2015.
- Kalli Amorphous. "Demand For Facebook's Response To Cyber-Bullying On Their Pages". Change.org, Inc. Retrieved 5 January 2015.
- "Effects of exposure to sex-stereotyped video game characters on tolerance of sexual harassment". Journal of Experimental Social Psychology.
- "Gender Stereotypes, Aggression, and Computer Games: An Online Survey of Women". CyberPsychology & Behavior.
- "Sexual harassment as ethical imperative: how Capcom's fighting game reality show turned ugly". The Penny Arcade Report.
- "Paths to Bullying in Online Gaming: The Effects of Gender, Preference for Playing Violent Games, Hostility, and Aggressive Behavior on Bullying". National Sun Yat-sen University.
- Cyberstalking, cyberharassment and cyberbullying NCSL National Conference of State Legislatures
- Cyberstalking Washington State Legislature
- "What Is Cyberstalking?".
- Cyberbullying Enacted Legislation: 2006–2010 Legislation by State, NCSL
- CT teens develop bullying app to protect peers 7 News; June 2012
- Current and pending cyberstalking-related United States federal and state laws WHOA
- The Global Cyber Law Database GCLD
- Cyberbullying defies traditional stereotype: Girls are more likely than boys to engage in this new trend, research suggests September 1, 2010
- Cyberbullying among Teens The National Crime Prevention Association
- Cyberbullying in Adolescent Victims: Perception and Coping Journal of Psychosocial Research on Cyberspace
- "Stop Cyberbullying". Stop Cyberbullying. 2005-06-27. Retrieved 2013-10-08.
- Topping, Alexandra. "Cyberbullying on social networks spawning form of self-harm". Guardian News and Media Limited. Retrieved 6 August 2013.
- Englander, Elizabeth (June 2012). "Digital Self-Harm: Frequency, Type, Motivations, and Outcomes". MARC Research Reports (Massachusetts Agression Reduction Center, Bridgewater State University) 5.
- Cyberstalking – Introduction Crime Library, Criminal Minds and Methods
- Cyber-Stalking: Obsessional Pursuit and the Digital Criminal, by Wayne Petherick – Stalking Typologies and Pathologies
- UK National Workplace Bullying Advice Line Bullying
- Cyberbullying Stalking and Harassment
- What to Do About Cyberbullies: For Adults, by Rena Sherwood; YAHOO Contributor network
- Cross, D., Shaw, T., Hearn, L., Epstein, M., Monks, H., Lester, L., & Thomas, L. 2009. Australian Covert Bullying Prevalence Study (ACBPS). Child Health Promotion Research Centre, Edith Cowan University, Perth. Deewr.gov.au. Retrieved on July 6, 2011.
- Hasebrink, U (2011). "Patterns of risk and safety online. In-depth analyses from the EU Kids Online survey of 9- to 16-year-olds and their parents in 25 European countries" (PDF).
- Hasebrink, U., Livingstone, S., Haddon, L. and Ólafsson, K.(2009) Comparing children’s online opportunities and risks across Europe: Cross-national comparisons for EU Kids Online. LSE, London: EU Kids Online (Deliverable D3.2, 2nd edition), ISBN 978-0-85328-406-2 lse.ac.uk
- Sourander, A., Klomek, A.B., Ikonen, M., Lindroos, J., Luntamo, T., Koskeiainen, M., … Helenius, H. (2010). "Psychosocial risk factors associated with cyberbullying among adolescents: A population-based study.". Archives of General Psychiatry 67 (7): 720–728.
- Cross-Tab Marketing Services & Telecommunications Research Group for Microsoft Corporation
- Campbell, Marilyn A. (2005). Cyber bullying: An old problem in a new guise?
- Sugimori Shinkichi (2012). "Anatomy of Japanese Bullying". nippon.com. Retrieved January 5, 2015.
- "Cyber bullying bedevils Japan". The Sydney Morning Herald. Retrieved January 5, 2015.
- Cyber Bullying: Student Awareness Palm Springs Unified School District Retrieved 5 January 2015
- Spacey, S. 2015. Crab Mentality, Cyberbullying and "Name and Shame" Rankings. In Press, Waikato University, New Zealand. Retrieved on April 19th, 2015.
- Finkelhor, D., Mitchell, K.J., & Wolak, J. (2000). Online victimization: A report on the nation’s youth. Alexandria, VA: National Center for Missing and Exploited Children.
- "What Parents Need to Know About Cyberbullying". ABC News Primetime. ABC News Internet Ventures. 2006-09-12. Retrieved 2015-02-03.
- Ybarra, M.L., Mitchell, K.J., Wolak, J., & Finkelhor, D. Examining characteristics and associated distress related to Internet harassment: findings from the Second Youth Internet Safety Survey. Pediatrics. 2006 Oct;118(4):e1169-77
- Ybarra, M.L. & Mitchell, K.J. Prevalence and frequency of Internet harassment instigation: implications for adolescent health. J Adolesc Health. 2007 Aug;41(2):189-95
- "Statistics on Bullying" (PDF).
- Hinduja, S. & Patchin, J. W. (2008). Cyberbullying: An Exploratory Analysis of Factors Related to Offending and Victimization. Deviant Behavior, 29(2), 129–156.
- Hinduja, S. & Patchin, J. W. (2007). Offline Consequences of Online Victimization: School Violence and Delinquency. Journal of School Violence, 6(3), 89–112.
- National Children's Home. (2005).Putting U in the picture. Mobile Bullying Survey 2005.(pdf)
- "Cyberbullying FAQ For Teens". National Crime Prevention Council. 2015. Retrieved 2015-02-03.
- Hertz, M. F.; David-Ferdon, C. (2008). Electronic Media and Youth Violence: A CDC Issue Brief for Educators and Caregivers (PDF). Atlanta (GA): Centers for Disease Control. p. 9. Retrieved 2015-02-03.
- Ybarra, Michele L.; Diener-West, Marie; Leaf, Philip J. (December 2007). "Examining the overlap in internet harassment and school bullying: implications for school intervention". Journal of Adolescent Health 41 (6 Suppl 1): S42–S50. doi:10.1016/j.jadohealth.2007.09.004.
- Kowalski, Robin M.; Limber, Susan P. (December 2007). "Electronic bullying among middle school students". Journal of Adolescent Health 41 (6 Suppl 1): S22–S30. doi:10.1016/j.jadohealth.2007.08.017.
- Hertz, M. F.; David-Ferdon, C. (2008). Electronic Media and Youth Violence: A CDC Issue Brief for Educators and Caregivers (PDF). Atlanta (GA): Centers for Disease Control. p. 7. Retrieved 2015-02-03.
- Hinduja, S.; Patchin, J. W. (2009). Bullying beyond the schoolyard: Preventing and responding to cyberbullying. Thousand Oaks, CA: Corwin Press. ISBN 1412966892.
- Snyder, Thomas D.; Robers, Simone; Kemp, Jana; Rathbun, Amy; Morgan, Rachel (2014-06-10). "Indicator 11: Bullying at School and Cyber-Bullying Anywhere". Indicators of School Crime and Safety: 2013 (COMPENDIUM). National Center for Education Statistics. NCES 2014042. Retrieved 2015-02-03.
- Kann, Laura; Kinchen, Steve; Shanklin, Shari L.; Flint, Katherine H.; Hawkins, Joseph; Harris, William A.; Lowry, Richard; Olsen, Emily O’Malley; McManus, Tim; Chyen, David; Whittle, Lisa; Taylor, Eboni; Demissie, Zewditu; Brener, Nancy; Thornton, Jemekia; Moore, John; Zaza, Stephanie (2014-06-13). "Youth Risk Behavior Surveillance — United States, 2013" (PDF). Morbidity and Mortality Weekly Report (MMWR) (Centers for Disease Control and Prevention) 63 (4): 66. Retrieved 16 February 2015.
- Mehari, Krista; Farrell, Albert; Le, Anh-Thuy (2014). "Cyberbullying among adolescents: Measures in search of a construct". Psychology of Violence. doi:10.1037/a0037521. Retrieved March 24, 2015.
- Howlett-Brandon, Mary (2014). "CYBERBULLYING: AN EXAMINATION OF GENDER, RACE, ETHNICITY, AND ENVIRONMENTAL FACTORS FROM THE NATIONAL CRIME VICTIMIZATION SURVEY: STUDENT CRIME SUPPLEMENT, 2009". VCU Theses and Dissertations, VCU Scholars Compass. Virginia Commonwealth University. Retrieved March 30, 2015.
- PÉREZ-PEÑA, RICHARD. "Christie Signs Tougher Law on Bullying in Schools". NewYork Times. Retrieved January 6, 2011.
- Bill targets adults who cyberbully Pantagraph, by Kevin Mcdermott, December 20, 2007
- A rallying cry against cyberbullying. CNET News, by Stefanie Olsen, June 7, 2008
- PBS Teachers. Pbs.org. Retrieved on July 6, 2011.
- Surdin, Ashley (January 1, 2009). "States Passing Laws to Combat Cyber-Bullying — washingtonpost.com". The Washington Post. Retrieved January 2, 2009.
- International IT and e-commerce legal info. Out-law.com. Retrieved on July 6, 2011.
- Von Marees, N., & Petermann, F. (2012). Cyberbullying: An increasing challenge for schools. School Psychology International, 33(5), 476.
- Smyth, S. M. (2010). Cybercrime in Canadian criminal law. (pp. 105–122). Toronto, ON: Carswell.
- Walther, B. (2012). "Cyberbullying: Holding grownups liable for negligent entrustment." Houston Law Review, 49(2), 531–562.
- Medscape article #579988_5 (may require login)
- Alexandra Topping; Ellen Coyne and agencies (8 August 2013). "Cyberbullying websites should be boycotted, says Cameron: Prime minister calls for website operators to 'step up to the plate', following death of 14-year-old Hannah Smith". The Guardian. Retrieved 8 August 2013.
- Kelly Running. "Cyber-bullying and popular culture". Carlyle Observer. Retrieved January 5, 2015.
- "Cyberthreat: How to protect yourself from online bullying". Ideas and Discoveries (Ideas and Discoveries): 76. 2011.
- Alvarez, Lizette. "Girl's Suicide Points to Rise in Apps Used by Cyberbullies". The New York Times. Retrieved 20 November 2013.
- Douglas Fischer: Cyber Bullying Intensifies as Climate Data Questioned. Scientific American, March 1, 2010.
- Dominique Browning: When Grownups Bully Climate Scientists. Time, April 10, 2012.
- Ben Habib: Bullying Climate Change Scientists. Latrobe University News, 2010.
- "First Class Rank Requirements". US Scout Service Project. Retrieved August 5, 2008.
- "Boy Scout Handbook adds advice for dealing with bullies". Dallas Morning News. Retrieved August 5, 2008.
- "FOX 11 Investigates: Cyber Bullies". Fox Television Stations, Inc. Retrieved February 5, 2008.
- "FOX 11 Investigates: 'Anonymous'". Fox Television Stations, Inc. Retrieved August 11, 2007.
- Stop Cyberbullying — An International Conference to Address Cyberbullying, Solutions and Industry Best Practices Wired Safety, Pace University Program, June 2008.
- End Revenge Porn http://www.endrevengeporn.org/
- YouTube tackles bullying online BBC News, November 19, 2007
- Salazar, Cristian (2010-05-24). "Alexis Pilkington Facebook Horror: Cyber Bullies Harass Teen Even After Suicide". Huffington Post. Retrieved 22 October 2012.
- The Bully Project http://www.thebullyproject.com/
- "Cyberbully". Imdb.
- Berson, I. R., Berson, M. J., & Ferron, J. M.(2002). Emerging risks of violence in the digital age: Lessons for educators from an online study of adolescent girls in the United States.Journal of School Violence, 1(2), 51–71.
- Burgess-Proctor, A., Patchin, J. W., & Hinduja, S. (2009). Cyberbullying and online harassment: Reconceptualizing the victimization of adolescent girls. In V. Garcia and J. Clifford [Eds.]. Female crime victims: Reality reconsidered. Upper Saddle River, NJ: Prentice Hall. In Print.
- Keith, S. & Martin, M. E. (2005). Cyber-bullying: Creating a Culture of Respect in a Cyber World. Reclaiming Children & Youth, 13(4), 224–228.
- Hinduja, S. & Patchin, J. W. (2007). Offline Consequences of Online Victimization: School Violence and Delinquency. Journal of School Violence, 6(3), 89–112.
- Hinduja, S. & Patchin, J. W. (2008). Cyberbullying: An Exploratory Analysis of Factors Related to Offending and Victimization. Deviant Behavior, 29(2), 129–156.
- Hinduja, S. & Patchin, J. W. (2009). Bullying beyond the Schoolyard: Preventing and Responding to Cyberbullying. Thousand Oaks, CA: Sage Publications.
- Patchin, J. & Hinduja, S. (2006). Bullies Move beyond the Schoolyard: A Preliminary Look at Cyberbullying. Youth Violence and Juvenile Justice', 4(2), 148–169.
- Tettegah, S. Y., Betout, D., & Taylor, K. R. (2006). Cyber-bullying and schools in an electronic era. In S. Tettegah & R. Hunter (Eds.) Technology and Education: Issues in administration, policy and applications in k12 school. PP. 17–28. London: Elsevier.
- Wolak, J. Mitchell, K.J., & Finkelhor, D. (2006). Online victimization of youth: 5 years later. Alexandria, VA: National Center for Missing & Exploited Children. Available at unh.edu
- Ybarra, M. L. & Mitchell, J. K. (2004). Online aggressor/targets, aggressors and targets: A comparison of associated youth characteristics. Journal of Child Psychology and Psychiatry, 45, 1308–1316.
- Ybarra ML (2004). Linkages between depressive symptomatology and Internet harassment among young regular Internet users. Cyberpsychol and Behavior. Apr;7(2):247-57.
- Ybarra ML, Mitchell KJ (2004). Youth engaging in online harassment: associations with caregiver-child relationships, Internet use, and personal characteristics. Journal of Adolescence. Jun;27(3):319-36.
- Frederick S. Lane, (Chicago: NTI Upstream, 2011)
- Cyberbullying Research Center
- Cyberbullying at Stopbullying.gov
- Cyberbullying Searchable Information Center, ebrary
- Cyberbullying.org.nz – Cyberbullying information, support, and teaching resources from the New Zealand non-profit NetSafe, including the short film At a Distance
- Cyberbullying in Australia Australian Cyberbullying resource for teenagers
- Cyberbullying - What is Cyberbullying?
- Media Smarts - Cyberbullying
- Bad Behavior Online: Bullying, Trolling & Free Speech – Video produced by Off Book (web series)
Program Arcade Games With Python And Pygame
Now that you can create loops, it is time to move on to learning how to create graphics. This chapter covers:
- How the computer handles x, y coordinates. It isn't like the coordinate system you learned in math class.
- How to specify colors. With millions of colors to choose from, telling the computer what color to use isn't as easy as just saying “red.”
- How to open a blank window for drawing. Every artist needs a canvas.
- How to draw lines, rectangles, ellipses, and arcs.
The Cartesian coordinate system, shown in Figure 5.1 (Wikimedia Commons), is the system most people are used to when plotting graphics. This is the system taught in school. The computer uses a similar, but somewhat different, coordinate system. Understanding why it is different requires a quick bit of computer history.
During the early '80s, most computer systems were text-based and did not support graphics. Figure 5.2 (Wikimedia Commons) shows an early spreadsheet program run on an Apple ][ computer that was popular in the '80s. When positioning text on the screen, programmers started at the top calling it line 1. The screen continued down for 24 lines and across for 40 characters.
Even with plain text, it was possible to make rudimentary graphics by just using characters on the keyboard. See this kitten shown in Figure 5.3 and look carefully at how it is drawn. When making this art, characters were still positioned starting with line 1 at the top.
Later the character set was expanded to include boxes and other primitive drawing shapes. Characters could be drawn in different colors. As shown in Figure 5.4 the graphics got more advanced. Search the web for “ASCII art” and many more examples can be found.
Once computers moved to being able to control individual pixels for graphics, the text-based coordinate system stuck.
The $x$ coordinates work the same as in the Cartesian coordinate system. But the $y$ coordinates are reversed. Rather than the zero $y$ coordinate being at the bottom of the graph as in Cartesian graphics, the zero $y$ coordinate is at the top of the screen on the computer. As the $y$ values go up, the computer coordinate position moves down the screen, just like lines of text rather than standard Cartesian graphics. See Figure 5.5.
Also, note the screen covers the lower right quadrant, where the Cartesian coordinate system usually focuses on the upper right quadrant. It is possible to draw items at negative coordinates, but they will be drawn off-screen. This can be useful when part of a shape is off screen. The computer figures out what is off-screen and the programmer does not need to worry too much about it.
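If you are used to thinking in Cartesian terms, a one-line flip converts a conventional $y$ value into a screen $y$ value. This is a minimal sketch added for illustration; the helper name and the screen_height parameter are assumptions, not part of Pygame:

# Hypothetical helper: convert a Cartesian y (0 at the bottom) to a
# screen y (0 at the top). screen_height is the window height in pixels.
def to_screen_y(cartesian_y, screen_height):
    return screen_height - cartesian_y

# With a 500 pixel tall window, Cartesian y=0 maps to screen y=500 (bottom row)
# and Cartesian y=500 maps to screen y=0 (top row).
print(to_screen_y(0, 500))    # 500
print(to_screen_y(500, 500))  # 0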
To make graphics easier to work with, we'll use the Pygame library. Pygame is a library of code other people have written, and it makes it simple to:
- Draw graphic shapes
- Display bitmapped images
- Interact with keyboard, mouse, and gamepad
- Play sound
- Detect when objects collide
The first code a Pygame program needs to do is load and initialize the Pygame library. Every program that uses Pygame should start with these lines:
# Import a library of functions called 'pygame'
import pygame

# Initialize the game engine
pygame.init()
If you haven't installed Pygame yet, directions for installing Pygame are available in the before you begin section. If Pygame is not installed on your computer, you will get an error when trying to run import pygame.
Important: The import pygame looks for a library file named pygame. If a programmer creates a new program named pygame.py, the computer will import that file instead! This will prevent any pygame programs from working until that pygame.py file is deleted.
Next, we need to add variables that define our program's colors. Colors are defined as a list of three numbers representing red, green, and blue. Have you ever heard of an RGB monitor? This is where the term comes from: Red-Green-Blue. With older monitors, you could sit really close to the monitor and make out the individual RGB colors. At least before your mom told you not to sit so close to the TV. This is hard to do with today's high resolution monitors.
Each element of the RGB triad is a number ranging from 0 to 255. Zero means there is none of the color, and 255 tells the monitor to display as much of the color as possible. The colors combine in an additive way, so if all three colors are specified, the color on the monitor appears white. (This is different than how ink and paint work.)
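For example, mixing two full-strength components gives the familiar additive blends. These extra color names are added here only for illustration and are not used by the rest of the chapter's code:

# Illustrative additive mixes
YELLOW  = (255, 255,   0)  # full red + full green
CYAN    = (  0, 255, 255)  # full green + full blue
MAGENTA = (255,   0, 255)  # full red + full blue
GRAY    = (127, 127, 127)  # equal medium amounts of all three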
Lists in Python are surrounded by either square brackets or parentheses. (Chapter 7 covers lists in detail and the difference between the two types.) Individual numbers in the list are separated by commas. Below is an example that creates variables and sets them equal to lists of three numbers. These lists will be used later to specify colors.
# Define some colors
BLACK = (  0,   0,   0)
WHITE = (255, 255, 255)
GREEN = (  0, 255,   0)
RED   = (255,   0,   0)
Why are these variables in upper-case? Remember back from chapter one, a variable that doesn't change is called a constant. We don't expect the color of black to change; it is a constant. We signify that variables are constants by naming them with all upper-case letters. If we expect a color to change, like if we have sky_color that changes as the sun sets, then that variable would be in all lower-case letters.
Using the interactive shell in IDLE, try defining these variables and printing them out.

If the colors above aren't the colors you are looking for, you can define your own. To pick a color, find an on-line “color picker” like the one shown in Figure 5.6. One such color picker is at:
Extra: Some color pickers specify colors in hexadecimal. You can enter hexadecimal numbers if you start them with 0x. For example:
WHITE = (0xFF, 0xFF, 0xFF)
Eventually the program will need to use the value of $\pi$ when drawing arcs, so this is a good time in our program to define a variable that contains the value of $\pi$. (It is also possible to import this from the math library as math.pi.)
PI = 3.141592653
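Equivalently, as mentioned above, the value can be pulled from Python's standard math library instead of typing the digits out:

# Alternative to hard-coding the digits: use the math library's constant.
import math

PI = math.pi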
So far, the programs we have created only printed text out to the screen. Those programs did not open any windows like most modern programs do. The code to open a window is not complex. Below is the required code, which creates a window sized to a width of 700 pixels, and a height of 500:
size = (700, 500)
screen = pygame.display.set_mode(size)
Why set_mode? Why not open_window? The reason is that this command can actually do a lot more than open a window. It can also create games that run in a full-screen mode. This removes the start menu, title bars, and gives the game control of everything on the screen. Because this mode is slightly more complex to use, and most people prefer windowed games anyway, we'll skip a detailed discussion on full-screen games. But if you want to find out more about full-screen games, check out the documentation on pygame's display command.
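As a brief illustration (not needed for the rest of this chapter), passing the pygame.FULLSCREEN flag to set_mode is what switches the program into that full-screen mode:

# Full-screen variant of opening the display. The pygame.FULLSCREEN flag
# asks for exclusive use of the whole screen instead of a window.
screen = pygame.display.set_mode(size, pygame.FULLSCREEN)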
Also, why size = (700, 500) and not size = 700, 500? For the same reason we put parentheses around the color definitions. Python can't normally store two numbers (a height and width) in one variable. The only way it can is if the numbers are stored as a list. Lists need either parentheses or square brackets. (Technically, parentheses surrounding a set of numbers more accurately create a tuple, or immutable list. Lists surrounded by square brackets are just called lists. An experienced Python developer would cringe at calling a list of numbers surrounded by parentheses a list rather than a tuple. Also, you can actually write size = 700, 500 and it will default to a tuple, but I prefer to use parentheses.) Lists are covered in detail in Chapter 7.
To set the title of the window (which is shown in the title bar), use the following line of code:
pygame.display.set_caption("Professor Craven's Cool Game")
With just the code written so far, the program would create a window and immediately hang. The user can't interact with the window, even to close it. All of this needs to be programmed. Code needs to be added so that the program waits in a loop until the user clicks “exit.”
This is the most complex part of the program, and a complete understanding of it isn't needed yet. But it is necessary to have an idea of what it does, so spend some time studying it and asking questions.
# Loop until the user clicks the close button.
done = False

# Used to manage how fast the screen updates
clock = pygame.time.Clock()

# -------- Main Program Loop -----------
while not done:
    # --- Main event loop
    for event in pygame.event.get():   # User did something
        if event.type == pygame.QUIT:  # If user clicked close
            done = True                # Flag that we are done so we exit this loop

    # --- Game logic should go here

    # --- Drawing code should go here

    # First, clear the screen to white. Don't put other drawing commands
    # above this, or they will be erased with this command.
    screen.fill(WHITE)

    # --- Go ahead and update the screen with what we've drawn.
    pygame.display.flip()

    # --- Limit to 60 frames per second
    clock.tick(60)
Eventually we will add code to handle the keyboard and mouse clicks. That code will go below the "--- Main event loop" comment. Code for determining when bullets are fired and how objects move will go below the "--- Game logic should go here" comment. We'll talk about that in later chapters. Code to draw will go below the line where the screen is filled with white.
Alert! One of the most frustrating problems programmers have is to mess up the event processing loop. This “event processing” code handles all the keystrokes, mouse button clicks, and several other types of events. For example your loop might look like:
for event in pygame.event.get():
    if event.type == pygame.QUIT:
        print("User asked to quit.")
    elif event.type == pygame.KEYDOWN:
        print("User pressed a key.")
    elif event.type == pygame.KEYUP:
        print("User let go of a key.")
    elif event.type == pygame.MOUSEBUTTONDOWN:
        print("User pressed a mouse button")
The events (like pressing keys) all go together in a list. The program uses a for loop to loop through each event. Using a chain of if statements the code figures out what type of event occurred, and the code to handle that event goes in the if statement.
All the if statements should go together, in one for loop. A common mistake when doing copy and pasting of code is to not merge loops from two programs, but to have two event loops.
# Here is one event loop
for event in pygame.event.get():
    if event.type == pygame.QUIT:
        print("User asked to quit.")
    elif event.type == pygame.KEYDOWN:
        print("User pressed a key.")
    elif event.type == pygame.KEYUP:
        print("User let go of a key.")

# Here the programmer has copied another event loop
# into the program. This is BAD. The events were already
# processed.
for event in pygame.event.get():
    if event.type == pygame.QUIT:
        print("User asked to quit.")
    elif event.type == pygame.MOUSEBUTTONDOWN:
        print("User pressed a mouse button")
The first for loop grabbed all of the user events. The second for loop won't grab any events, because they were already processed in the prior loop.
Another typical problem is to start drawing, and then try to finish the event loop:
for event in pygame.event.get():
    if event.type == pygame.QUIT:
        print("User asked to quit.")
    elif event.type == pygame.KEYDOWN:
        print("User pressed a key.")

pygame.draw.rect(screen, GREEN, [50, 50, 100, 100])

# This is code that processes events. But it is not in the
# 'for' loop that processes events. It will not act reliably.
if event.type == pygame.KEYUP:
    print("User let go of a key.")
elif event.type == pygame.MOUSEBUTTONDOWN:
    print("User pressed a mouse button")
This will cause the program to ignore some keyboard and mouse commands. Why? The for loop processes all the events in a list. So if there are two keys that are hit, the for loop will process both. In the example above, the if statements are not in the for loop. If there are multiple events, the if statements will only run for the last event, rather than all events.
The basic logic and order for each frame of the game:
- While not done:
- For each event (keypress, mouse click, etc.):
- Use a chain of if statements to run code to handle each event.
- Run calculations to determine where objects move, what happens when objects collide, etc.
- Clear the screen
- Draw everything
- For each event (keypress, mouse click, etc.):
It makes the program easier to read and understand if these steps aren't mixed together. Don't do some calculations, some drawing, some more calculations, and some more drawing. Also, see how this is similar to the calculator done in chapter one: get user input, run calculations, and output the answer. That same pattern applies here.
The code for drawing the image to the screen happens inside the while loop. With the clock tick set at 60, the contents of the window will be redrawn up to 60 times per second. If it happened more often than that, the computer would be sluggish because all of its time would be spent updating the screen. If the drawing code isn't in the loop at all, the screen won't redraw properly: it may initially show the graphics, but the graphics won't reappear if the window is minimized or if another window is placed in front.
Right now, clicking the “close” button of a window while running this Pygame program in IDLE will still cause the program to crash. This is a hassle because it requires a lot of clicking to close a crashed program.
The problem is, even though the loop has exited, the program hasn't told the computer to close the window. By calling the command below, the program will close any open windows and exit as desired.
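That command is pygame.quit(), the same call that ends the full template later in this chapter:

# Close any open windows and shut Pygame down cleanly.
pygame.quit()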
The following code clears whatever might be in the window with a white background. Remember that the variable WHITE was defined earlier as a list of 3 RGB values.
# Clear the screen and set the screen background
screen.fill(WHITE)
This should be done before any drawing command is issued. Clearing the screen after the program draws graphics results in the user only seeing a blank screen.
When a window is first created it has a black background. It is still important to clear the screen because there are several things that could occur to keep this window from starting out cleared. A program should not assume it has a blank canvas to draw on.
Very important! You must flip the display after you draw. The computer will not display the graphics as you draw them because it would cause the screen to flicker. This waits to display the screen until the program has finished drawing. The command below “flips” the graphics to the screen.
Failure to include this command will mean the program just shows a blank screen. Any drawing code after this flip will not display.
# Go ahead and update the screen with what we've drawn.
pygame.display.flip()
Let's bring everything we've talked about into one full program. This code can be used as a base template for a Pygame program. It opens up a blank window and waits for the user to press the close button.
On the website, if you click the “Examples” button you can select “graphics examples” and then you will find this file as pygame_base_template.py.
""" Show how to use a sprite backed by a graphic. Sample Python/Pygame Programs Simpson College Computer Science http://programarcadegames.com/ http://simpson.edu/computer-science/ Explanation video: http://youtu.be/vRB_983kUMc """ import pygame # Define some colors BLACK = (0, 0, 0) WHITE = (255, 255, 255) GREEN = (0, 255, 0) RED = (255, 0, 0) pygame.init() # Set the width and height of the screen [width, height] size = (700, 500) screen = pygame.display.set_mode(size) pygame.display.set_caption("My Game") # Loop until the user clicks the close button. done = False # Used to manage how fast the screen updates clock = pygame.time.Clock() # -------- Main Program Loop ----------- while not done: # --- Main event loop for event in pygame.event.get(): if event.type == pygame.QUIT: done = True # --- Game logic should go here # --- Drawing code should go here # First, clear the screen to white. Don't put other drawing commands # above this, or they will be erased with this command. screen.fill(WHITE) # --- Go ahead and update the screen with what we've drawn. pygame.display.flip() # --- Limit to 60 frames per second clock.tick(60) # Close the window and quit. # If you forget this line, the program will 'hang' # on exit if running from IDLE. pygame.quit()
Here is a list of things that you can draw:
A program can draw things like rectangles, polygons, circles, ellipses, arcs, and lines. We will also cover how to display text with graphics. Bitmapped graphics such as images are covered in Chapter 11. If you look at the Pygame reference documentation for these drawing functions, you might see a function definition like this:
pygame.draw.rect(Surface, color, Rect, width=0): return Rect
A frequent cause of confusion is the part of the line that says width=0. What this means is that if you do not supply a width, it will default to zero. Thus this function call:
pygame.draw.rect(screen, RED, [55, 500, 10, 5])
Is the same as this function call:
pygame.draw.rect(screen, RED, [55, 500, 10, 5], 0)
The : return Rect is telling you that the function returns a rectangle, the same one that was passed in. You can just ignore this part.
What will not work is attempting to copy that part of the documentation and pass width=0 inside the function call:
# This fails and the error the computer gives you is
# really hard to understand.
pygame.draw.rect(screen, RED, [55, 500, 10, 5], width=0)
The code example below shows how to draw a line on the screen. It will draw on the screen a green line from (0, 0) to (100, 100) that is 5 pixels wide. Remember that GREEN is a variable that was defined earlier as a list of three RGB values.
# Draw on the screen a green line from (0, 0) to (100, 100)
# that is 5 pixels wide.
pygame.draw.line(screen, GREEN, [0, 0], [100, 100], 5)
Use the base template from the prior example and add the code to draw lines. Read the comments to figure out exactly where to put the code. Try drawing lines with different thicknesses, colors, and locations. Draw several lines.
Programs can repeat things over and over. The next code example draws a line over and over using a loop. Programs can use this technique to do multiple lines, and even draw an entire car.
Putting a line drawing command inside a loop will cause multiple lines being drawn to the screen. But here's the catch. If each line has the same starting and ending coordinates, then each line will draw on top of the other line. It will look like only one line was drawn.
To get around this, it is necessary to offset the coordinates each time through the loop. The first time through the loop the variable y_offset is zero, so the line in the code below is drawn from (0, 10) to (100, 110). The next time through the loop y_offset increases by 10, so the next line is drawn with new coordinates of (0, 20) and (100, 120). This continues each time through the loop, shifting each line down by 10 pixels.
# Draw on the screen several lines from (0, 10) to (100, 110)
# 5 pixels wide using a while loop
y_offset = 0
while y_offset < 100:
    pygame.draw.line(screen, RED, [0, 10 + y_offset], [100, 110 + y_offset], 5)
    y_offset = y_offset + 10
This same code could be done even more easily with a for loop:
# Draw on the screen several lines from (0, 10) to (100, 110)
# 5 pixels wide using a for loop
for y_offset in range(0, 100, 10):
    pygame.draw.line(screen, RED, [0, 10 + y_offset], [100, 110 + y_offset], 5)
Run this code and try using different changes to the offset. Try creating an offset with different values. Experiment with different values until exactly how this works is obvious.
For example, here is a loop that uses sine and cosine to create a more complex set of offsets and produces the image shown in Figure 5.7.
# This loop uses sine and cosine, so the math library must be
# imported at the top of the program.
import math

for i in range(200):
    radians_x = i / 20
    radians_y = i / 6
    x = int(75 * math.sin(radians_x)) + 200
    y = int(75 * math.cos(radians_y)) + 200
    pygame.draw.line(screen, BLACK, [x, y], [x + 5, y], 5)
Multiple elements can be drawn in one for loop, such as this code which draws the multiple X's shown in Figure 5.8.
for x_offset in range(30, 300, 30):
    pygame.draw.line(screen, BLACK, [x_offset, 100], [x_offset - 10, 90], 2)
    pygame.draw.line(screen, BLACK, [x_offset, 90], [x_offset - 10, 100], 2)
When drawing a rectangle, the computer needs coordinates for the upper left rectangle corner (the origin), and a height and width.
Figure 5.9 shows a rectangle (and an ellipse, which will be explained later) with the origin at (20, 20), a width of 250 and a height of 100. When specifying a rectangle the computer needs a list of these four numbers in the order of (x, y, width, height).
The next code example draws this rectangle. The first two numbers in the list define the upper left corner at (20, 20). The next two numbers specify first the width of 250 pixels, and then the height of 100 pixels.
The 2 at the end specifies a line width of 2 pixels. The larger the number, the thicker the line around the rectangle. If this number is 0, then there will not be a border around the rectangle. Instead it will be filled in with the color specified.
# Draw a rectangle
pygame.draw.rect(screen, BLACK, [20, 20, 250, 100], 2)
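As a small added illustration (the coordinates here are arbitrary and not part of the chapter's final listing), passing 0 for the width draws the rectangle filled instead of outlined:

# Same idea, but filled solid because the width argument is 0.
pygame.draw.rect(screen, GREEN, [300, 20, 100, 50], 0)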
An ellipse is drawn just like a rectangle. The boundaries of a rectangle are specified, and the computer draws an ellipse inside those boundaries.
The most common mistake in working with an ellipse is to think that the starting point specifies the center of the ellipse. In reality, nothing is drawn at the starting point. It is the upper left of a rectangle that contains the ellipse.
Looking back at Figure 5.9 one can see an ellipse 250 pixels wide and 100 pixels tall. The upper left corner of the 250x100 rectangle that contains it is at (20, 20). Note that nothing is actually drawn at (20, 20). With both drawn on top of each other it is easier to see how the ellipse is specified.
# Draw an ellipse, using a rectangle as the outside boundaries
pygame.draw.ellipse(screen, BLACK, [20, 20, 250, 100], 2)
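Circles, by contrast, are specified by a center point and a radius rather than a bounding rectangle. This short example is an addition for illustration and is not part of the chapter's final listing:

# Draw a circle centered at (300, 300) with a radius of 40 pixels and a
# 2 pixel outline. A width of 0 would fill the circle instead.
pygame.draw.circle(screen, BLACK, [300, 300], 40, 2)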
What if a program only needs to draw part of an ellipse? That can be done with the arc command. This command is similar to the ellipse command, but it includes start and end angles for the arc to be drawn. The angles are in radians.
The code example below draws four arcs showing the four different quadrants of the circle. Each quadrant is drawn in a different color to make the arc sections easier to see. The result of this code is shown in Figure 5.10.
# Draw an arc as part of an ellipse. Use radians to determine what
# angle to draw.
pygame.draw.arc(screen, GREEN, [100, 100, 250, 200], PI / 2, PI, 2)
pygame.draw.arc(screen, BLACK, [100, 100, 250, 200], 0, PI / 2, 2)
pygame.draw.arc(screen, RED, [100, 100, 250, 200], 3 * PI / 2, 2 * PI, 2)
pygame.draw.arc(screen, BLUE, [100, 100, 250, 200], PI, 3 * PI / 2, 2)
The next line of code draws a polygon. The triangle shape is defined with three points at (100, 100) (0, 200) and (200, 200). It is possible to list as many points as desired. Note how the points are listed. Each point is a list of two numbers, and the points themselves are nested in another list that holds all the points. This code draws what can be seen in Figure 5.11.
# This draws a triangle using the polygon command
pygame.draw.polygon(screen, BLACK, [[100, 100], [0, 200], [200, 200]], 5)
Text is slightly more complex. There are three things that need to be done. First, the program creates a variable that holds information about the font to be used, such as what typeface and how big.
Second, the program creates an image of the text. One way to think of it is that the program carves out a “stamp” with the required letters that is ready to be dipped in ink and stamped on the paper.
The third thing that is done is the program tells where this image of the text should be stamped (or “blit'ed”) to the screen.
Here's an example:
# Select the font to use, size, bold, italics
font = pygame.font.SysFont('Calibri', 25, True, False)

# Render the text. "True" means anti-aliased text.
# Black is the color. The variable BLACK was defined
# above as a list of [0, 0, 0]
# Note: This line creates an image of the letters,
# but does not put it on the screen yet.
text = font.render("My text", True, BLACK)

# Put the image of the text on the screen at 250x250
screen.blit(text, [250, 250])
Want to print the score to the screen? That is a bit more complex. This does not work:
text = font.render("Score: ", score, True, BLACK)
Why? A program can't just add extra items to font.render like the print statement. Only one string can be sent to the command, therefore the actual value of score needs to be appended to the “Score: ” string. But this doesn't work either:
text = font.render("Score: " + score, True, BLACK)
If score is an integer variable, the computer doesn't know how to add it to a string. You, the programmer, must convert the score to a string. Then add the strings together like this:
text = font.render("Score: " + str(score), True, BLACK)
Now you know how to print the score. If you want to print a timer, that requires print formatting, which is discussed in a later chapter. Check the example code for this section on-line for the timer.py example.

This is a full listing of the program discussed in this chapter. This program, along with other programs, may be downloaded from:
""" Simple graphics demo Sample Python/Pygame Programs Simpson College Computer Science http://programarcadegames.com/ http://simpson.edu/computer-science/ """ # Import a library of functions called 'pygame' import pygame # Initialize the game engine pygame.init() # Define some colors BLACK = (0, 0, 0) WHITE = (255, 255, 255) BLUE = (0, 0, 255) GREEN = (0, 255, 0) RED = (255, 0, 0) PI = 3.141592653 # Set the height and width of the screen size = (400, 500) screen = pygame.display.set_mode(size) pygame.display.set_caption("Professor Craven's Cool Game") # Loop until the user clicks the close button. done = False clock = pygame.time.Clock() # Loop as long as done == False while not done: for event in pygame.event.get(): # User did something if event.type == pygame.QUIT: # If user clicked close done = True # Flag that we are done so we exit this loop # All drawing code happens after the for loop and but # inside the main while not done loop. # Clear the screen and set the screen background screen.fill(WHITE) # Draw on the screen a line from (0,0) to (100,100) # 5 pixels wide. pygame.draw.line(screen, GREEN, [0, 0], [100, 100], 5) # Draw on the screen several lines from (0,10) to (100,110) # 5 pixels wide using a loop for y_offset in range(0, 100, 10): pygame.draw.line(screen, RED, [0, 10 + y_offset], [100, 110 + y_offset], 5) # Draw a rectangle pygame.draw.rect(screen, BLACK, [20, 20, 250, 100], 2) # Draw an ellipse, using a rectangle as the outside boundaries pygame.draw.ellipse(screen, BLACK, [20, 20, 250, 100], 2) # Draw an arc as part of an ellipse. # Use radians to determine what angle to draw. pygame.draw.arc(screen, BLACK, [20, 220, 250, 200], 0, PI / 2, 2) pygame.draw.arc(screen, GREEN, [20, 220, 250, 200], PI / 2, PI, 2) pygame.draw.arc(screen, BLUE, [20, 220, 250, 200], PI, 3 * PI / 2, 2) pygame.draw.arc(screen, RED, [20, 220, 250, 200], 3 * PI / 2, 2 * PI, 2) # This draws a triangle using the polygon command pygame.draw.polygon(screen, BLACK, [[100, 100], [0, 200], [200, 200]], 5) # Select the font to use, size, bold, italics font = pygame.font.SysFont('Calibri', 25, True, False) # Render the text. "True" means anti-aliased text. # Black is the color. This creates an image of the # letters, but does not put it on the screen text = font.render("My text", True, BLACK) # Put the image of the text on the screen at 250x250 screen.blit(text, [250, 250]) # Go ahead and update the screen with what we've drawn. # This MUST happen after all the other drawing commands. pygame.display.flip() # This limits the while loop to a max of 60 times per second. # Leave this out and we will use all CPU we can. clock.tick(60) # Be IDLE friendly pygame.quit()
LESSON PLANNING OF DIVISION OF DECIMALS
Students' Learning Outcomes
- Divide a decimal by a decimal using direct division by moving decimal point.
Information for Teachers
- To divide by decimal divisors:
o Move the decimal point to the right as many places as needed to make the divisor a whole number.
o Move the decimal point in the dividend the same number of places as it was moved in the divisor.
o Place the decimal point in the quotient directly above its new position in the dividend.
o Now divide as usual.
o For example, to make the divisor 0.53 a whole number (53), move the decimal point two digits to the right in both the divisor and the dividend (see the worked identity below).
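The worked figure from the original lesson is not reproduced here, so a placeholder dividend $a$ is used: multiplying both numbers by the same power of 10 leaves the quotient unchanged.

$$a \div 0.53 = \frac{a \times 100}{0.53 \times 100} = \frac{100a}{53}$$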
Material / Resources
Writing board, chalk / marker, duster, textbook
- Ask students to solve three related whole-number division problems, such as the following:
- They will notice the relationship between the three problems and recall the steps required to solve division questions.
- Introduce that today we will apply the same division method to decimals, with a few small variations.
- Doing mathematics problems is just like eating a piece of cake. Eating a piece of cake gets difficult when we try to eat a big piece of cake all in one gulp. Let's break it into smaller pieces, and we can eat it easily.
- Teach students the steps for dividing by decimals; model them for students:
Step 1: Make the divisor a whole number by multiplying by the appropriate power of 10 (move the decimal all the way to the right)
Step 2: Multiply the dividend by the same power of 10. Add zeros if necessary (Move the decimal in the dividend the same number of places as you did for the divisor)
Step 3: Bring the decimal point in the dividend straight up into the quotient.
Step 4: Divide as usual. Example: Divide 5.39 by 1.1.
- You are not dividing by a whole number, so you need to move the decimal point so that you are dividing by a whole number:
- You are now dividing by a whole number, so you can proceed:
- Ignore the decimal point and use long division:
- Put the decimal point in the answer directly above the decimal point in the dividend:
- The answer is 4.9.
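The decimal-shift steps can also be checked with a short Python sketch. This is an illustration of ours, not part of the textbook lesson; the function name divide_by_decimal is made up for the example.

from decimal import Decimal

def divide_by_decimal(dividend, divisor):
    d1, d2 = Decimal(str(dividend)), Decimal(str(divisor))
    # Steps 1 and 2: move the decimal point in both numbers until the divisor is whole
    while d2 != d2.to_integral_value():
        d1 *= 10
        d2 *= 10
    # Steps 3 and 4: divide as usual; the decimal point is already in the right place
    return d1 / d2

print(divide_by_decimal(5.39, 1.1))   # prints 4.9, matching the worked example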
- Students practice two examples: one with a decimal that terminates and one in which zeros need to be added.
- Students review writing division in different ways and translate problems into different formats.
- Students practice independently on copies.
- Also give a few questions from the textbook.
Sum up / Conclusion
- Move the decimal point in the divisor to the right until the divisor is a whole number.
- Move the decimal point in the dividend the same number of places as it was moved in the divisor.
- Place the decimal point in the quotient directly above its new position in the dividend.
- Ask students to solve the following questions:
- Solve the following crossword puzzles of decimal division. |
The Declaration of Independence was published by the Congress, as the unanimous declaration of the thirteen United States of America, on July 4, 1776. The people in Congress declared that everyone in the world is equal, which means that everyone is also entitled to their unalienable rights. When people live under tyranny, the government that governs them should be overthrown. They argued that the British government was a despotism that should be overthrown, and that the people of the thirteen United States deserved a government of their own to provide humanity's most basic needs, which are essentially safety and happiness. The argument the Congress made can be separated into three parts: the major premise, the minor premise, and the conclusion.
The major premise is that a government which practices despotism and constrains basic but unalienable human rights should be overthrown; the minor premise is that the British government was constraining the basic but unalienable human rights of the inhabitants of the thirteen United States of America; the conclusion is that the British government should be overthrown. In order to convince readers, a great deal of evidence is offered to support the premises of the argument. On the major side, the second paragraph is devoted to supporting the major premise of the argument. It is a judgment based on universally shared norms, which simply mean that everyone has the right to pursue life, liberty, and happiness. On the other hand, paragraph three to paragraph twenty-nine, 26 paragraphs in total, are devoted to supporting the minor premise of the argument.
All of this support consists of historical facts about what the inhabitants of the United States had suffered. Despite the fact that the minor premise receives twenty-five more paragraphs of support than the major one, we can still understand the purpose and aims of this declaration. The major premise of the argument rests on universally shared norms. The people in Congress did not need to put much effort into convincing readers of it; all they needed to do was remind readers that such a norm exists. For the minor premise, all the support comes from the experience of the inhabitants themselves.
These experiences contain nothing universal that readers can understand without explanation, so the authors needed to put more effort into convincing readers to believe what they were saying. Moreover, the aim of this declaration was to overthrow the British government and to establish their own government, which is the conclusion of the argument. The main thing they needed to do to achieve this aim was to win the support of other countries. Showing how the British government dictated to the whole colony was an efficient and useful way to achieve this. These two points are the reasons the author spent many more paragraphs on the minor premise than on the major premise. |
Measures of Central Tendency
Find a Middle Ground
The measures of central tendency (mean, median, and mode) are quite simply three different ways of looking at how numbers behave toward the middle of a data set. Knowing the behavior at the center can help us to better understand data.
There are three measures of central tendency that you need to know for the GED: mean, median, and mode.
Mean: Also known as the average, the mean is the number you would get if all the data were shared equally. Simply total the data, then divide by the number of items in the data set.*
Median: The median is the centermost number in an ordered data set.*
Mode: The most commonly occurring item in a data set.
*See the GED formula sheet for more information
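For readers who want to see the three measures side by side, here is a small illustrative Python sketch (ours, not from the GED materials), using a made-up data set:

from statistics import mean, median, mode

data = [2, 7, 7, 4, 10]
print(mean(data))    # (2 + 7 + 7 + 4 + 10) / 5 = 6
print(median(data))  # ordered: 2, 4, 7, 7, 10 -> the middle value is 7
print(mode(data))    # 7 occurs most often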
Watch the Virtual GED Class video and the supplementary videos below for a complete explanation and tons of worked example problems.
Understanding Mean, Median and Mode
Make a "cheat sheet" to help you remember mean, median, and mode.
Isolate each skill before practicing a mix of straightforward mean, median, and mode problems.
NEED MORE EXAMPLES?
Experienced 1 (Mean)
Experienced 2 (Mean)
Experienced 3 (Average)
Experienced 4 (Median)
Experienced 5 (Mode)
Looking for more challenging examples? Move on to the next lesson, "GED Style Mean, Median, Mode and Range" for a ton of GED style problems, tips, and tricks! |
Polar molecules must contain one or more polar bonds due to a difference in electronegativity between the bonded atoms. Molecules containing polar bonds have no molecular polarity if the bond dipoles cancel each other out by symmetry.
Polar molecules interact through dipole-dipole intermolecular forces and hydrogen bonds. Polarity underlies a number of physical properties including surface tension, solubility, and melting and boiling points.
Polarity of bonds
Not all atoms attract electrons with the same force. The amount of "pull" an atom exerts on its electrons is called its electronegativity. Atoms with high electronegativities – such as fluorine, oxygen, and nitrogen – exert a greater pull on electrons than atoms with lower electronegativities such as alkali metals and alkaline earth metals. In a bond, this leads to unequal sharing of electrons between the atoms, as electrons will be drawn closer to the atom with the higher electronegativity.
Because electrons have a negative charge, the unequal sharing of electrons within a bond leads to the formation of an electric dipole: a separation of positive and negative electric charge. Because the amount of charge separated in such dipoles is usually smaller than a fundamental charge, they are called partial charges, denoted as δ+ (delta plus) and δ− (delta minus). These symbols were introduced by Sir Christopher Ingold and Dr. Edith Hilda (Usherwood) Ingold in 1926. The bond dipole moment is calculated by multiplying the amount of charge separated and the distance between the charges.
These dipoles within molecules can interact with dipoles in other molecules, creating dipole-dipole intermolecular forces.
Bonds can fall anywhere between two extremes – completely nonpolar or completely polar. A completely nonpolar bond occurs when the electronegativities are identical and therefore possess a difference of zero. A completely polar bond is more correctly called an ionic bond, and occurs when the difference between electronegativities is large enough that one atom actually takes an electron from the other. The terms "polar" and "nonpolar" are usually applied to covalent bonds, that is, bonds where the polarity is not complete. To determine the polarity of a covalent bond using numerical means, the difference between the electronegativities of the atoms is used.
Bond polarity is typically divided into three groups that are loosely based on the difference in electronegativity between the two bonded atoms. According to the Pauling scale:
- Nonpolar bonds generally occur when the difference in electronegativity between the two atoms is less than 0.5
- Polar bonds generally occur when the difference in electronegativity between the two atoms is roughly between 0.5 and 2.0
- Ionic bonds generally occur when the difference in electronegativity between the two atoms is greater than 2.0
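A rough illustration of these guidelines follows; this is a sketch of ours, not from the article, and the cutoffs are simply the approximate Pauling-scale thresholds listed above (the electronegativity values are standard Pauling values).

def bond_type(en_a, en_b):
    diff = abs(en_a - en_b)
    if diff < 0.5:
        return "nonpolar covalent"
    elif diff <= 2.0:
        return "polar covalent"
    else:
        return "ionic"

print(bond_type(2.55, 2.20))  # C (2.55) vs H (2.20): nonpolar covalent
print(bond_type(3.44, 2.20))  # O (3.44) vs H (2.20): polar covalent
print(bond_type(3.98, 0.93))  # F (3.98) vs Na (0.93): ionic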
Pauling based this classification scheme on the partial ionic character of a bond, which is an approximate function of the difference in electronegativity between the two bonded atoms. He estimated that a difference of 1.7 corresponds to 50% ionic character, so that a greater difference corresponds to a bond which is predominantly ionic.
As a quantum-mechanical description, Pauling proposed that the wave function for a polar molecule AB is a linear combination of wave functions for covalent and ionic molecules: ψ = aψ(A:B) + bψ(A+B−). The amount of covalent and ionic character depends on the values of the squared coefficients a2 and b2.
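To make the idea of the squared coefficients concrete, here is a tiny illustrative calculation of ours (the coefficients are made up): for a normalized mixture, the covalent and ionic fractions are just the squared coefficients.

a, b = 0.9, 0.4359                      # roughly normalized: a**2 + b**2 is about 1
covalent_character = a**2 / (a**2 + b**2)
ionic_character = b**2 / (a**2 + b**2)
print(round(covalent_character, 2), round(ionic_character, 2))  # 0.81 0.19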
Bond dipole moments
The bond dipole moment uses the idea of electric dipole moment to measure the polarity of a chemical bond within a molecule. It occurs whenever there is a separation of positive and negative charges.
The bond dipole μ is given by: μ = δ·d.
The bond dipole is modeled as δ+ — δ– with a distance d between the partial charges δ+ and δ–. It is a vector, parallel to the bond axis, pointing from minus to plus, as is conventional for electric dipole moment vectors.
Chemists often draw the vector pointing from plus to minus. This vector can be physically interpreted as the movement undergone by electrons when the two atoms are placed a distance d apart and allowed to interact: the electrons will move from their free-state positions to be localised more around the more electronegative atom.
The SI unit for electric dipole moment is the coulomb–meter. This is too large to be practical on the molecular scale. Bond dipole moments are commonly measured in debyes, represented by the symbol D, which is obtained by measuring the charge in units of 10−10 statcoulomb and the distance d in angstroms. Since 10−10 statcoulomb is about 0.208 units of elementary charge, 1.0 debye results from an electron and a proton separated by 0.208 Å. A useful conversion factor is 1 D = 3.335 64×10−30 C m.
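The conversion can be checked with a few lines of arithmetic; this is an illustrative sketch of ours using standard constants.

e = 1.602e-19          # elementary charge in coulombs
d = 0.208e-10          # 0.208 angstrom, expressed in metres
debye = 3.33564e-30    # 1 D in coulomb-metres
mu = e * d             # dipole moment of the charge pair in C·m
print(mu / debye)      # about 1.0, i.e. roughly one debye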
For diatomic molecules there is only one (single or multiple) bond, so the bond dipole moment is the molecular dipole moment, with typical values in the range of 0 to 11 D. At one extreme, a symmetrical molecule such as chlorine, Cl2, has zero dipole moment, while near the other extreme, gas-phase potassium bromide, KBr, which is highly ionic, has a dipole moment of 10.41 D.
For polyatomic molecules, there is more than one bond. The total molecular dipole moment may be approximated as the vector sum of the individual bond dipole moments. Often bond dipoles are obtained by the reverse process: a known total dipole of a molecule can be decomposed into bond dipoles. This is done to transfer bond dipole moments to molecules that have the same bonds, but for which the total dipole moment is not yet known. The vector sum of the transferred bond dipoles gives an estimate for the total (unknown) dipole of the molecule.
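A minimal sketch of this vector addition (ours, not from the article, with made-up bond-dipole magnitudes) shows why a bent arrangement gives a net dipole while two opposed bond dipoles cancel:

import math

def vector_sum(magnitudes_and_angles_deg):
    # Add bond dipoles as 2-D vectors and return the magnitude of the resultant
    x = sum(m * math.cos(math.radians(a)) for m, a in magnitudes_and_angles_deg)
    y = sum(m * math.sin(math.radians(a)) for m, a in magnitudes_and_angles_deg)
    return math.hypot(x, y)

# Two bond dipoles of 1.5 D separated by about 104.5 degrees (a water-like geometry)
print(round(vector_sum([(1.5, 0.0), (1.5, 104.5)]), 2))   # about 1.84, a nonzero net dipole
# Two equal bond dipoles pointing in opposite directions (a CO2-like geometry)
print(round(vector_sum([(2.0, 0.0), (2.0, 180.0)]), 2))   # 0.0, the dipoles cancel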
Polarity of molecules
A molecule is composed of one or more chemical bonds between molecular orbitals of different atoms. A molecule may be polar either as a result of polar bonds due to differences in electronegativity as described above, or as a result of an asymmetric arrangement of nonpolar covalent bonds and non-bonding pairs of electrons known as a full molecular orbital.
While the molecules can be described as "polar covalent", "nonpolar covalent", or "ionic", this is often a relative term, with one molecule simply being more polar or more nonpolar than another. However, the following properties are typical of such molecules.
When comparing a polar and nonpolar molecule with similar molar masses, the polar molecule in general has a higher boiling point, because the dipole–dipole interaction between polar molecules results in stronger intermolecular attractions. One common form of polar interaction is the hydrogen bond, which is also known as the H-bond. For example, water forms H-bonds and has a molar mass M = 18 and a boiling point of +100 °C, compared to nonpolar methane with M = 16 and a boiling point of –161 °C.
Due to the polar nature of the water molecule itself, other polar molecules are generally able to dissolve in water. Most nonpolar molecules are water-insoluble (hydrophobic) at room temperature. Many nonpolar organic solvents, such as turpentine, are able to dissolve nonpolar substances.
Polar liquids have a tendency to be more viscous than nonpolar liquids. For example, nonpolar hexane is much less viscous than polar water. However, molecule size is a much stronger factor on viscosity than polarity, where compounds with larger molecules are more viscous than compounds with smaller molecules. Thus, water (small polar molecules) is less viscous than hexadecane (large nonpolar molecules).
A polar molecule has a net dipole as a result of the opposing charges (i.e. having partial positive and partial negative charges) from polar bonds arranged asymmetrically. Water (H2O) is an example of a polar molecule since it has a slight positive charge on one side and a slight negative charge on the other. The dipoles do not cancel out, resulting in a net dipole. The dipole moment of water depends on its state. In the gas phase the dipole moment is ≈ 1.86 debye (D), whereas liquid water (≈ 2.95 D) and ice (≈ 3.09 D) are higher due to differing hydrogen-bonded environments. Other examples include sugars (like sucrose), which have many polar oxygen–hydrogen (−OH) groups and are overall highly polar.
If the bond dipole moments of the molecule do not cancel, the molecule is polar. For example, the water molecule (H2O) contains two polar O−H bonds in a bent (nonlinear) geometry. The bond dipole moments do not cancel, so that the molecule forms a molecular dipole with its negative pole at the oxygen and its positive pole midway between the two hydrogen atoms. In the figure each bond joins the central O atom with a negative charge (red) to an H atom with a positive charge (blue).
The hydrogen fluoride, HF, molecule is polar by virtue of polar covalent bonds – in the covalent bond electrons are displaced toward the more electronegative fluorine atom.
Ammonia, NH3, is a molecule whose three N−H bonds have only a slight polarity (toward the more electronegative nitrogen atom). The molecule has two lone electrons in an orbital that points towards the fourth apex of an approximately regular tetrahedron, as predicted by the VSEPR theory. This orbital is not participating in covalent bonding; it is electron-rich, which results in a powerful dipole across the whole ammonia molecule.
In ozone (O3) molecules, the two O−O bonds are nonpolar (there is no electronegativity difference between atoms of the same element). However, the distribution of other electrons is uneven – since the central atom has to share electrons with two other atoms, but each of the outer atoms has to share electrons with only one other atom, the central atom is more deprived of electrons than the others (the central atom has a formal charge of +1, while the outer atoms each have a formal charge of −1⁄2). Since the molecule has a bent geometry, the result is a dipole across the whole ozone molecule.
A molecule may be nonpolar either when there is an equal sharing of electrons between the two atoms of a diatomic molecule or because of the symmetrical arrangement of polar bonds in a more complex molecule. For example, boron trifluoride (BF3) has a trigonal planar arrangement of three polar bonds at 120°. This results in no overall dipole in the molecule.
Carbon dioxide (CO2) has two polar C=O bonds, but the geometry of CO2 is linear so that the two bond dipole moments cancel and there is no net molecular dipole moment; the molecule is nonpolar.
Examples of household nonpolar compounds include fats, oil, and petrol/gasoline.
In the methane molecule (CH4) the four C−H bonds are arranged tetrahedrally around the carbon atom. Each bond has polarity (though not very strong). The bonds are arranged symmetrically so there is no overall dipole in the molecule. The diatomic oxygen molecule (O2) does not have polarity in the covalent bond because of equal electronegativity, hence there is no polarity in the molecule.
Large molecules that have one end with polar groups attached and another end with nonpolar groups are described as amphiphiles or amphiphilic molecules. They are good surfactants and can aid in the formation of stable emulsions, or blends, of water and fats. Surfactants reduce the interfacial tension between oil and water by adsorbing at the liquid–liquid interface.
Phospholipids are effective natural surfactants that have important biological functions
Predicting molecule polarity
|Polarity||Structure type||Description||Example||Name||Dipole moment (D)|
|Polar||AB||Linear molecules||CO||Carbon monoxide||0.112|
|Polar||HAx||Molecules with a single H||HF||Hydrogen fluoride||1.86|
|Polar||AxOH||Molecules with an OH at one end||C2H5OH||Ethanol||1.69|
|Polar||OxAy||Molecules with an O at one end||H2O||Water||1.85|
|Polar||NxAy||Molecules with an N at one end||NH3||Ammonia||1.42|
|Nonpolar||A2||Diatomic molecules of the same element||O2||Dioxygen||0.0|
|Nonpolar||CxAy||Most hydrocarbon compounds||C3H8||Propane||0.083|
|Nonpolar||CxAy||Hydrocarbon with center of inversion||C4H10||Butane||0.0|
Determining the point group is a useful way to predict the polarity of a molecule. In general, a molecule will not possess a dipole moment if the individual bond dipole moments of the molecule cancel each other out. This is because dipole moments are euclidean vector quantities with magnitude and direction, and two equal vectors that oppose each other cancel out.
Any molecule with a centre of inversion ("i") or a horizontal mirror plane ("σh") will not possess dipole moments. Likewise, a molecule with more than one Cn axis of rotation will not possess a dipole moment because dipole moments cannot lie in more than one dimension. As a consequence of that constraint, all molecules with dihedral symmetry (Dn) will not have a dipole moment because, by definition, D point groups have two or multiple Cn axes.
Since the C1, Cs, C∞v, Cn and Cnv point groups have no centre of inversion, no horizontal mirror plane and no multiple Cn axes, molecules in one of those point groups will have a dipole moment.
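As a small illustration of this rule, here is a sketch of ours (the helper name and the string encoding of point-group labels are our own): only the dipole-compatible groups listed above are accepted.

import re

def can_have_dipole(point_group):
    # Polar-compatible groups per the rule above: C1, Cs, C∞v, Cn, Cnv
    return re.fullmatch(r"C1|Cs|C∞v|C\d+v?", point_group) is not None

print(can_have_dipole("C2v"))   # True  (e.g. water)
print(can_have_dipole("C∞v"))   # True  (e.g. carbon monoxide)
print(can_have_dipole("D6h"))   # False (e.g. benzene)
print(can_have_dipole("Td"))    # False (e.g. methane)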
Electrical deflection of water
Contrary to popular misconception, the electrical deflection of a stream of water from a charged object is not based on polarity. The deflection occurs because of electrically charged droplets in the stream, which the charged object induces. A stream of water can also be deflected in a uniform electrical field, which cannot exert force on polar molecules. Additionally, after a stream of water is grounded, it can no longer be deflected. Weak deflection is even possible for nonpolar liquids.
- Chemical properties
- Electronegativities of the elements (data page)
- Polar point group
- Jensen, William B. (2009). "The Origin of the "Delta" Symbol for Fractional Charges". J. Chem. Educ. 86 (5): 545. Bibcode:2009JChEd..86..545J. doi:10.1021/ed086p545.
- Ingold, C. K.; Ingold, E. H. (1926). "The Nature of the Alternating Effect in Carbon Chains. Part V. A Discussion of Aromatic Substitution with Special Reference to Respective Roles of Polar and Nonpolar Dissociation; and a Further Study of the Relative Directive Efficiencies of Oxygen and Nitrogen". J. Chem. Soc. 129: 1310–1328. doi:10.1039/jr9262901310.
- Pauling, L. (1960). The Nature of the Chemical Bond (3rd ed.). Oxford University Press. pp. 98–100. ISBN 0801403332.
- Pauling, L. (1960). The Nature of the Chemical Bond (3rd ed.). Oxford University Press. p. 66. ISBN 0801403332.
- Blaber, Mike (2018). "Dipole_Moments". Libre Texts. California State University.
- IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version: (2006–) "electric dipole moment, p". doi:10.1351/goldbook.E01929
- Hovick, James W.; Poler, J. C. (2005). "Misconceptions in Sign Conventions: Flipping the Electric Dipole Moment". J. Chem. Educ. 82 (6): 889. Bibcode:2005JChEd..82..889H. doi:10.1021/ed082p889.
- Atkins, Peter; de Paula, Julio (2006). Physical Chemistry (8th ed.). W.H. Freeman. p. 620 (and inside front cover). ISBN 0-7167-8759-8.
- Barrow, G. M. (1966). Physical Chemistry (2nd ed.). McGraw-Hill.
- Van Wachem, R.; De Leeuw, F. H.; Dymanus, A. (1967). "Dipole Moments of KF and KBr Measured by the Molecular‐Beam Electric‐Resonance Method". J. Chem. Phys. 47 (7): 2256. Bibcode:1967JChPh..47.2256V. doi:10.1063/1.1703301.
- Clough, Shepard A.; Beers, Yardley; Klein, Gerald P.; Rothman, Laurence S. (1 September 1973). "Dipole moment of water from Stark measurements of H2O, HDO, and D2O". The Journal of Chemical Physics. 59 (5): 2254–2259. Bibcode:1973JChPh..59.2254C. doi:10.1063/1.1680328.
- Gubskaya, Anna V.; Kusalik, Peter G. (27 August 2002). "The total molecular dipole moment for liquid water". The Journal of Chemical Physics. 117 (11): 5290–5302. Bibcode:2002JChPh.117.5290G. doi:10.1063/1.1501122.
- Batista, Enrique R.; Xantheas, Sotiris S.; Jónsson, Hannes (15 September 1998). "Molecular multipole moments of water molecules in ice Ih". The Journal of Chemical Physics. 109 (11): 4546–4551. Bibcode:1998JChPh.109.4546B. doi:10.1063/1.477058.
- Ziaei-Moayyed, Maryam; Goodman, Edward; Williams, Peter (2000-11-01). "Electrical Deflection of Polar Liquid Streams: A Misunderstood Demonstration". Journal of Chemical Education. 77 (11): 1520. Bibcode:2000JChEd..77.1520Z. doi:10.1021/ed077p1520. ISSN 0021-9584. |
The time signature (also known as meter signature, metre signature, or measure signature) is a notational convention used in Western musical notation to specify how many beats (pulses) are to be contained in each bar and which note value is to be given one beat. In a musical score, the time signature appears at the beginning of the piece, as a time symbol or stacked numerals, such as the common-time symbol or 3/4 (read common time and three-four time, respectively), immediately following the key signature or immediately following the clef symbol if the key signature is empty. A mid-score time signature, usually immediately following a barline, indicates a change of meter.
There are various types of time signatures, depending on whether the music follows simple rhythms or involves unusual shifting tempos, including: simple (such as 3/4 or 4/4), compound (e.g., 9/8 or 12/8), complex (e.g., 5/4 or 7/8), mixed (e.g., 5/8 & 3/8 or 6/8 & 3/4), additive (e.g., 3+2+3/8), fractional (e.g., 2 1⁄2/4), and irrational meters (e.g., 3/10 or 5/24).
Simple time signatures
Simple time signatures consist of two numerals, one stacked above the other:
- The lower numeral indicates the note value that represents one beat (the beat unit).
- The upper numeral indicates how many such beats there are grouped together in a bar.
The most common simple time signatures are 2/4, 3/4, and 4/4.
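As a quick illustration of how the two numerals are read, here is a short sketch of ours (not from the article); the function name is made up.

def parse_simple_signature(signature):
    upper, lower = signature.split("/")
    beats_per_bar = int(upper)      # how many beats in each bar
    beat_unit = 1 / int(lower)      # note value of one beat, as a fraction of a whole note
    return beats_per_bar, beat_unit

print(parse_simple_signature("3/4"))   # (3, 0.25) -> three quarter-note beats per bar
print(parse_simple_signature("2/2"))   # (2, 0.5)  -> two half-note beats per bar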
Notational variations in simple time
The common-time symbol (a broken circle resembling a letter C) is sometimes used for 4/4 time, also called common time or imperfect time. The symbol is derived from a broken circle used in music notation from the 14th through 16th centuries, where a full circle represented what today would be written in 3/2 or 3/4 time, and was called tempus perfectum (perfect time). The same symbol with a vertical stroke through it is also a carry-over from the notational practice of late-Medieval and Renaissance music, where it signified tempus imperfectum diminutum (diminished imperfect time)—more precisely, a doubling of the speed, or proportio dupla, in duple meter. In modern notation, it is used in place of 2/2 and is called alla breve or, colloquially, cut time or cut common time.
Compound time signatures
In compound meter, subdivisions (which are what the upper number represents in these meters) of the main beat are in three equal parts, so that a dotted note (half again longer than a regular note) becomes the beat unit. Compound time signatures are named as if they were simple time signatures, in which the one-third part of the beat unit is the beat, so the top number is commonly 6, 9 or 12 (multiples of 3). The lower number is most commonly an 8 (an eighth-note): as in 9/8 or 12/8.
3/4 is a simple signature that represents three quarter notes. It has a basic feel of (Bold denotes a stressed beat):
- one two three (as in a waltz)
- Each quarter note might comprise two eighth-notes (quavers) giving a total of six such notes, but it still retains that three-in-a-bar feel:
- one and two and three and
In principle, 6/8 can be thought of as the same as the six-quaver form of 3/4 above, the only difference being that the eighth note is selected as the beat unit. But whereas the six quavers in 3/4 had been in three groups of two, in practice 6/8 is understood to mean that they are in two groups of three, with a two-in-a-bar feel (Bold denotes a stressed beat):
- one and a, two and a
- one two three, four five six
Beat and time
Time signatures indicating two beats per bar (whether it is simple or compound) are called duple time; those with three beats to the bar are triple time. To the ear, a bar may seem like one singular beat. For example, a fast waltz, notated in 3/4 time, may be described as being one in a bar. Terms such as quadruple (4), quintuple (5), and so on are also occasionally used.
Actual beat divisions
As mentioned above, though the score indicates a 3/4 time, the actual beat division can be the whole bar, particularly at faster tempos. Correspondingly, at slow tempos the beat indicated by the time signature could in actual performance be divided into smaller units.
Interchangeability, rewriting meters
On a formal mathematical level, the time signatures of, e.g., 3/4 and 3/8 are interchangeable. In a sense, all simple triple time signatures, such as 3/2, etc.—and all compound duple times, such as 6/16 and so on, are equivalent. A piece in 3/4 can be easily rewritten in 3/8, simply by halving the length of the notes. Other time signature rewritings are possible: most commonly a simple time signature with triplets translates into a compound meter.
Though formally interchangeable, for a composer or performing musician, different time signatures often have different connotations. First, a smaller note value in the beat unit implies a more complex notation, which can affect ease of performance. Second, beaming affects the choice of actual beat divisions. It is, for example, more natural to use the quarter note/crotchet as a beat unit in 6/4 or 2/2 than the eighth/quaver in 6/8 or 2/4. Third, time signatures are traditionally associated with different music styles—it might seem strange to notate a rock tune in 4/8 or 4/2.
Stress and meter
For all meters, the first beat (the downbeat, ignoring any anacrusis) is usually stressed, generally with greater activity leading into it (though not always, for example in reggae, where the offbeats are stressed); in time signatures with four groups in the bar (such as 4/4 and 12/8), the second and third beats are often quieter while the fourth beat becomes more active, releasing its energy on the next downbeat. Cut common time (2/2) instead has a stressed downbeat moving to the next downbeat, giving a feel of one beat per measure. This gives a regular pattern of stressed and unstressed beats, though notes on stressed beats are not necessarily louder or more important; indeed, it is the energy of the music's flow that counts.
Most frequent time signatures
|Simple time signatures|
|4/4||Common time: widely used in most forms of Western popular music. Most common time signature in rock, blues, country, funk, and pop|
|2/2||Alla breve, cut time: used for marches and fast orchestral music. Frequently occurs in musical theater. The same effect is sometimes obtained by marking a 4/4 meter "in 2"|
|2/4||Used for polkas or marches|
|3/4||Used for waltzes, minuets, scherzi, country & western ballads, R&B, sometimes used in pop|
|3/8||Also used for the above, but usually suggests higher tempo or shorter hypermeter|
|Compound time signatures|
|6/8||Double jigs, polkas, sega, salegy, tarantella, marches, barcarolles, loures, and some rock music|
|9/8||Compound triple time, used in triple ("slip") jigs, otherwise occurring rarely (The Ride of the Valkyries, Tchaikovsky's Fourth Symphony, and the final movement of the Bach Violin Concerto in A minor (BWV 1041) are familiar examples). Debussy's Clair de lune and Prélude à l'après-midi d'un faune (opening bars) are in 9/8|
|12/8||Also common in slower blues (where it is called a shuffle) and doo-wop; also used more recently in rock music. Can also be heard in some jigs like The Irish Washerwoman. This is also the time signature of Movement II, "By the Brook," of Beethoven's Symphony No. 6 (the Pastoral)|
Complex time signatures
Signatures that do not fit the usual duple or triple categories are called complex, asymmetric, irregular, unusual, or odd—though these are broad terms, and usually a more specific description is appropriate. The term odd meter, however, sometimes describes time signatures in which the upper number is simply odd rather than even, including 3/4 and 9/8. The irregular meters (not fitting duple or triple categories) are common in some non-Western music, but rarely appeared in formal written Western music until the 19th century. Early anomalous examples appeared in Spain between 1516 and 1520, but the Delphic Hymns to Apollo (one by Athenaeus is entirely in quintuple meter, the other by Limenius predominantly so), carved on the exterior walls of the Athenian Treasury at Delphi in 128 BC, are in the relatively common cretic meter, with five beats to a foot. The third movement (Larghetto) of Chopin's Piano Sonata No. 1 (1828) is an early, but by no means the earliest, example of 5/4 time in solo piano music. Reicha's Fugue 20 from his Thirty-six Fugues, published in 1803, is also for piano and is in 5/8. The waltz-like second movement of Tchaikovsky's Pathétique Symphony, often described as a limping waltz, is a notable example of 5/4 time in orchestral music. Examples from the 20th century include Holst's Mars, the Bringer of War and Neptune, the Mystic (both in 5/4) from the orchestral suite The Planets, Paul Hindemith's Fugue Secunda in G (5/8) from Ludus Tonalis, the ending of Stravinsky's Firebird (7/4), the fugue from Heitor Villa-Lobos's Bachianas Brasileiras No. 9 (11/8), and the themes for the Mission Impossible television series by Lalo Schifrin (in 5/4) and Jerry Goldsmith's theme for Room 222 (in 7/4).
In the Western popular music tradition, unusual time signatures occur as well, with progressive rock in particular making frequent use of them. The use of shifting meters in The Beatles' "Strawberry Fields Forever" (1967) and the use of quintuple meter in their "Within You, Without You" (1967) are well-known examples, as is Radiohead's "Paranoid Android" (which includes 7/8).
Paul Desmond's jazz composition Take Five, in 5/4 time, was one of a number of irregular-meter compositions that The Dave Brubeck Quartet played. They played other compositions in 11/4 (Eleven Four), 7/4 (Unsquare Dance)—and 9/8 (Blue Rondo à la Turk), expressed as 2+2+2+3/8. This last is an example of a work in a signature that, despite appearing merely compound triple, is actually more complex. Brubeck's title refers to the characteristic aksak meter of the Turkish karşılama dance.
However, such time signatures are only unusual in most Western music. Traditional music of the Balkans uses such meters extensively. Bulgarian dances, for example, include forms with 5, 7, 9, 11, 13, 15, 22, 25 and other numbers of beats per measure. These rhythms are notated as additive rhythms based on simple units, usually 2, 3 and 4 beats, though the notation fails to describe the metric "time bending" taking place, or compound meters. For example, the Bulgarian Sedi Donka consists of 25 beats divided 7+7+11, where 7 is subdivided 3+2+2 and 11 is subdivided 2+2+3+2+2 or 4+3+4. See Variants below.
While time signatures usually express a regular pattern of beat stresses continuing through a piece (or at least a section), sometimes composers place a different time signature at the beginning of each bar, resulting in music with an extremely irregular rhythmic feel. In this case the time signatures are an aid to the performers, and not necessarily an indication of meter. The Promenade from Mussorgsky's Pictures at an Exhibition (1874) is a good example:
Igor Stravinsky's The Rite of Spring (1913) is famous for its "savage" rhythms:
In such cases, a convention that some composers follow (e.g., Olivier Messiaen, in his La Nativité du Seigneur and Quatuor pour la fin du temps) is to simply omit the time signature. Charles Ives's Concord Sonata has measure bars for select passages, but the majority of the work is unbarred.
Some pieces have no time signature, as there is no discernible meter. This is commonly known as free time. Sometimes one is provided (usually 4/4) so that the performer finds the piece easier to read, and simply has 'free time' written as a direction. Sometimes the word FREE is written downwards on the staff to indicate the piece is in free time. Erik Satie wrote many compositions that are ostensibly in free time, but actually follow an unstated and unchanging simple time signature. Later composers used this device more effectively, writing music almost devoid of a discernibly regular pulse.
If two time signatures alternate repeatedly, sometimes the two signatures are placed together at the beginning of the piece or section, as shown below:
To indicate more complex patterns of stresses, such as additive rhythms, more complex time signatures can be used. Additive meters have a pattern of beats that subdivide into smaller, irregular groups. Such meters are sometimes called imperfect, in contrast to perfect meters, in which the bar is first divided into equal units.
For example, the signature 3+2+3/8—which can be written (3+2+3)/8—means that there are 8 quaver beats in the bar, divided as the first of a group of three eighth notes (quavers) that are stressed, then the first of a group of two, then the first of a group of three again. The stress pattern is usually counted as one-two-three-one-two-one-two-three. This kind of time signature is commonly used to notate folk and non-Western types of music. In classical music, Béla Bartók and Olivier Messiaen have used such time signatures in their works. The first movement of Maurice Ravel's Piano Trio in A Minor is written in 8/8, in which the beats are likewise subdivided into 3 + 2 + 3 to reflect Basque dance rhythms.
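The counting pattern just described can be generated mechanically; the following is a small illustrative sketch of ours (the function name is made up), which stresses the first beat of each group.

def count_pattern(groups):
    counts = []
    for group in groups:
        counts.append("ONE")                          # stressed first beat of the group
        counts.extend(str(i) for i in range(2, group + 1))
    return " ".join(counts)

print(count_pattern([3, 2, 3]))  # ONE 2 3 ONE 2 ONE 2 3, matching the 3+2+3/8 count above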
Romanian musicologist Constantin Brăiloiu had a special interest in compound time signatures, developed while studying the traditional music of certain regions in his country. While investigating the origins of such unusual meters, he learned that they were even more characteristic of the traditional music of neighboring peoples (e.g., the Bulgarians). He suggested that such timings can be regarded as compounds of simple two-beat and three-beat meters, where an accent falls on every first beat, even though, for example in Bulgarian music, beat lengths of 1, 2, 3, 4 are used in the metric description. In addition, when focused only on stressed beats, simple time signatures can count as beats in a slower, compound time. However, there are two different-length beats in this resulting compound time, one half again as long as the short beat (or conversely, the short beat is 2⁄3 the value of the long). This type of meter is called aksak (the Turkish word for "limping"), impeded, jolting, or shaking, and is described as an irregular bichronic rhythm. A certain amount of confusion for Western musicians is inevitable, since a measure they would likely regard as 7/16, for example, is a three-beat measure in aksak, with one long and two short beats (with subdivisions of 2+2+3, 2+3+2, or 3+2+2).
Folk music may make use of metric time bends, so that the proportions of the performed metric beat time lengths differ from the exact proportions indicated by the metric. Depending on playing style of the same meter, the time bend can vary from non-existent to considerable; in the latter case, some musicologists may want to assign a different meter. For example, the Bulgarian tune Eleno Mome is written as 7=2+2+1+2, 13=4+4+2+3, 12=3+4+2+3, but an actual performance (e.g., Smithsonian Eleno Mome) may be closer to 4+4+2+3.5. The Macedonian 3+2+2+3+2 meter is even more complicated, with heavier time bends, and use of quadruples on the threes. The metric beat time proportions may vary with the speed that the tune is played. The Swedish Boda Polska (Polska from the parish Boda) has a typical elongated second beat.
In Western classical music, metric time bend is used in the performance of the Viennese Waltz. Most Western music uses metric ratios of 2:1, 3:1, or 4:1 (two-, three- or four-beat time signatures)—in other words, integer ratios that make all beats equal in time length. So, relative to that, 3:2 and 4:3 ratios correspond to very distinctive metric rhythm profiles. Complex accentuation occurs in Western music, but as syncopation rather than as part of the metric accentuation.
Brăiloiu borrowed a term from Turkish medieval music theory: aksak (Turkish for crippled). Such compound time signatures fall under the "aksak rhythm" category that he introduced along with a couple more that should describe the rhythm figures in traditional music. The term Brăiloiu revived had moderate success worldwide, but in Eastern Europe it is still frequently used. However, aksak rhythm figures occur not only in a few European countries, but on all continents, featuring various combinations of the two and three sequences. The longest are in Bulgaria. The shortest aksak rhythm figures follow the five-beat timing, comprising a two and a three (or three and two).
Some composers have used fractional beats: for example, the time signature 2 1⁄2/4 appears in Carlos Chávez's Piano Sonata No. 3 (1928) IV, m. 1.
Music educator Carl Orff proposed replacing the lower number of the time signature with an actual note image, as shown at right. This system eliminates the need for compound time signatures (described above), which are confusing to beginners. While this notation has not been adopted by music publishers generally (except in Orff's own compositions), it is used extensively in music education textbooks. Similarly, American composers George Crumb and Joseph Schwantner, among others, have used this system in many of their works.
Another possibility is to extend the barline where a time change is to take place above the top instrument's line in a score and to write the time signature there, and there only, saving the ink and effort that would have been spent writing it in each instrument's staff. Henryk Górecki's Beatus Vir is an example of this. Alternatively, music in a large score sometimes has time signatures written as very long, thin numbers covering the whole height of the score rather than replicating it on each staff; this is an aid to the conductor, who can see signature changes more easily.
Irrational meters
These are time signatures, used for so-called irrational bar lengths, that have a denominator that is not a power of two (1, 2, 4, 8, 16, 32, etc.) (or, mathematically speaking, is not a dyadic rational). These are based on beats expressed in terms of fractions of full beats in the prevailing tempo—for example 3/10 or 5/24. For example, where 4/4 implies a bar construction of four quarter-parts of a whole note (i.e., four quarter notes), 4/3 implies a bar construction of four third-parts of it. These signatures are only of utility when juxtaposed with other signatures with varying denominators; a piece written entirely in 4/3, say, could be more legibly written out in 4/4.
Metric modulation is "a somewhat distant analogy". It is arguable whether the use of these signatures makes metric relationships clearer or more obscure to the musician; it is always possible to write a passage using non-irrational signatures by specifying a relationship between some note length in the previous bar and some other in the succeeding one. Sometimes, successive metric relationships between bars are so convoluted that the pure use of irrational signatures would quickly render the notation extremely hard to penetrate. Good examples, written entirely in conventional signatures with the aid of between-bar specified metric relationships, occur a number of times in John Adams' opera Nixon in China (1987), where the sole use of irrational signatures would quickly produce massive numerators and denominators.
Historically, this device has been prefigured wherever composers wrote tuplets. For example, a 2/4 bar of 3 triplet crotchets could arguably be written as a bar of 3/6. Henry Cowell's piano piece Fabric (1920) employs separate divisions of the bar (anything from 1 to 9) for the three contrapuntal parts, using a scheme of shaped note heads to visually clarify the differences, but the pioneering of these signatures is largely due to Brian Ferneyhough, who says that he "find[s] that such 'irrational' measures serve as a useful buffer between local changes of event density and actual changes of base tempo." Thomas Adès has also used them extensively—for example in Traced Overhead (1996), the second movement of which contains, among more conventional meters, bars in such signatures as 2/14 and 5/24.
A gradual process of diffusion into less rarefied musical circles seems underway. For example, John Pickard's Eden, commissioned for the 2005 finals of the National Brass Band Championships of Great Britain, contains bars of 3/10 and 7/12.
Notationally, rather than using Cowell's elaborate series of notehead shapes, the same convention has been invoked as when normal tuplets are written; for example, one beat in 4/5 is written as a normal quarter note, four quarter notes complete the bar, but the whole bar lasts only 4⁄5 of a reference whole note, and a beat 1⁄5 of one (or 4⁄5 of a normal quarter note). This is notated in exactly the same way that one would write if one were writing the first four quarter notes of five quintuplet quarter notes.
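The arithmetic behind these signatures is simple fractions; the following short sketch (ours, not from the article) computes how long a bar lasts relative to a whole note, whether or not the denominator is a power of two.

from fractions import Fraction

def bar_length(upper, lower):
    # Length of a bar as a fraction of a whole note
    return Fraction(upper, lower)

print(bar_length(4, 4))   # 1      -> a 4/4 bar lasts one whole note
print(bar_length(4, 5))   # 4/5    -> a 4/5 bar lasts 4/5 of a whole note
print(bar_length(3, 10))  # 3/10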
This article uses irrational in the music theory sense, not the mathematical sense, where an irrational number is one that cannot be written as a ratio of whole numbers. However, a few pieces from Conlon Nancarrow's Studies for Player Piano use a time signature that is irrational in the mathematical sense. One piece contains a canon with a part augmented in the ratio √42:1 (approximately 6.48:1). Another has a time signature of π/e, amongst others.
Two such signatures must be combined to form a polymeter for the difference to be heard, since 4/3, say, in isolation, is identical to 4/4.
Early music usage
Mensural time signatures
In the 14th, 15th and 16th centuries, a period in which mensural notation was used, four basic mensuration signs determined the proportion between the two main units of rhythm. There were no measure or bar lines in music of this period; these signs, the ancestors of modern time signatures, indicate the ratio of duration between different note values. The relation between the breve and the semibreve was called tempus, and the relation between the semibreve and the minim was called prolatio. The breve and the semibreve use roughly the same symbols as our modern double whole note (breve) and whole note (semibreve), but they were not limited to the same proportional values as are in use today. There are complicated rules concerning how a breve is sometimes three and sometimes two semibreves. Unlike modern notation, the duration ratios between these different values were not always 2:1; they could be either 2:1 or 3:1, and that is what, amongst other things, these mensuration signs indicated. A ratio of 3:1 was called complete, perhaps a reference to the Trinity, and a ratio of 2:1 was called incomplete.
A circle used as a mensuration sign indicated tempus perfectum (a circle being a symbol of completeness), while an incomplete circle, resembling a letter C, indicated tempus imperfectum. Assuming the breve is a beat, this corresponds to the modern concepts of triple meter and duple meter, respectively. In either case, a dot in the center indicated prolatio perfecta (compound meter) while the absence of such a dot indicated prolatio imperfecta (simple meter).
A rough equivalence of these signs to modern meters would be:
- A circle with a centre dot (tempus perfectum, prolatio perfecta) corresponds to 9/8
- A circle without a dot (tempus perfectum, prolatio imperfecta) corresponds to 3/4
- A broken circle with a centre dot (tempus imperfectum, prolatio perfecta) corresponds to 6/8
- A broken circle without a dot (tempus imperfectum, prolatio imperfecta) corresponds to 2/4
N.B.: in modern compound meters the beat is a dotted note value, such as a dotted quarter, because the ratios of the modern note value hierarchy are always 2:1. Dotted notes were never used in this way in the mensural period; the main beat unit was always a simple (undotted) note value.
Further signs indicated a proportional change of speed:
- tempus imperfectum diminutum, 1:2 proportion (twice as fast);
- tempus perfectum diminutum, 1:2 proportion (twice as fast);
- or just proportio tripla, 1:3 proportion (three times as fast, similar to triplets).
Often the ratio was expressed as two numbers, one above the other, looking similar to a modern time signature, though it could have values such as 4/3, which a conventional time signature could not.
Some proportional signs were not used consistently from one place or century to another. In addition, certain composers delighted in creating "puzzle" compositions that were intentionally difficult to decipher.
In particular, when the sign for tempus imperfectum diminutum (the broken circle with a vertical stroke) was encountered, the tactus (beat) changed from the usual semibreve to the breve, a circumstance called alla breve. This term has been sustained to the present day, and though now it means the beat is a minim (half note), in contradiction to the literal meaning of the phrase, it still indicates that the beat has changed to a longer note value.
- Alexander R. Brinkman, Pascal Programming for Music Research (Chicago: University of Chicago Press, 1990): 443, 450–63, 757, 759, 767. ISBN 0226075079; Mary Elizabeth Clark and David Carr Glover, Piano Theory: Primer Level (Miami: Belwin Mills, 1967): 12; Steven M. Demorest, Building Choral Excellence: Teaching Sight-Singing in the Choral Rehearsal (Oxford and New York: Oxford University Press, 2003): 66. ISBN 0195165500; William Duckworth, A Creative Approach to Music Fundamentals, eleventh edition (Boston, MA: Schirmer Cengage Learning, 2013): 54, 59, 379. ISBN 0840029993; Edwin Gordon, Tonal and Rhythm Patterns: An Objective Analysis: A Taxonomy of Tonal Patterns and Rhythm Patterns and Seminal Experimental Evidence of Their Difficulty and Growth Rate (Albany: SUNY Press, 1976): 36, 37, 54, 55, 57. ISBN 0873953541; Demar Irvine, Reinhard G. Pauly, Mark A. Radice, Irvine’s Writing about Music, third edition (Portland, Oregon: Amadeus Press, 1999): 209–10. ISBN 1574670492.
- Henry Cowell and David Nicholls, New Musical Resources, third edition (Cambridge and New York: Cambridge University Press, 1996): 63. ISBN 0521496519 (cloth); ISBN 0521499747 (pbk); Cynthia M. Gessele, "Thiéme, Frédéric [Thieme, Friedrich]", The New Grove Dictionary of Music and Musicians, second edition, edited by Stanley Sadie and John Tyrrell (London: Macmillan Publishers, 2001); James L. Zychowicz, Mahler's Fourth Symphony (Oxford and New York: Oxford University Press, 2005): 82–83, 107. ISBN 0195181654.
- Edwin Gordon, Rhythm: Contrasting the Implications of Audiation and Notation (Chicago: GIA Publications, 2000): 111. ISBN 1579990983.
- G. Augustus Holmes (1949). The Academic Manual of the Rudiments of Music. London: A. Weekes; Stainer & Bell. p. 17. ISBN 9780852492765.
- Willi Apel, The Notation of Polyphonic Music 900–1600, fifth edition, revised and with commentary; The Medieval Academy of America Publication no. 38 (Cambridge, Massachusetts: The Medieval Academy of America, 1953): 147–48.
- Scott Schroedl, Play Drums Today! A Complete Guide to the Basics: Level One (Milwaukee: Hal Leonard Corporation, 2001), p. 42. ISBN 0-634-02185-0.
- See File:Bach BVW 1041 Allegro Assai.png for an excerpt from the violin part of the final movement.
- Tim Emmons, Odd Meter Bass: Playing Odd Time Signatures Made Easy (Van Nuys: Alfred Publishing, 2008): 4. ISBN 978-0-7390-4081-2. "What is an 'odd meter'?...A complete definition would begin with the idea of music organized in repeating rhythmic groups of three, five, seven, nine, eleven, thirteen, fifteen, etc."
- Egert Pöhlmann and Martin L. West, Documents of Ancient Greek Music: The Extant Melodies and Fragments, edited and transcribed with commentary by Egert Pöhlmann and Martin L. West (Oxford: Clarendon Press, 2001): 70–71 and 85. ISBN 0-19-815223-X.
- "Tchaikovsky's Symphony # 6 (Pathetique), Classical Classics, Peter Gutmann". Classical Notes. Retrieved 2012-04-20.
- Edward Macan, Rocking the Classics: English Progressive Rock and the Counterculture (New York: Oxford University Press, 1997): 48. ISBN 978-0-19-509888-4.
- Radiohead (musical group). OK Computer, vocal score with guitar accompaniment and tablature (Essex, England: IMP International Music Publications; Miami, FL: Warner Bros. Publications; Van Nuys, Calif.: Alfred Music Co., Inc., 1997):[page needed]. ISBN 0-7579-9166-1.
- Manuel, Peter (1988). Popular Musics of the Non-Western World: An Introductory Survey (rev. ed.). Oxford University Press. p. 131. ISBN 9780195063349.
- Gardner Read, Music Notation: A Manual of Modern Practice (Boston: Allyn and Bacon, Inc., 1964):[page needed].
- Constantin Brăiloiu, Le rythme Aksak, Revue de Musicologie 33, nos. 99 and 100 (December 1951): 71–108. Citation on pp. 75–76.
- Gheorghe Oprea, Folclorul muzical românesc (Bucharest: Ed. Muzicala, 2002),[page needed]. ISBN 973-42-0304-5.
- "Brian Ferneyhough", The Ensemble Sospeso
- John Pickard: Eden, full score, Kirklees Music, 2005.
- Willi Apel, The Notation of Polyphonic Music 900–1600, fifth edition, revised with commentary; The Medieval Academy of America Publication no. 38 (Cambridge, Massachusetts: The Mediaeval Academy of America, 1953), p. 148.
- Willi Apel, The Notation of Polyphonic Music 900–1600, fifth edition, revised with commentary; The Medieval Academy of America Publication no. 38 (Cambridge, Massachusetts: The Mediaeval Academy of America, 1953), p. 147.
- Grateful Dead songs with unusual time signatures (Grateful Dead)
- "Funky Vergina" - a tune in 15/16 by Mode Plagal
- Odd Time Obsessed Internet Radio - dedicated to "odd" meters
- More video samples of many time signatures - made with Bounce Metronome Pro a program that can play all the time signatures mentioned in this article, even the ones that are irrational in the mathematical sense, like π |
Little was known about Titan’s surface before the Cassini-Huygens mission. Because the moon’s haze is partially transparent to near-infrared light, earlier telescopic studies exploiting this property were able to show that the surface is not uniform. Images taken in near-infrared wavelengths by the Hubble Space Telescope in 1994 revealed a bright continent-sized region, later named Xanadu Regio, on Titan’s leading face. This region was also discerned from Earth and from the Cassini spacecraft at radar wavelengths, which can penetrate the haze.
As the Cassini spacecraft orbited Saturn, it made numerous observations during a series of close flybys of Titan beginning in late 2004. On Jan. 14, 2005, the Huygens entry probe became the first spacecraft to land on a planetary surface in the outer solar system, carrying out various physical and chemical measurements of Titan’s atmosphere and transmitting high-resolution images as it descended by parachute. The Cassini-Huygens mission revealed that Titan’s surface is quite young by planetary standards, with only a few large impact craters observed. The surface is composed of three major types of terrain: bright, rough regions that are similar to Xanadu Regio, dark regions that are rich in water ice, and dark regions that are covered by fields of dunes. The surface is composed mainly of water ice, hydrocarbons, and possibly methane and ammonia ice. There is evidence for the recent condensation of ices on the surface of Titan, perhaps by active geologic processes. Although no active volcanoes were observed by Cassini, landforms that may be ice volcanoes were discovered.
Titan’s surface, like Earth’s, is sculpted by wind and probably also rain (in the form of liquid methane). “River” channels coated with dark hydrocarbon deposits are common, sometimes running along faults and sometimes with extensive tributary systems. The temperature and pressure at Titan’s surface are near methane’s triple point (the temperature and pressure at which a substance can coexist as a liquid, a solid, and a gas). Thus, the role of methane on Titan may be similar to that of water on Earth; that is, it may be the principal agent behind erosion processes.
The equatorial and temperate regions of Titan have vast areas of dunes formed by windblown sand rich in organic compounds. The Cassini spacecraft discovered an extensive system of lakes filled with liquid hydrocarbons in the north polar region. A smaller lake, Ontario Lacus, with a shrinking shoreline, has been observed in the south polar region. Reflections of the Sun have been observed on the lakes that confirm that they are filled with liquids rather than ice or sand.
Jamestown Colony, first permanent English settlement in North America, located near present-day Williamsburg, Virginia. Established on May 14, 1607, the colony gave England its first foothold in the European competition for the New World, which had been dominated by the Spanish since the voyages of Christopher Columbus in the late 15th century.
The colony was a private venture, financed and organized by the Virginia Company of London. King James I granted a charter to a group of investors for the establishment of the company on April 10, 1606. During this era, “Virginia” was the English name for the entire East Coast of North America north of Florida. The charter gave the company the right to settle anywhere from roughly present-day North Carolina to New York state. The company’s plan was to reward investors by locating gold and silver deposits and by finding a river route to the Pacific Ocean for trade with the Orient.
A contingent of approximately 105 colonists departed England in late December 1606 in three ships—the Susan Constant, the Godspeed, and the Discovery—under the command of Christopher Newport. They reached Chesapeake Bay on April 26, 1607. Soon afterward the captains of the three ships met to open a box containing the names of members of the colony’s governing council: Newport; Bartholomew Gosnold, one of the behind-the-scenes initiators of the Virginia Company; Edward-Maria Wingfield, a major investor; John Ratcliffe; George Kendall; John Martin; and Captain John Smith, a former mercenary who had fought in the Netherlands and Hungary. Wingfield became the colony’s first president. Smith had been accused of plotting a mutiny during the ocean voyage and was not admitted to the council until weeks later, on June 10.
After a period of searching for a settlement site, the colonists moored the ships off a peninsula (now an island) in the James River on the night of May 13 and began to unload them on May 14. The site’s marshy setting and humidity would prove to be unhealthful, but the site had several apparent advantages at the time the colony’s leaders chose it: ships could pull up close to it in deep water for easy loading and unloading, it was unoccupied, and it was joined to the mainland only by a narrow neck of land, making it simpler to defend. The settlement, named for James I, was known variously during its existence as James Forte, James Towne, and James Cittie.
First years (1607–09)
Most Indian tribes of the region were part of the Powhatan empire, with Chief Powhatan as its head. The colonists’ relations with the local tribes were mixed from the beginning. The two sides conducted business with each other, the English trading their metal tools and other goods for the Native Americans’ food supplies. At times the Indians showed generosity in providing gifts of food to the colony. On other occasions, encounters between the colonists and the tribes turned violent, and the Native Americans occasionally killed colonists who strayed alone outside the fort.
On May 21, 1607, a week after the colonists began occupying Jamestown, Newport took five colonists (including Smith) and 18 sailors with him on an expedition to explore the rivers flowing into the Chesapeake and to search for a way to the Pacific Ocean. On returning, they found that the colony had endured a surprise attack and had managed to drive the attackers away only with cannon fire from the ships. However, when Newport left for England on June 22 with the Susan Constant and the Godspeed—leaving the smaller Discovery behind for the colonists—he brought with him a positive report from the council in Jamestown to the Virginia Company. The colony’s leaders wrote, and probably believed, that the colony was in good condition and on track for success.
The report proved too optimistic. The colonists had not carried out the work in the springtime needed for the long haul, such as building up the food stores and digging a freshwater well. The first mass casualties of the colony took place in August 1607, when a combination of bad water from the river, disease-bearing mosquitoes, and limited food rations created a wave of dysentery, severe fevers, and other serious health problems. Numerous colonists died, and at times as few as five able-bodied settlers were left to bury the dead. In the aftermath, three members of the council—John Smith, John Martin, and John Ratcliffe—acted to eject Edward-Maria Wingfield from his presidency on September 10. Ratcliffe took Wingfield’s place. It was apparently a lawful transfer of power, authorized by the company’s rules that allowed the council to remove the president for just cause.
Shortly after Newport returned in early January 1608, bringing new colonists and supplies, one of the new colonists accidentally started a fire that leveled all of the colony’s living quarters. The fire further deepened the colony’s dependence on the Indians for food. In accord with the Virginia Company’s objectives, much of the colony’s efforts in 1608 were devoted to searching for gold. Newport had brought with him two experts in gold refining (to determine whether ore samples contained genuine gold), as well as two goldsmiths. With the support of most of the colony’s leadership, the colonists embarked on a lengthy effort to dig around the riverbanks of the area. Councillor John Smith objected, believing the quest for gold was a diversion from needed practical work. “There was no talke, no hope, no worke, but dig gold, refine gold, load gold,” one colonist remembered.
During the colony’s second summer, President Ratcliffe ordered the construction of an overelaborate capitol building. This structure came to symbolize the colony’s mismanagement in the minds of some settlers. With growing discontent over his leadership, Ratcliffe left office; whether he resigned or was overthrown is unclear. John Smith took his place on September 10, 1608. To impose discipline on malingering colonists, Smith announced a new rule: “He that will not worke shall not eate (except by sicknesse he be disabled).” Even so, the colony continued to depend on trade with the Indians for much of its food supply. During Smith’s administration, no settlers died of starvation, and the colony survived the winter with minimal losses. In late September 1608 a ship brought a new group of colonists that included Jamestown’s first women: Mistress Forrest and her maid, Anne Burras.
In London, meanwhile, the company received a new royal charter on May 23, 1609, which gave the colony a new form of management, replacing its president and council with a governor. The company determined that Sir Thomas Gates would hold that position for the first year of the new charter. He sailed for Virginia in June with a fleet of nine ships and hundreds of new colonists. The fleet was caught in a hurricane en route, however, and Gates’s ship was wrecked off Bermuda. Other ships of the fleet did arrive in Virginia that August, and the new arrivals demanded that Smith step down. Smith resisted, and finally it was agreed that he would remain in office until the expiration of his term the following month. His presidency ended early nonetheless. While still in command, Smith was seriously injured when his gunpowder bag caught fire from mysterious causes. He sailed back to England in early September. A nobleman named George Percy, the eighth son of an earl, took his place as the colony’s leader.
In the autumn of 1609, after Smith left, Chief Powhatan began a campaign to starve the English out of Virginia. The tribes under his rule stopped bartering for food and carried out attacks on English parties that came in search of trade. Hunting became highly dangerous, as the Powhatan Indians also killed Englishmen they found outside the fort. Long reliant on the Indians, the colony found itself with far too little food for the winter.
As the food stocks ran out, the settlers ate the colony’s animals—horses, dogs, and cats—and then turned to eating rats, mice, and shoe leather. In their desperation, some practiced cannibalism. The winter of 1609–10, commonly known as the Starving Time, took a heavy toll. Of the 500 colonists living in Jamestown in the autumn, fewer than one-fifth were still alive by March 1610. Sixty were still in Jamestown; another 37, more fortunate, had escaped by ship.
On May 24, 1610, two ships, the Deliverance and the Patience, unexpectedly arrived. The colonists who had wrecked on the Bermuda Islands all had survived and managed to rebuild the two ships to carry them onward. Those colonists, led by Gates (the new governor) and George Somers, assumed they would find a thriving colony. Instead they found near-skeletal survivors. Gates and Somers had brought only a small food supply, so Gates decided to abandon the colony. On June 7 all the colonists boarded four small ships to head home. On their way out of the Chesapeake Bay, however, they encountered an incoming fleet of three ships under Thomas West, 12th baron de la Warr, who ordered them to turn around. West brought with him 150 new settlers, ample provisions for the colony, and orders from the company naming him governor and captain-general of Virginia.
In his initial message to Chief Powhatan, West demanded that he return some stolen English tools and weapons and also turn over the perpetrator of the recent murder of an Englishman. Powhatan replied with “proud and disdainful answers” (as one colonist put it), telling West to either keep the colonists within the Jamestown peninsula or leave the country. The exchange brought about a state of war. West left Virginia in March 1611, after struggling with a series of diseases, but the hostilities between the Indians and the English continued.
Peace and the onset of the tobacco economy (1613–14)
Sir Samuel Argall, a mariner who had taken West back to England, returned to the colony and became acquainted with Japazeus, the chief of the Patawomeck tribe. The Patawomeck were located along the Potomac River, beyond Chief Powhatan’s empire. In March 1613 Argall chanced to learn that Powhatan’s daughter Pocahontas was staying with Japazeus. Argall resolved to kidnap her and ransom her for English prisoners held by the Powhatan Indians and for English weapons and tools the Powhatan had taken.
After persuading Japazeus to cooperate, Argall seized Pocahontas and brought her to Jamestown. He sent a messenger to Chief Powhatan with his demands. Powhatan freed the seven Englishmen he had held captive, but an impasse resulted when he did not return the weapons and tools and refused to negotiate further. Negotiations finally broke down altogether. Pocahontas was taken to an English outpost called Henricus, near present-day Richmond, Virginia. Over the following year, she converted to Christianity and became close to an Englishman named John Rolfe, a pioneering planter of tobacco. Rolfe asked for and received permission from the colony’s leaders to marry Pocahontas; the wedding took place in April 1614. As the colony’s leaders had anticipated, the marriage of Rolfe and Pocahontas brought about peaceful relations between the Powhatans and the English, which lasted almost eight years.
Rolfe’s experiments with tobacco quickly transformed the settlement. By replacing native Virginia tobacco with more-palatable plants from the West Indies, he was able to raise a product that could compete with Spanish tobacco in the British market. After Rolfe sent his first barrels to England in 1614, other colonists observed his lucrative results and imitated him. By the end of the decade, the colony had virtually a one-crop economy.
In the summer of 1619 two significant changes occurred in the colony that would have lasting influence. One was the company’s introduction of representative government to English America, which began on July 30 with the opening of the General Assembly. Voters in each of the colony’s four cities, or boroughs, elected two burgesses to represent them, as did residents of each of the seven plantations. There were limitations to the democratic aspects of the General Assembly, however. In addition to the 22 elected burgesses, the General Assembly included six men chosen by the company. Consistent with the British practice of the time, the right to vote was most likely available only to male property owners. The colony’s governor had power to veto the assembly’s enactments, as did the company itself in London. Nonetheless, the body served as a precedent for self-governance in later British colonies in North America.
The second far-reaching development was the arrival in the colony (in August) of the first Africans in English America. They had been carried on a Portuguese slave ship sailing from Angola to Veracruz, Mexico. While the Portuguese ship was sailing through the West Indies, it was attacked by a Dutch man-of-war and an English ship out of Jamestown. The two attacking ships captured about 50 slaves—men, women, and children—and brought them to outposts of Jamestown. More than 20 of the African captives were purchased there.
Records concerning the lives and status of these first African Americans are very limited. It can be assumed that they were put to work on the tobacco harvest, an arduous undertaking. English law at this time did not recognize hereditary slavery, and it is possible that they were treated at first as indentured servants (obligated to serve for a specified period of time) rather than as slaves. Clear evidence of slavery in English America does not appear until the 1640s.
Dissolution of the Virginia Company (1622–24)
Chief Powhatan’s successor, Opechancanough, carried out a surprise attack on the colony on the morning of March 22, 1622. The attack was strongest at the plantations and other English outposts that now lined the James River. The main settlement at Jamestown received a warning of the attack at the last minute and was able to mount a defense. Some 347 to 400 colonists died; reports of the death toll vary. The deaths that day represented between one-fourth and one-third of the colony’s population of 1,240.
The outcry in London over the attack, combined with political disagreements between James I and the company’s leaders, led the king to appoint a commission in April 1623 to investigate the company’s condition. Predictably, the commission returned a negative report. The king’s advisers, the Privy Council, urged the company to accept a new charter that gave the king greater control over its operations. The company refused. On May 24, 1624, motivated in part by domestic political differences with the company’s leadership, the king dissolved the company outright and made Virginia a royal colony, an arm of his government. Jamestown remained the colonial capital until Williamsburg became the capital in 1699.
The site of the Jamestown Colony is now administered by the U.S. National Park Service (as the Colonial National Historical Park) and the Association for the Preservation of Virginia Antiquities. In the 1990s, archaeological excavations uncovered thousands of artifacts from the colony. Nearby is a historical park, Jamestown Settlement, founded in 1957 and operated by the Jamestown-Yorktown Foundation. Jamestown Settlement includes reproductions of the colonists’ fort and buildings and a Powhatan village, as well as full-size replicas of the ships that made the first Jamestown voyage. The Jamestown Colony, especially the characters of John Smith and Pocahontas, has been the subject of numerous novels, dramas, and motion pictures, many of them highly fanciful.
Federalism Through Integration
A federal state differs from a unitary one – i.e., one governed from the top down – in that it tends to transfer governmental tasks onto lower administrative and territorial authorities. The federal state’s powers come directly from the people. In Switzerland, it is the people who decide about the form and the function of the state, which is founded on the communes and the cantons. These subnational units are federated and give up only those competencies that they have voluntarily and exclusively transferred to the federation. The majority of governmental power is nevertheless exercised at the lower tiers, in line with the principle of subsidiarity. The territorial authorities that make up the federal state are granted such extensive autonomy in constitution-making, legislation, the executive, and the judiciary that they can virtually be considered separate states, although they lack competencies in foreign affairs and defence.
Switzerland is the first federation in the world that came into being as a result of tightening the relations between many sovereign canton-states. Swiss federalism is therefore an example of a federalism through integration, while the name “Confederation” has merely a symbolic meaning.
Switzerland as a model of “federalism through integration” means that, at the moment the Swiss state was established, the cantons had to give up a certain part of their competencies in favour of the new political entity. When discussing Swiss federalism, one should always take into account the specificity of this country. Despite its limited area, Switzerland is characterised by deep multiculturalism, with four linguistic regions and two dominant religions. The small size of the Swiss territory, as well as its cultural, historical, and religious diversity, have shaped the peculiar character of the country’s federalism. Firstly, the Swiss rejected the idea of creating a monocultural state with only one official language and religion. Secondly, they managed to build a type of democracy that allows power to be divided not only between Catholics and Protestants, but also between the German-speaking majority and the French, Italian, and Rhaeto-Romance minorities. It is therefore a country characterised by a strong will to build an independent nation based on mutual respect for its minorities and citizens.
As is well known, nationalism is concerned with an ethnic tradition and the desire of part of a given country to become independent from the whole. In Switzerland, however, it was quite the opposite: citizens of the cantons, who represented different languages, ethnic groups, and religions, came to believe in the necessity of establishing a political entity that would not be based on a common tradition and culture, which today may make their country seem an artificial creation. In order to properly understand Swiss federalism, one cannot forget that the state comprises twenty-six cantons [three of them – Basel, Appenzell, and Unterwalden – are divided into half-cantons; author’s note], which, to a large extent, are autonomous and proudly regard themselves as “republics” or “states.” It is the cantons that are the foundation of the federation, not the linguistic communities, as is commonly thought. All but four cantons are linguistically homogeneous, which means there would still be a risk of conflict if their citizens decided to form a separate faction of cantons. Conflicts between the cantons are, however, very rare. One reason is that the boundary between the French-speaking and the German-speaking parts of Switzerland runs through three cantons that recognise two official languages. Another factor important for Swiss federalism is that the boundaries of the regions dominated by one religion coincide neither with the boundaries of the linguistic regions nor with the cantons’ borders. Switzerland has German-speaking cantons dominated by Catholics and French-speaking ones dominated by Protestants.
An additional important element is that Switzerland has no official capital. The federal authorities have their seat in Bern, but the Federal Court is located in Lausanne. Bern ranks only as the fourth largest city in the country; Zurich, Basel, and Geneva exceed it not only in population, but also in the scale of industry and banking. The factors that foster cooperation between the cantons include the concentration of bank headquarters in all the major cities, the fact that all four linguistic regions have their own tourist resorts, and the fact that the major factories are located in at least two linguistic regions. Since there are many links (linguistic, religious, economic, and cultural) between the cantons, Swiss politics is characterised by many shifting coalitions that not only cooperate but also compete. No coalition creates a long-lasting majority, and none dominates in any area. As Christoph Büchi rightly states, the battles over the “Röstigraben” [the term Röstigraben was coined during the First World War, when the citizens of the French-speaking part supported the Entente, whereas the citizens of the German-speaking part supported the Central Powers, led by the German Empire. Strangely enough, society’s sympathies coincided exactly with the boundary between the linguistic parts of Switzerland, as if the boundary between the German-speaking part and the French-speaking one were also a mental barrier between two separate nations. Today, the term is used in the context of analysing the results of referenda; author’s note] should not be taken literally, since the unity of Switzerland is guaranteed regardless of the outcome.
Switzerland’s entrepreneurship and economy also flourish thanks to its openness to diversity. Although Switzerland is commonly associated with specific products, like watches and clocks, chocolate, cheese and banks, the country’s success results from the way in which various inventions and innovations, as well as the cantons, are linked with each other. It is this typically Swiss diversity (“a multitude in unity”), stemming partially from federalism, that marks the country’s innovative character – from tourism, medical technology, and the production of chemicals and pharmaceuticals, to banking and the watch and clock industry.
An important aspect of federalism is the bicameralism of the federal parliament, which significantly differs from that of a unitary state’s parliament. The Swiss parliament – just like any other parliament of a federal state – comprises two chambers. The first, the National Council, represents the nation; its election and the distribution of its seats depend on the size of each canton’s population. In the election of the second chamber, the Council of States, every canton is given two seats regardless of its population size (half-cantons are given one seat each). Even though the legislative procedure can be initiated in either chamber, the enactment of a given law requires the approval of both, which emphasises the significance of the cantons in federal decision-making.
It should be added that the Swiss engage in the so-called “federal dialogue.” It is a forum used for regular political meetings (usually twice a year) between delegations of the Federal Council and the Conference of Cantonal Governments, which represents the cantons. The goal of the federal dialogue is the harmonisation of the federal and the cantonal policy during the time of initiating and carrying out new projects. The governing principle in this context is: dialogue is the source of a compromise. One example is the construction of the Alpine Tunnels, partially financed by the federation, which required many difficult negotiations between the federation and the cantons.
Subsidiarity – the Role of the Cantons and the Communes
A crucial element of the Swiss political system is the principle of subsidiarity, which grants the communes and the cantons all powers that do not belong explicitly to the federal authorities.
As has already been mentioned, Switzerland is a federal state consisting of three administrative levels: the federation, 26 cantons, and about 2850 communes. The decentralised division of the authorities’ tasks and the tendency to carry them out on the lowest possible level, as dictated by the principle of subsidiarity, are the foundation of the state that has existed in a virtually unchanged form since 1848.
The Swiss federal state is a direct democracy in which the people hold the highest political authority, and citizens make laws through referenda. The universal nation-wide suffrage (both active and passive) was established in 1848 for men and in 1971 for women. Democracy, however, does not naturally invite everyone to take part in political life, and, sometimes, it may even exclude them from it. That was exactly the case in Switzerland: adult men often used their democratic privilege to deny voting rights to women. Since the 1880s, Swiss women had demanded the right to vote in an increasingly stronger manner. Men’s opposition was unwavering and – under direct democracy – in full accordance with the law. This confirms the notion that democracy and progress do not always go hand in hand.
The crucial part of the current constitution of the Swiss Confederation is Article 3, which essentially determines the federal and the subsidiary character of the state: “The Cantons are sovereign except to the extent that their sovereignty is limited by the Federal Constitution. They exercise all rights that are not vested in the Confederation.”
It is just one sentence, but it contains the whole essence of Swiss federalism and the principle of subsidiarity. All the more important is the interpretation of this article, which requires that all of the state’s institutions act within the law and in good faith, while their competencies are divided between the federal and the cantonal authorities. The former carry out only those tasks that are explicitly transferred to them by the constitution, and – since the two levels of power partly overlap – competence disputes are resolved through negotiation or mediation. The duties concerning the application of the federation’s regulations are quite often transferred to the cantons, although this is not a rule. This decentralised federalism, founded upon the principle of subsidiarity, means that decisions are made at the grass-roots level with the direct participation of citizens. Decisions that cannot be made at the communal level are made by the cantonal authorities. In many areas, it is the rule that the federal government makes the law, but its implementation is left to the cantons, which carry it out according to their own requirements. A strong federalist tradition compels the authorities of the individual cantons to focus on their own problems and to abstain from criticising the actions of other cantons. It can be compared to a principle according to which competing companies do not criticise each other but carefully examine each other’s methods: if a given method proves effective, every company sets out to implement it itself.
The cantons cannot be compared to the administrative regions, provinces, or districts in other democratic countries, e.g., voivodeships in Poland. They are essentially independent territorial units resembling states, with their own constitutions, and referring to themselves precisely as “states” (Ger. Staat, Fr. Etat). The cantons have virtually all the powers of a state, except for those that they voluntarily ceded in favour of the federation, such as defence and foreign policy. According to the general political doctrine of Switzerland, apart from observing their own, sovereignly enacted laws, the cantons are obliged to implement the general federal laws. As a result, the Swiss state has an exceptionally low number of conflicting or mutually obstructing regulations that are in force on the federal, cantonal, and communal level simultaneously.
After transferring certain competencies in favour of the federation, the cantons’ numerous freedoms from before 1848 were significantly limited. However, despite the fact that the federation was gaining an increasing number of powers, the cantons did retain their strong position. It should not be forgotten that a major part of the federal revenues is distributed among the cantons. In the early days of the Swiss state’s existence, the confederation’s and the cantons’ budgets were clearly separated. Every canton was obliged to carry out its duties with its own funds. The cantons did not receive any funds from the federation even for the realisation of the federal tasks and had to rely solely on their own tax revenues. Currently, the federation is obliged to share a part of its revenue with the cantons.
This results directly from the nature of the Swiss tax system, which is considered one of the most complex in the world. Another cause is the diversity of tax rates in different parts of the country. Switzerland has three levels of income tax, which stems from the principle of subsidiarity: the communal, the cantonal, and the federal. Every Swiss canton and commune sets its own rate of cantonal or communal income tax. Each canton and each commune has its own tax act, which determines its revenues and assets. The system fosters competition between the cantons and the communes, which try to attract companies and wealthy citizens with better tax rates.
The most important factor of the cantonal autonomy is that the cantons can adopt their own constitution, provided that it is in accordance with the provisions of the Federal Constitution, and that it strengthens the unity of the federation.
It is worthwhile to mention the basic political and legal factors that ensure the cantons’ autonomy:
First, the existence of the cantons is guaranteed by the constitution. The federal legislators cannot create or dissolve any canton against its will. It is guaranteed by Article 53 of the Federal Constitution. In order to change the number of the cantons, or even to modify their territory, consent is required on the part of the community concerned, which means a long and complex procedure, including a cantonal referendum.
Second, the cantons organise their political life autonomously. Each establishes its own authorities, distributes competencies among them and determines rights, as well as duties, of its citizens. The federal law imposes only several basic principles that essentially come down to the ideas of equality and democracy. Apart from those principles, the cantons have absolute freedom in organising their internal political life.
Third, the cantons are free to elect their authorities. The Federal Council does not impose or suggest any candidates to the cantons and does not take part in elections of deputies or members of cantonal parliaments. It also lacks any powers to dissolve a canton’s parliament or dismiss its government.
Fourth, the cantons are not subject to the federation’s political control. The cantonal constitutions require the approval of the Federal Assembly, the Federal Council monitors only certain cantonal laws, and the majority of judicial decisions and rulings can be appealed before the Federal Supreme Court. This oversight, however, differs from that found in unitary states in that it is limited to questions of legality rather than of expediency. The Federal Council can, for example, refuse to accept a given law only when it decides that the law violates federal regulations.
Switzerland’s subsidiary federalism is certainly a complex and costly system. In practice, however, it enables citizens to take a real and authentic part in the state’s political life. It gives individuals eligible to vote the satisfaction of deciding jointly about matters that concern them directly. Of course, disputes and mediations between the various levels of authority occur on a daily basis. An example is the problem of accommodating the immigrants who have come to Switzerland. The decisions to admit them are made by the federal authorities, yet the immigrants are placed, naturally, on the territory of specific cantons. Some cantons object, while others agree without putting forward any additional conditions. This gives rise to the problem of financing the immigrants’ residency; some cantons demand support from the federation, since they lack the means to provide for the masses of newcomers.
Local and political affiliation is deeply rooted in the mind of a Swiss citizen, who identifies primarily with the commune and the canton and only later with the federation. A typical Swiss considers a different canton a “foreign country,” and people move from one canton to another very reluctantly.
Deep learning is a subset of machine learning in which algorithms enable computers to learn from data in order to make predictions. The gradient descent algorithm is a common method for training deep learning models.
Introduction to Deep Learning
Deep learning is a subset of machine learning that is concerned with algorithms inspired by the structure and function of the brain. Deep learning algorithms are built upon a number of layers, each of which performs a specific task. The output of one layer becomes the input for the next layer, until finally, an output is produced.
Deep learning is capable of handling large amounts of data and can learn complex patterns. It has been used for a variety of tasks including image recognition, natural language processing, and time series analysis.
The gradient descent algorithm is one of the most popular algorithms used in deep learning. It is an optimization algorithm that is used to find the values of weights that minimize a cost function. The cost function represents how far the predictions made by the model are from the actual values. The gradient descent algorithm adjusts the weights so that the predictions get closer to the actual values.
The gradient descent algorithm can be used with different types of neural networks including CNNs and RNNs.
What is the Gradient Descent Algorithm?
The gradient descent algorithm is a method used to find the local minimum of a function. It does this by taking small steps in the direction of the negative gradient of the function. The size of the steps is determined by a parameter called the learning rate.
The gradient descent algorithm is used extensively in machine learning, particularly in neural networks, as it can efficiently find the values of weights and biases that minimize a cost function. It is also used in many other optimization problems.
How does the Gradient Descent Algorithm work?
At its core, the gradient descent algorithm is an optimization technique used to find the local minimum of a function. In other words, it helps us find the values of our parameters (such as weights and biases) that minimize the cost function.
The cost function is a measure of how far off our predictions are from the actual values. We want to minimize this cost function so that our predictions are as close to the actual values as possible.
The gradient descent algorithm works by iteratively moving in the direction of steepest descent (the direction that minimizes the cost function). In each iteration, we take a small step in the direction of the steepest descent. We continue taking these small steps until we reach a point where the cost function is minimized.
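To make the iterative loop just described concrete, here is a minimal sketch in Python that repeatedly steps against the gradient of a toy one-parameter cost. The cost function, starting point, learning rate, and step count are illustrative choices, not values taken from this article.

```python
# Minimal gradient descent on a toy cost J(theta) = (theta - 3)^2.
# The cost, its gradient, the starting point, and the learning rate
# are illustrative choices.

def cost(theta):
    return (theta - 3.0) ** 2

def gradient(theta):
    return 2.0 * (theta - 3.0)   # dJ/dtheta

theta = 0.0          # initial guess
learning_rate = 0.1  # size of each step

for step in range(100):
    theta -= learning_rate * gradient(theta)  # move against the gradient

print(theta)  # converges toward 3.0, the minimizer of the cost
```

Each pass through the loop moves θ a little closer to the minimizer, which is exactly the “small steps in the direction of steepest descent” described above.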
One challenge with using gradient descent is that it can be very slow, especially if we have a lot of data. Another challenge is that we may not end up at the global minimum (the point where the cost function is minimized over all possible parameter values) but only at a local minimum (a point where the cost function is minimized only over a small region).
The Benefits of Deep Learning
Deep learning is a family of neural-network techniques responsible for many recent success stories in artificial intelligence, including the ability to automatically recognize objects, facial expressions, and spoken words. But what exactly is deep learning, and how does it work?
To understand deep learning, we first need to understand the gradient descent algorithm. The gradient descent algorithm is a mathematical procedure that is used to find the minimum value of a function. In machine learning, we use the gradient descent algorithm to find the values of the weights and biases that minimize the cost function.
The cost function is a measure of how well our model is doing. For example, in a classification task, the cost function could be the number of misclassified examples. We want to find the values of the weights and biases that minimize the cost function because this will give us the best model.
The gradient descent algorithm works by taking small steps in the direction that decreases the cost function. The size of each step is controlled by a parameter called the “learning rate”. The learning rate determines how quickly our model converges on the minimum value of the cost function. If we take too large a step, we may miss the minimum; if we take too small a step, our model will take too long to converge.
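The trade-off just described can be seen numerically. The sketch below uses an illustrative quadratic cost and made-up learning rates: a rate that is too large makes the iterates grow, a very small rate makes progress slowly, and a moderate rate converges.

```python
# Compare illustrative learning rates on J(theta) = theta^2 (gradient 2*theta).

def run(learning_rate, steps=20, theta=1.0):
    for _ in range(steps):
        theta -= learning_rate * 2.0 * theta
    return theta

print(run(1.1))    # magnitude grows each step: overshooting / divergence
print(run(0.001))  # still far from 0 after 20 steps: slow convergence
print(run(0.1))    # a moderate rate gets close to the minimum at 0
```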
Once our model has converged on the minimum value of the cost function, we have found the values of weights and biases that give us the best results. This is why deep learning works so well; by taking many small steps, we can find very precise values for our weights and biases that lead to accurate predictions.
The Drawbacks of Deep Learning
Deep learning is a powerful tool, but it has its drawbacks. One of the biggest problems is that deep learning algorithms can be very slow to converge. That’s where the gradient descent algorithm comes in.
Gradient descent is an optimization algorithm that can help speed up the training process of deep learning algorithms. However, there are some drawbacks to using gradient descent. One is that it can be difficult to set the learning rate. If the learning rate is too high, the algorithm will diverge, and if it’s too low, the algorithm will take a long time to converge. Another problem with gradient descent is that it can be sensitive to local minima. This means that if the cost surface contains many hills and valleys, the algorithm may get stuck in a local minimum, which is not necessarily the global minimum.
How to Implement the Gradient Descent Algorithm
If you’re just getting started with deep learning, it can be difficult to know where to begin. In this article, we’ll walk through the gradient descent algorithm, which is used for optimization in many machine learning models. After reading this article, you should have a good understanding of how the algorithm works and how to implement it in Python.
What is gradient descent?
Gradient descent is an optimization algorithm used to minimize cost functions by iteratively moving in the direction of steepest descent. In other words, the algorithm tries to find the values of the parameters (weights) that minimize the cost function.
The cost function is often written as J(θ), where θ represents the parameters (weights) of the model. The goal is to find the value of θ that minimizes J(θ).
How does gradient descent work?
The gradient descent algorithm begins with a set of initial parameter values (θ0). The algorithm then iteratively improves these values by taking small steps in the direction that decreases J(θ).
More specifically, at each iteration, the algorithm calculates the partial derivative of J(θ) with respect to each parameter in θ and updates each parameter accordingly:
θj := θj − α ∂J(θ)/∂θj   for j = 0, …, n   (1)
In this equation, α is the learning rate, which determines how large each parameter update will be. The partial derivative ∂J(θ)/∂θj tells us how much J(θ) will change if we slightly change θj. Therefore, equation (1) says that we should update each parameter θj in proportion to how much changing it would affect J(θ).
We can write equation (1) more compactly as:
θ := θ − α∇J(θ),   where ∇J(θ) = (∂J(θ)/∂θ0, …, ∂J(θ)/∂θn)   (2)
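As one way to see equation (2) in action, the following sketch applies the vectorized update θ := θ − α∇J(θ) to an ordinary least-squares cost on synthetic data. The data, the learning rate α, and the iteration count are made up for illustration; this is a sketch, not the article’s own implementation.

```python
import numpy as np

# Vectorized update theta := theta - alpha * grad J(theta), as in equation (2),
# for a least-squares cost J(theta) = (1/2m) * ||X @ theta - y||^2.
# The synthetic data and hyperparameters below are illustrative.

rng = np.random.default_rng(0)
m = 100
X = np.column_stack([np.ones(m), rng.uniform(-1, 1, m)])  # bias column + one feature
true_theta = np.array([2.0, -3.0])
y = X @ true_theta + 0.1 * rng.standard_normal(m)

theta = np.zeros(2)
alpha = 0.1
for _ in range(1000):
    grad = X.T @ (X @ theta - y) / m   # gradient of J(theta)
    theta -= alpha * grad              # equation (2)

print(theta)  # approaches [2, -3]
```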
Tips for Optimizing the Gradient Descent Algorithm
The gradient descent algorithm is a powerful tool for optimizing machine learning models. However, there are a few potential pitfalls that can occur if the algorithm is not used correctly. In this article, we’ll explore some tips for avoiding these pitfalls and optimizing the gradient descent algorithm for better results.
One common pitfall is failing to normalize the data before training the model. This can cause the algorithm to converge slowly or even fail to converge at all. Another pitfall is using a learning rate that is too large or too small. If the learning rate is too large, the algorithm may overshoot the global minimum and fail to converge. If the learning rate is too small, the algorithm may converge slowly or become stuck in a local minimum.
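A common way to address the normalization pitfall mentioned above is to standardize each feature before training. The helper below is a hypothetical sketch; the function name and the sample array are invented for illustration.

```python
import numpy as np

# Standardize each feature to zero mean and unit variance before running
# gradient descent. X is any (samples x features) array.

def standardize(X):
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    std[std == 0] = 1.0          # avoid division by zero for constant features
    return (X - mean) / std, mean, std

X = np.array([[1.0, 200.0], [2.0, 400.0], [3.0, 600.0]])
X_scaled, mean, std = standardize(X)
print(X_scaled.mean(axis=0), X_scaled.std(axis=0))  # ~0 and ~1 per feature
```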
There are a few different strategies for choosing an optimal learning rate. One simple strategy is to begin with a relatively large learning rate and decrease it gradually as the algorithm converges. Another strategy is to use a line search algorithm to find the learning rate that results in the fastest convergence.
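The “start large and decrease gradually” strategy mentioned above can be realized with a simple decay schedule. The inverse-time form and the constants below are illustrative choices, not recommendations from the article.

```python
# One simple realization of the "start large, decay gradually" strategy:
# an inverse-time decay schedule with illustrative constants.

def learning_rate(step, initial_rate=0.5, decay=0.01):
    return initial_rate / (1.0 + decay * step)

for step in (0, 10, 100, 1000):
    print(step, learning_rate(step))
# The rate shrinks from 0.5 toward 0 as training progresses,
# allowing large early steps and fine adjustments near the minimum.
```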
It’s also important to choose an appropriate stopping condition for the gradient descent algorithm. If the stopping condition is too strict, the algorithm may stop before it has converged to a satisfactory solution. If the stopping condition is too lax, the algorithm may continue past the point of optimal convergence and begin to overfit the data.
Adjusting these parameters can be tricky, but doing so can have a big impact on the performance of your machine learning models. By taking care to avoid these potential pitfalls, you can make sure that your gradient descent algorithms are running optimally and producing good results.
Case Studies of Deep Learning
Deep learning is a subset of machine learning that is concerned with algorithms inspired by the structure and function of the brain called artificial neural networks. Deep learning is usually used to refer to the use of multiple layers in artificial neural networks. These algorithms are used to learn complex patterns in data.
Deep learning algorithms have been applied to many different fields, including speech recognition, computer vision, natural language processing, and bioinformatics. In this article, we will take a closer look at two case studies of deep learning: image classification and object detection.
Image classification is the process of assigning a label to an image. For example, an image of a dog might be classified as “dog”, while an image of a cat might be classified as “cat”. Image classification is a supervised learning problem, which means that we need labelled data in order to train our models.
There are many different ways to approach the problem of image classification. One popular approach is to use a convolutional neural network (CNN). CNNs are well-suited for image classification because they are able to extract features from images that are invariant to translation and scaling. CNNs also have the ability to learn hierarchical representations of data, which is helpful for understanding complex images.
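As a rough illustration of the kind of CNN described above, here is a small classifier sketched with PyTorch, assuming that library is available; the layer sizes, the 32×32 RGB input, and the 10-class output are invented for the example.

```python
import torch
import torch.nn as nn

# A minimal convolutional classifier in the spirit described above.
# Layer sizes and the 10-class output are illustrative choices.

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)          # translation-tolerant feature maps
        x = x.flatten(start_dim=1)    # (batch, 32*8*8) for 32x32 inputs
        return self.classifier(x)     # per-class scores

logits = SmallCNN()(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```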
Object detection is the process of identifying objects in images or videos. This is usually done by drawing bounding boxes around each object in an image. For example, if we wanted to detect people in an image, we would need to draw bounding boxes around each person in the image. Object detection is a more difficult problem than image classification, as it requires not only identifying each object in an image but also localizing it within the image.
The Future of Deep Learning
Deep learning is a field of machine learning that is based on artificial neural networks. These networks are able to learn complex tasks by taking advantage of the large amount of data that is available. The gradient descent algorithm is a key part of deep learning, and it is what allows these networks to learn so effectively.
The gradient descent algorithm works by minimizing a cost function. This cost function measures how well the network is doing at performing a task, and the algorithm tries to find the set of weights that will minimize this cost function. This process is repeated for each training example, and the weights are updated accordingly.
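One common reading of “repeated for each training example” is stochastic gradient descent, where every example triggers its own small update. The sketch below fits a single weight this way on synthetic data; the data and learning rate are illustrative.

```python
import numpy as np

# Per-example (stochastic) updates: each training example produces its own
# gradient step. Data and hyperparameters are illustrative.

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 1))
y = 4.0 * X[:, 0] + 0.1 * rng.standard_normal(200)

w = 0.0
alpha = 0.05
for epoch in range(20):
    for i in rng.permutation(len(X)):          # visit examples in random order
        error = w * X[i, 0] - y[i]
        w -= alpha * error * X[i, 0]           # gradient of (1/2)*error^2 w.r.t. w

print(w)  # approaches 4.0
```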
The gradient descent algorithm is very powerful, and it has been responsible for some of the most impressive achievements in deep learning. It is used in many different fields, including image recognition, natural language processing, and machine translation.
We have seen that the gradient descent algorithm is a powerful tool for minimizing cost functions. This technique can be used on a wide variety of problems, including those involving deep neural networks. The key to understanding gradient descent is to realize that it is an iterative process: at each step, we move in the direction that will minimize the cost function. Over time, this process will converge on a minimum value for the cost function.
In physics, the Lorentz transformation (or transformations) is named after the Dutch physicist Hendrik Lorentz. It was the result of attempts by Lorentz and others to explain how the speed of light was observed to be independent of the reference frame, and to understand the symmetries of the laws of electromagnetism. The Lorentz transformation is in accordance with special relativity, but was derived well before special relativity.
The transformations describe how measurements of space and time by two observers are related. They reflect the fact that observers moving at different velocities may measure different distances, elapsed times, and even different orderings of events. They supersede the Galilean transformation of Newtonian physics, which assumes an absolute space and time (see Galilean relativity). The Galilean transformation is a good approximation only at relative speeds much smaller than the speed of light.
The Lorentz transformation is a linear transformation. It may include a rotation of space; a rotation-free Lorentz transformation is called a Lorentz boost.
In the Minkowski space, the Lorentz transformations preserve the spacetime interval between any two events. They describe only the transformations in which the spacetime event at the origin is left fixed, so they can be considered as a hyperbolic rotation of Minkowski space. The more general set of transformations that also includes translations is known as the Poincaré group.
Early in 1889, Oliver Heaviside had shown from Maxwell's equations that the electric field surrounding a spherical distribution of charge should cease to have spherical symmetry once the charge is in motion relative to the ether. FitzGerald then conjectured that Heaviside’s distortion result might be applied to a theory of intermolecular forces. Some months later, FitzGerald published the conjecture that bodies in motion are getting contracted, in order to explain the baffling outcome of the 1887 ether-wind experiment of Michelson and Morley. In 1892, Lorentz independently presented the same idea in a more detailed manner, which was subsequently called FitzGerald–Lorentz contraction hypothesis. Their explanation was widely known before 1905.
Lorentz (1892–1904) and Larmor (1897–1900), who believed in the luminiferous ether hypothesis, were also seeking the transformation under which Maxwell's equations were invariant when transformed from the ether to a moving frame. They extended the FitzGerald–Lorentz contraction hypothesis and found that the time coordinate has to be modified as well ("local time"). Henri Poincaré gave a physical interpretation of local time (to first order in v/c) as the consequence of clock synchronization under the assumption that the speed of light is constant in moving frames. Larmor is credited with being the first to understand the crucial time dilation property inherent in his equations.
In 1905, Poincaré was the first to recognize that the transformation has the properties of a mathematical group, and named it after Lorentz. Later in the same year Albert Einstein published what is now called special relativity, by deriving the Lorentz transformation under the assumptions of the principle of relativity and the constancy of the speed of light in any inertial reference frame, and by abandoning the mechanical aether.
Lorentz transformation for frames in standard configuration
Consider two observers O and O′, each using their own Cartesian coordinate system to measure space and time intervals. O uses (t, x, y, z) and O′ uses (t′, x′, y′, z′). Assume further that the coordinate systems are oriented so that, in 3 dimensions, the x-axis and the x′-axis are collinear, the y-axis is parallel to the y′-axis, and the z-axis parallel to the z′-axis. The relative velocity between the two observers is v along the common x-axis; O measures O′ to move at velocity v along the coincident xx′ axes, while O′ measures O to move at velocity −v along the coincident xx′ axes. Also assume that the origins of both coordinate systems are the same, that is, coincident times and positions. If all these hold, then the coordinate systems are said to be in standard configuration.
The inverse of a Lorentz transformation relates the coordinates the other way round: from the coordinates O′ measures (t′, x′, y′, z′) to the coordinates O measures (t, x, y, z), so t, x, y, z are expressed in terms of t′, x′, y′, z′. The mathematical form is nearly identical to the original transformation; the only differences are the negation of the uniform relative velocity (from v to −v) and the exchange of primed and unprimed quantities, because O′ moves at velocity v relative to O and, equivalently, O moves at velocity −v relative to O′. This symmetry makes it effortless to find the inverse transformation (carrying out the exchange and negation saves a lot of rote algebra); more fundamentally, it highlights that all physical laws should remain unchanged under a Lorentz transformation.
Below, the Lorentz transformations are called "boosts" in the stated directions.
Boost in the x-direction
$$t' = \gamma\left(t - \frac{vx}{c^2}\right), \qquad x' = \gamma\,(x - vt), \qquad y' = y, \qquad z' = z$$
where:
- v is the relative velocity between frames in the x-direction,
- c is the speed of light,
- $\gamma = \dfrac{1}{\sqrt{1 - v^2/c^2}}$ is the Lorentz factor (Greek lowercase gamma),
- $\beta = v/c$ (Greek lowercase beta), again for the x-direction.
The use of β and γ is standard throughout the literature, and they will be used throughout the remainder of this article unless otherwise stated. Since the above is a linear system of equations (more technically, a linear transformation), it can be written in matrix form:

$$\begin{pmatrix} ct' \\ x' \\ y' \\ z' \end{pmatrix} = \begin{pmatrix} \gamma & -\beta\gamma & 0 & 0 \\ -\beta\gamma & \gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} ct \\ x \\ y \\ z \end{pmatrix}$$
According to the principle of relativity, there is no privileged frame of reference, so the inverse transformation, from frame F′ to frame F, must be given by simply negating v:
where the value of γ remains unchanged.
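A quick numerical check of this statement: building the x-direction boost matrix from γ and β and multiplying it by the same matrix with v negated should give the identity. The sketch below does this in Python with an illustrative β = 0.6, in units where c = 1.

```python
import numpy as np

# Check that the boost with velocity -v inverts the boost with velocity +v
# (c = 1; beta = 0.6 is an illustrative value).

def boost_x(beta):
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return np.array([
        [ gamma,        -beta * gamma, 0.0, 0.0],
        [-beta * gamma,  gamma,        0.0, 0.0],
        [ 0.0,           0.0,          1.0, 0.0],
        [ 0.0,           0.0,          0.0, 1.0],
    ])

L = boost_x(0.6)
L_inv = boost_x(-0.6)
print(np.allclose(L_inv @ L, np.eye(4)))  # True: the inverse is the boost with -v
```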
Boost in the y or z directions
The above set of equations applies only to a boost in the x-direction. The standard configuration works equally well in the y or z direction instead of x, and so the results are similar.
For the y-direction:
where v and so β are now in the y-direction.
For the z-direction:
where v and so β are now in the z-direction.
The Lorentz transform for a boost in one of the above directions can be compactly written as a single matrix equation:
Boost in any direction
Vector form
For a boost in an arbitrary direction with velocity v, that is, O observes O′ to move in direction v in the F coordinate frame, while O′ observes O to move in direction −v in the F′ coordinate frame, it is convenient to decompose the spatial vector r into components perpendicular and parallel to v:
Only the time coordinate and the component of r parallel to v are "warped" by the Lorentz factor, while the perpendicular component is left unchanged:
The parallel and perpendicular components can then be eliminated by substituting back into r′:
Since r‖ and v are parallel we have
where geometrically and algebraically:
- v/v is a dimensionless unit vector pointing in the same direction as r‖,
- r‖ = (r • v)/v is the projection of r into the direction of v,
Substituting for r‖ and factoring out v gives
This method, of eliminating parallel and perpendicular components, can be applied to any Lorentz transformation written in parallel-perpendicular form.
Matrix forms
These equations can be expressed in block matrix form as
where β = v/c is the relative velocity as a fraction of the speed of light, and β is the magnitude of β:
More explicitly stated:
The transformation Λ can be written in the same form as before,
which has the structure:
and the components deduced from above are Λ00 = γ, Λ0i = Λi0 = −γβi, and Λij = δij + (γ − 1)βiβj/β²,
where δij is the Kronecker delta, and by convention: Latin letters for indices take the values 1, 2, 3, for spatial components of a 4-vector (Greek indices take values 0, 1, 2, 3 for time and space components).
Note that this transformation is only the "boost," i.e., a transformation between two frames whose x, y, and z axis are parallel and whose spacetime origins coincide. The most general proper Lorentz transformation also contains a rotation of the three axes, because the composition of two boosts is not a pure boost but is a boost followed by a rotation. The rotation gives rise to Thomas precession. The boost is given by a symmetric matrix, but the general Lorentz transformation matrix need not be symmetric.
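The claim that the composition of two boosts is not a pure boost can be checked numerically. Using the symmetric boost matrix with the components given above, the sketch below composes a boost along x with a boost along y and shows that the result is not symmetric, so it must also contain a rotation (the Thomas rotation). The speeds are illustrative and c = 1.

```python
import numpy as np

# A pure boost is represented by a symmetric matrix; the composition of an
# x-boost with a y-boost is not symmetric, hence it is not a pure boost.

def boost(beta_vec):
    b = np.asarray(beta_vec, dtype=float)
    beta = np.linalg.norm(b)
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    L = np.eye(4)
    L[0, 0] = gamma
    L[0, 1:] = -gamma * b                                   # time-space entries
    L[1:, 0] = -gamma * b
    L[1:, 1:] += (gamma - 1.0) * np.outer(b, b) / beta**2   # spatial block
    return L

composed = boost([0.5, 0.0, 0.0]) @ boost([0.0, 0.5, 0.0])
print(np.allclose(composed, composed.T))  # False: not symmetric, so not a pure boost
```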
Composition of two boosts
- B(v) is the 4 × 4 matrix that uses the components of v, i.e. v1, v2, v3 in the entries of the matrix, or rather the components of v/c in the representation that is used above,
- u ⊕ v is the relativistic velocity-addition,
- Gyr[u,v] (capital G) is the rotation arising from the composition. If the 3 × 3 matrix form of the rotation applied to spatial coordinates is given by gyr[u,v], then the 4 × 4 matrix rotation applied to 4-coordinates is given by:
- gyr (lower case g) is the gyrovector space abstraction of the gyroscopic Thomas precession, defined as an operator on a velocity w in terms of velocity addition:
- gyr[u, v]w = ⊖(u ⊕ v) ⊕ (u ⊕ (v ⊕ w)) for all w.
The composition of two Lorentz transformations L(u, U) and L(v, V) which include rotations U and V is given by:
Visualizing the transformations in Minkowski space
The yellow axes are the rest frame of an observer, while the blue axes correspond to the frame of a moving observer.
The red lines are world lines, a continuous sequence of events: straight for an object travelling at constant velocity, curved for an object accelerating. Worldlines of light form the boundary of the light cone.
The purple hyperbolae indicate this is a hyperbolic rotation, the hyperbolic angle ϕ is called rapidity (see below). The greater the relative speed between the reference frames, the more "warped" the axes become. The relative velocity cannot exceed c.
The black arrow is a displacement four-vector between two events (not necessarily on the same world line), showing that in a Lorentz boost, time dilation (fewer time intervals in the moving frame) and length contraction (shorter lengths in the moving frame) occur. The axes in the moving frame remain orthogonal in the Minkowski sense (even though they do not look so).
Then the Lorentz transformation in standard configuration is:
Hyperbolic expressions
From the above expressions for eφ and e−φ, namely eφ = γ(1 + β) and e−φ = γ(1 − β), it follows that γ = cosh φ and βγ = sinh φ, so that β = tanh φ.
Hyperbolic rotation of coordinates
Substituting these expressions into the matrix form of the transformation, we have:

$$\begin{pmatrix} ct' \\ x' \end{pmatrix} = \begin{pmatrix} \cosh\varphi & -\sinh\varphi \\ -\sinh\varphi & \cosh\varphi \end{pmatrix} \begin{pmatrix} ct \\ x \end{pmatrix}, \qquad y' = y, \quad z' = z$$
Thus, the Lorentz transformation can be seen as a hyperbolic rotation of coordinates in Minkowski space, where the parameter ϕ represents the hyperbolic angle of rotation, often referred to as rapidity. This transformation is sometimes illustrated with a Minkowski diagram, as displayed above.
Transformation of other physical quantities
or in tensor index notation:
in which the primed indices denote indices of Z in the primed frame.
where Λ⁻¹ denotes the inverse matrix of Λ.
Special relativity
The crucial insight of Einstein's clock-setting method is the idea that time is relative. In essence, each observer's frame of reference is associated with a unique set of clocks, the result being that time as measured at a given location passes at different rates for different observers. This was a direct result of the Lorentz transformations and is called time dilation. We can also see clearly from the Lorentz "local time" transformation that the relativity of simultaneity and the relativity of length (length contraction) are likewise consequences of that clock-setting hypothesis.
Transformation of the electromagnetic field
Lorentz transformations can also be used to prove that magnetic and electric fields are simply different aspects of the same force — the electromagnetic force, as a consequence of relative motion between electric charges and observers. The fact that the electromagnetic field shows relativistic effects becomes clear by carrying out a simple thought experiment:
- Consider an observer measuring a charge at rest in a reference frame F. The observer will detect a static electric field. As the charge is stationary in this frame, there is no electric current, so the observer will not observe any magnetic field.
- Consider another observer in frame F′ moving at relative velocity v (relative to F and the charge). This observer will see a different electric field because the charge is moving at velocity −v in their rest frame. Further, in frame F′ the moving charge constitutes an electric current, and thus the observer in frame F′ will also see a magnetic field.
This shows that the Lorentz transformation also applies to electromagnetic field quantities when changing the frame of reference, given below in vector form.
The correspondence principle
The correspondence limit is usually stated mathematically as: as v → 0, c → ∞. In words: as velocity approaches 0, the speed of light (seems to) approach infinity. Hence, it is sometimes said that nonrelativistic physics is a physics of "instantaneous action at a distance".
Spacetime interval
In a given coordinate system xμ, if two events A and B are separated by
the spacetime interval between them is given by
This can be written in another form using the Minkowski metric. In this coordinate system,
Then, we can write
or, using the Einstein summation convention,
Now suppose that we make a coordinate transformation xμ → x′ μ. Then, the interval in this coordinate system is given by
It is a result of special relativity that the interval is an invariant. That is, s2 = s′ 2. For this to hold, it can be shown that it is necessary (but not sufficient) for the coordinate transformation to be of the form
Here, Cμ is a constant vector and Λμν a constant matrix, where we require that $\eta_{\mu\nu}\Lambda^{\mu}{}_{\alpha}\Lambda^{\nu}{}_{\beta} = \eta_{\alpha\beta}$.
Such a transformation is called a Poincaré transformation or an inhomogeneous Lorentz transformation. The Ca represents a spacetime translation. When Ca = 0, the transformation is called an homogeneous Lorentz transformation, or simply a Lorentz transformation.
Taking the determinant of this defining relation gives (det Λ)² = 1, so det Λ = ±1.
The cases are:
- Proper Lorentz transformations have det(Λμν) = +1, and form a subgroup called the special orthogonal group SO(1,3).
- Improper Lorentz transformations have det(Λμν) = −1; they do not form a subgroup, as the product of any two improper Lorentz transformations is a proper Lorentz transformation.
From the above definition of Λ it can be shown that (Λ00)2 ≥ 1, so either Λ00 ≥ 1 or Λ00 ≤ −1, called orthochronous and non-orthochronous respectively. An important subgroup of the proper Lorentz transformations are the proper orthochronous Lorentz transformations, which consist purely of boosts and rotations. Any Lorentz transform can be written as a proper orthochronous transformation together with one or both of the two discrete transformations: space inversion P and time reversal T, whose non-zero elements are P = diag(1, −1, −1, −1) and T = diag(−1, 1, 1, 1).
The set of Poincaré transformations satisfies the properties of a group and is called the Poincaré group. Under the Erlangen program, Minkowski space can be viewed as the geometry defined by the Poincaré group, which combines Lorentz transformations with translations. In a similar way, the set of all Lorentz transformations forms a group, called the Lorentz group.
A quantity invariant under Lorentz transformations is known as a Lorentz scalar.
The usual treatment (e.g., Einstein's original work) is based on the invariance of the speed of light. However, this is not necessarily the starting point: indeed (as is exposed, for example, in the second volume of the Course of Theoretical Physics by Landau and Lifshitz), what is really at stake is the locality of interactions: one supposes that the influence that one particle, say, exerts on another can not be transmitted instantaneously. Hence, there exists a theoretical maximal speed of information transmission which must be invariant, and it turns out that this speed coincides with the speed of light in vacuum. The need for locality in physical theories was already noted by Newton (see Koestler's The Sleepwalkers), who considered the notion of an action at a distance "philosophically absurd" and believed that gravity must be transmitted by an agent (such as an interstellar aether) which obeys certain physical laws.
Michelson and Morley in 1887 designed an experiment, employing an interferometer and a half-silvered mirror, that was accurate enough to detect aether flow. The mirror system reflected the light back into the interferometer. If there were an aether drift, it would produce a phase shift and a change in the interference that would be detected. However, no phase shift was ever found. The negative outcome of the Michelson–Morley experiment left the concept of aether (or its drift) undermined. There was consequent perplexity as to why light evidently behaves like a wave, without any detectable medium through which wave activity might propagate.
In a 1964 paper, Erik Christopher Zeeman showed that the causality preserving property, a condition that is weaker in a mathematical sense than the invariance of the speed of light, is enough to assure that the coordinate transformations are the Lorentz transformations.
From physical principles
The problem is usually restricted to two dimensions by using a velocity along the x axis such that the y and z coordinates do not intervene. The following derivation is similar to Einstein's. As in the Galilean transformation, the Lorentz transformation is linear, since the relative velocity of the reference frames is constant as a vector; otherwise, inertial forces would appear. Such frames are called inertial or Galilean reference frames. According to relativity, no Galilean reference frame is privileged. Another condition is that the speed of light must be independent of the reference frame, and in particular independent of the velocity of the light source.
Galilean and Einstein's relativity
- Galilean reference frames
In classical kinematics, the total displacement x in the R frame is the sum of the displacement x′ measured in frame R′ and the distance vt between the two origins. If v is the relative velocity of R′ with respect to R, the transformation is x = x′ + vt, or x′ = x − vt. This relationship is linear for constant v, that is, when R and R′ are Galilean frames of reference.
In Einstein's relativity, the main difference from Galilean relativity is that space and time coordinates are intertwined, and in different inertial frames t ≠ t′.
Since space is assumed to be homogeneous, the transformation must be linear. The most general linear relationship is obtained with four constant coefficients, A, B, γ, and b:
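The particular placement of the four coefficients below is inferred from the Galilean limit quoted in the next sentence and from the later identification of A and B as the coefficients of the time equation:

$$ x' = \gamma x + b t, \qquad t' = B t + A x. $$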
The Lorentz transformation becomes the Galilean transformation when γ = B = 1, b = −v and A = 0.
An object at rest in the R′ frame at position x′ = 0 moves with constant velocity v in the R frame. Hence the transformation must yield x′ = 0 if x = vt. Therefore, b = −γv and the first equation becomes x′ = γ(x − vt).
- Principle of relativity
According to the principle of relativity, there is no privileged Galilean frame of reference: therefore the inverse transformation for the position from frame R′ to frame R should have the same form as the original. To take advantage of this, we arrange by reversing the axes that R′ sees R moving towards positive x′ (i.e. just as R sees R′ moving towards positive x), so that we can write −x = γ(−x′ − vt′), which, when multiplied through by −1, becomes x = γ(x′ + vt′).
- The speed of light is constant
Since the speed of light is the same in all frames of reference, for the case of a light signal, the transformation must guarantee that t = x/c and t′ = x′/c.
Substituting for t and t′ in the preceding equations gives x′ = γ(1 − v/c)x and x = γ(1 + v/c)x′. Multiplying these two equations together gives xx′ = γ²(1 − v²/c²)xx′. At any time after t = t′ = 0, xx′ is not zero, so dividing both sides of the equation by xx′ results in γ = 1/√(1 − v²/c²),
which is called the "Lorentz factor".
- Transformation of time
The transformation equation for time can be easily obtained by considering the special case of a light signal, satisfying x = ct and x′ = ct′.
Substituting term by term into the earlier obtained equation for the spatial coordinate determines the transformation coefficients A and B, as given below.
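The standard result of this substitution, with A and B entering the time equation as t′ = Bt + Ax (the same convention as above), is

$$ A = -\frac{\gamma v}{c^{2}}, \qquad B = \gamma, \qquad \text{so that} \qquad t' = \gamma\!\left(t - \frac{vx}{c^{2}}\right). $$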
So A and B are the unique coefficients necessary to preserve the constancy of the speed of light in the primed system of coordinates.
Einstein's popular derivation
In his popular book Einstein derived the Lorentz transformation by arguing that there must be two non-zero coupling constants λ and μ such that x′ − ct′ = λ(x − ct) and x′ + ct′ = μ(x + ct), corresponding to light traveling along the positive and negative x-axis, respectively. For light, x = ct if and only if x′ = ct′. Adding and subtracting the two equations and defining γ = (λ + μ)/2 and b = (λ − μ)/2 gives x′ = γx − bct and ct′ = γct − bx. Substituting x′ = 0 corresponding to x = vt and noting that the relative velocity is v = bc/γ, this gives x′ = γ(x − vt) and t′ = γ(t − vx/c²).
The constant γ can be evaluated as was previously shown above.
The Lorentz transformations can also be derived by simple application of the special relativity postulates and using hyperbolic identities. It is sufficient to derive the result for a boost in one direction, since for an arbitrary direction the decomposition of the position vector into parallel and perpendicular components can be done afterwards, and the generalization follows from it, as outlined above.
- Relativity postulates
Start from the equations of the spherical wave front of a light pulse centred at the origin, x² + y² + z² = (ct)² and x′² + y′² + z′² = (ct′)², which take the same form in both frames because of the special relativity postulates. Next, consider relative motion along the x-axes of each frame, in the standard configuration above, so that y = y′ and z = z′, which simplifies the condition to (ct)² − x² = (ct′)² − x′².
Now assume that the transformations take the linear form x′ = Ax + Bct and ct′ = Cx + Dct,
where A, B, C, D are to be found. If they were non-linear, they would not take the same form for all observers, since fictitious forces (hence accelerations) would occur in one frame even if the velocity was constant in another, which is inconsistent with inertial frame transformations.
Substituting into the previous result and comparing the coefficients of x², (ct)² and x·ct gives A² − C² = 1, D² − B² = 1, and AB − CD = 0.
- Hyperbolic rotation
The formulae resemble the hyperbolic identity cosh²ϕ − sinh²ϕ = 1.
Introducing the rapidity parameter ϕ as a parametric hyperbolic angle allows the self-consistent identifications A = D = cosh ϕ and B = C = −sinh ϕ, where the signs after the square roots are chosen so that x and t increase. In terms of ϕ, the transformations solved in hyperbolic form are x′ = x cosh ϕ − ct sinh ϕ and ct′ = −x sinh ϕ + ct cosh ϕ.
If the signs were chosen differently, the position and time coordinates would need to be replaced by −x and/or −t so that x and t increase rather than decrease.
To find what ϕ actually is, note that in the standard configuration the origin of the primed frame, x′ = 0, is measured in the unprimed frame to be at x = vt (or, the equivalent and opposite way round, the origin of the unprimed frame is at x = 0 and in the primed frame at x′ = −vt); substituting into the hyperbolic form gives tanh ϕ = v/c, and manipulation of hyperbolic identities then leads to cosh ϕ = γ and sinh ϕ = βγ, where β = v/c,
so the transformations are also:
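In terms of β = v/c and the Lorentz factor γ, these identifications reduce the hyperbolic form to the familiar boost:

$$ x' = \gamma\,(x - vt), \qquad t' = \gamma\!\left(t - \frac{vx}{c^{2}}\right), \qquad \gamma = \cosh\varphi = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}. $$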
From group postulates
Following is a classical derivation based on group postulates and the isotropy of space (see, e.g., the references listed at the end of this article).
- Coordinate transformations as a group
The coordinate transformations between inertial frames form a group (called the proper Lorentz group) with the group operation being the composition of transformations (performing one transformation after another). Indeed the four group axioms are satisfied:
- Closure: the composition of two transformations is a transformation. Consider a composition of transformations from the inertial frame K to inertial frame K′ (denoted K → K′), and then from K′ to inertial frame K′′ (denoted K′ → K′′); the composite, [K → K′][K′ → K′′], is itself a transformation directly from the inertial frame K to the inertial frame K′′.
- Associativity: the result of ([K → K′][K′ → K′′])[K′′ → K′′′] and [K → K′]([K′ → K′′][K′′ → K′′′]) is the same, K → K′′′.
- Identity element: there is an identity element, a transformation K → K.
- Inverse element: for any transformation K → K′ there exists an inverse transformation K′ → K.
- Transformation matrices consistent with group axioms
Let us consider two inertial frames, K and K′, the latter moving with velocity v with respect to the former. By rotations and shifts we can choose the z and z′ axes along the relative velocity vector and also arrange that the events (t, z) = (0, 0) and (t′, z′) = (0, 0) coincide. Since the velocity boost is along the z (and z′) axes, nothing happens to the perpendicular coordinates and we can omit them for brevity. Now, since the transformation we are looking for connects two inertial frames, it has to transform a linear motion in (t, z) into a linear motion in (t′, z′) coordinates. Therefore it must be a linear transformation. The general form of such a linear transformation involves four coefficients α, β, γ, and δ, which are some as yet unknown functions of the relative velocity v; it is written out below.
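One way to arrange these coefficients (the placement chosen here is an assumption, but it is the one consistent with the steps that follow, in which both main-diagonal entries turn out to equal γ):

$$ \begin{pmatrix} t' \\ z' \end{pmatrix} = \begin{pmatrix} \gamma & \delta \\ \beta & \alpha \end{pmatrix}\begin{pmatrix} t \\ z \end{pmatrix}. $$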
Let us now consider the motion of the origin of the frame K′. In the K′ frame it has coordinates (t′, z′ = 0), while in the K frame it has coordinates (t, z = vt). These two points are connected by the transformation, so 0 = βt + αvt, from which we get β = −αv. Analogously, considering the motion of the origin of the frame K, which has coordinates (t, z = 0) in K and (t′, z′ = −vt′) in K′, we get βt = −vγt, from which β = −vγ. Combining these two gives α = γ, and the transformation matrix has simplified: its rows are now (γ, δ) and (−vγ, γ).
Now let us consider the group postulate inverse element. There are two ways we can go from the K′ coordinate system to the K coordinate system. The first is to apply the inverse of the transformation matrix to the K′ coordinates. The second is, considering that the K′ coordinate system is moving at a velocity v relative to the K coordinate system, the K coordinate system must be moving at a velocity −v relative to the K′ coordinate system; replacing v with −v in the transformation matrix gives the same matrix with γ(−v) and δ(−v) in place of γ(v) and δ(v), and +vγ(−v) in the lower-left entry. Now the function γ cannot depend upon the direction of v, because it is apparently the factor which defines the relativistic contraction and time dilation, and these two (in our isotropic world) cannot depend upon the direction of v. Thus, γ(−v) = γ(v), and comparing the two matrices we get γ² + vγδ = 1.
According to the closure group postulate a composition of two coordinate transformations is also a coordinate transformation, thus the product of two of our matrices should also be a matrix of the same form. Transforming K to K′ and from K′ to K′′ gives the following transformation matrix to go from K to K′′:
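Carrying out the multiplication with the simplified matrix above, and writing γ = γ(v), δ = δ(v), γ′ = γ(v′), δ′ = δ(v′) (an explicit computation under the coefficient arrangement assumed earlier):

$$ \begin{pmatrix} \gamma'\gamma - v\,\gamma\,\delta' & \;\gamma'\delta + \delta'\gamma \\ -(v + v')\,\gamma\gamma' & \;\gamma'\gamma - v'\,\gamma'\delta \end{pmatrix}. $$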
In the original transform matrix, the main diagonal elements are both equal to γ; hence, for the combined transform matrix above to be of the same form as the original transform matrix, the main diagonal elements must also be equal. Equating these elements and rearranging gives δ(v)/(v γ(v)) = δ(v′)/(v′ γ(v′)).
The denominator v γ(v) is nonzero for nonzero v, because γ(v) is always nonzero. If v = 0 we simply have the identity matrix, which coincides with putting v = 0 in the matrix obtained at the end of this derivation for the other values of v, so the final matrix is valid for all nonnegative v.
For nonzero v, this combination of functions must be a universal constant, one and the same for all inertial frames. Define this constant as δ(v)/(v γ(v)) = κ, where κ has the dimension of 1/v². Solving γ² + vγδ = 1 with δ(v) = κ v γ(v), we finally get γ(v) = 1/√(1 + κv²),
and thus the transformation matrix, consistent with the group axioms, is given by
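Collecting γ(v) = 1/√(1 + κv²), δ = κvγ and β = −vγ into the matrix form used above (same assumed arrangement of coefficients):

$$ \begin{pmatrix} t' \\ z' \end{pmatrix} = \frac{1}{\sqrt{1 + \kappa v^{2}}}\begin{pmatrix} 1 & \kappa v \\ -v & 1 \end{pmatrix}\begin{pmatrix} t \\ z \end{pmatrix}. $$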
If κ > 0, then there would be transformations (with κv² ≫ 1) which transform time into a spatial coordinate and vice versa. We exclude this on physical grounds, because time can only run in the positive direction. Thus two types of transformation matrices are consistent with group postulates:
- with the universal constant κ = 0, and
- with κ < 0.
- Galilean transformations
If κ = 0 then we get the Galilean-Newtonian kinematics with the Galilean transformation,
where time is absolute, t′ = t, and the relative velocity v of two inertial frames is not limited.
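Explicitly, setting κ = 0 in the matrix above gives

$$ t' = t, \qquad z' = z - vt, $$

the familiar Galilean boost along z.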
- Lorentz transformations
If κ < 0, we can set κ = −1/c², and we obtain the Lorentz transformation, where the speed of light c is a finite universal constant determining the highest possible relative velocity between inertial frames.
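Explicitly, setting κ = −1/c² in the matrix above reproduces the familiar Lorentz boost along z:

$$ t' = \gamma\!\left(t - \frac{v z}{c^{2}}\right), \qquad z' = \gamma\,(z - vt), \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}. $$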
If v ≪ c the Galilean transformation is a good approximation to the Lorentz transformation.
Only experiment can answer the question of which of the two possibilities, κ = 0 or κ < 0, is realised in our world. The experiments measuring the speed of light, first performed by the Danish astronomer Ole Rømer, showed that it is finite, and the Michelson–Morley experiment showed that it is an absolute speed; thus κ < 0.
See also
- Ricci calculus
- Electromagnetic field
- Galilean transformation
- Hyperbolic rotation
- Invariance mechanics
- Lorentz group
- Principle of relativity
- Velocity-addition formula
- Algebra of physical space
- Relativistic aberration
- Prandtl–Glauert transformation
References
- O'Connor, John J.; Robertson, Edmund F., A History of Special Relativity
- Brown, Harvey R., Michelson, FitzGerald and Lorentz: the Origins of Relativity Revisited
- Rothman, Tony (2006), "Lost in Einstein's Shadow", American Scientist 94 (2): 112f.
- Darrigol, Olivier (2005), "The Genesis of the theory of relativity", Séminaire Poincaré 1: 1–22
- Macrossan, Michael N. (1986), "A Note on Relativity Before Einstein", Brit. Journal Philos. Science 37: 232–34
- The reference is within the following paper: Poincaré, Henri (1905), "On the Dynamics of the Electron", Comptes rendus hebdomadaires des séances de l'Académie des sciences 140: 1504–1508
- Einstein, Albert (1905), "Zur Elektrodynamik bewegter Körper", Annalen der Physik 322 (10): 891–921, Bibcode:1905AnP...322..891E, doi:10.1002/andp.19053221004. See also: English translation.
- A. Halpern (1988). 3000 Solved Problems in Physics. Schaum Series. Mc Graw Hill. p. 688. ISBN 978-0-07-025734-4.
- University Physics – With Modern Physics (12th Edition), H.D. Young, R.A. Freedman (Original edition), Addison-Wesley (Pearson International), 1st Edition: 1949, 12th Edition: 2008, ISBN (10-) 0-321-50130-6, ISBN (13-) 978-0-321-50130-1
- Dynamics and Relativity, J.R. Forshaw, A.G. Smith, Manchester Physics Series, John Wiley & Sons Ltd, ISBN 978-0-470-01460-8
- http://hyperphysics.phy-astr.gsu.edu/hbase/hframe.html. HyperPhysics, web-based physics material hosted by Georgia State University, USA.
- Relativity DeMystified, D. McMahon, Mc Graw Hill (USA), 2006, ISBN 0-07-145545-0
- Gravitation, J.A. Wheeler, C. Misner, K.S. Thorne, W.H. Freeman & Co, 1973, ISBN 0-7167-0344-0
- Ungar, A. A. (1989). "The relativistic velocity composition paradox and the Thomas rotation". Foundations of Physics 19: 1385–1396. Bibcode:1989FoPh...19.1385U. doi:10.1007/BF00732759.
- Ungar, A. A. (2000). "The relativistic composite-velocity reciprocity principle". Foundations of Physics (Springer) 30 (2): 331–342. CiteSeerX: 10.1.1.35.1131.
- eq. (55), Thomas rotation and the parameterization of the Lorentz transformation group, AA Ungar – Foundations of Physics Letters, 1988
- M. Carroll, Sean (2004). Spacetime and Geometry: An Introduction to General Relativity (illustrated ed.). Addison Wesley. p. 22. ISBN 0-8053-8732-3.
- Einstein, Albert (1916). "Relativity: The Special and General Theory" (PDF). Retrieved 2012-01-23.
- Dynamics and Relativity, J.R. Forshaw, A.G. Smith, Wiley, 2009, ISBN 978 0 470 01460 8
- Electromagnetism (2nd Edition), I.S. Grant, W.R. Phillips, Manchester Physics, John Wiley & Sons, 2008, ISBN 9-780471-927129
- Introduction to Electrodynamics (3rd Edition), D.J. Griffiths, Pearson Education, Dorling Kindersley, 2007, ISBN 81-7758-293-3
- Weinberg, Steven (1972), Gravitation and Cosmology, New York, [NY.]: Wiley, ISBN 0-471-92567-5: (Section 2:1)
- Weinberg, Steven (1995), The quantum theory of fields (3 vol.), Cambridge, [England] ; New York, [NY.]: Cambridge University Press, ISBN 0-521-55001-7 : volume 1.
- Zeeman, Erik Christopher (1964), "Causality implies the Lorentz group", Journal of Mathematical Physics 5 (4): 490–493, Bibcode:1964JMP.....5..490Z, doi:10.1063/1.1704140
- Stauffer, Dietrich; Stanley, Harry Eugene (1995). From Newton to Mandelbrot: A Primer in Theoretical Physics (2nd enlarged ed.). Springer-Verlag. p. 80,81. ISBN 978-3-540-59191-7.
- An Introduction to Mechanics, D. Kleppner, R.J. Kolenkow, Cambridge University Press, 2010, ISBN 978-0-521-19821-9
Further reading
- Einstein, Albert (1961), Relativity: The Special and the General Theory, New York: Three Rivers Press (published 1995), ISBN 0-517-88441-0
- Ernst, A.; Hsu, J.-P. (2001), "First proposal of the universal speed of light by Voigt 1887", Chinese Journal of Physics 39 (3): 211–230, Bibcode:2001ChJPh..39..211E
- Thornton, Stephen T.; Marion, Jerry B. (2004), Classical dynamics of particles and systems (5th ed.), Belmont, [CA.]: Brooks/Cole, pp. 546–579, ISBN 0-534-40896-6
- Voigt, Woldemar (1887), "Über das Doppler'sche princip", Nachrichten von der Königlicher Gesellschaft den Wissenschaft zu Göttingen 2: 41–51
External links
- Wikisource has original works on the topic: Relativity
- Wikibooks has a book on the topic of special relativity
- Derivation of the Lorentz transformations. This web page contains a more detailed derivation of the Lorentz transformation with special emphasis on group properties.
- The Paradox of Special Relativity. This webpage poses a problem, the solution of which is the Lorentz transformation, which is presented graphically in its next page.
- Relativity – a chapter from an online textbook
- Special Relativity: The Lorentz Transformation, The Velocity Addition Law on Project PHYSNET
- Warp Special Relativity Simulator. A computer program demonstrating the Lorentz transformations on everyday objects.
- Animation clip visualizing the Lorentz transformation.
- Lorentz Frames Animated from John de Pillis. Online Flash animations of Galilean and Lorentz frames, various paradoxes, EM wave phenomena, etc.
Summary: Students take a hands-on look at the design of bridge piers (columns). First they brainstorm types of loads that might affect a Colorado bridge. Then they determine the maximum possible load for that scenario, and calculate the cross-sectional area of a column designed to support that load. Choosing from clay, foam or marshmallows, they create model columns and test their calculations.
The engineering design process begins by thoroughly understanding the problem to be solved. Once all aspects of the problem are understood, engineers explore many possible design solutions to determine the one that best meets all the objectives. Performing a load analysis helps engineers determine how to design a structure that is strong enough and what types of materials to use. Engineers also keep in mind how material choices may affect construction speed and cost.
Students should be familiar with bridge types, as introduced in the first lesson of the Bridges unit, as well as with the area of a rectangle and with compressive and tensile forces.
After this activity, students should be able to:
- Describe the process that an engineer uses to design a bridge, including determining loads, calculating the highest load, and calculating the amount of material to resist the loads.
- Use appropriate calculations to model pier (column) design in a bridge.
More Curriculum Like This
Students learn about the types of possible loads, how to calculate ultimate load combinations, and investigate the different sizes for the beams (girders) and columns (piers) of simple bridge design. Additionally, they learn the steps that engineers use to design bridges.
Students are presented with a brief history of bridges as they learn about the three main bridge types: beam, arch and suspension. They are introduced to two natural forces — tension and compression — common to all bridges and structures.
To introduce the two types of stress that materials undergo — compression and tension — students examine compressive and tensile forces and learn about bridges and skyscrapers. They construct their own building structure using marshmallows and spaghetti to see which structure can hold the most weigh...
Students explore how tension and compression forces act on three different bridge types. Using sponges, cardboard and string, they create models of beam, arch and suspension bridges and apply forces to understand how they disperse or transfer these loads.
Each TeachEngineering lesson or activity is correlated to one or more K-12 science, technology, engineering or math (STEM) educational standards.
All 100,000+ K-12 STEM standards covered in TeachEngineering are collected, maintained and packaged by the Achievement Standards Network (ASN), a project of D2L (www.achievementstandards.org).
In the ASN, standards are hierarchically structured: first by source; e.g., by state; within source by type; e.g., science or mathematics; within type by subtype, then by grade, etc.
- Fluently divide multi-digit numbers using the standard algorithm. (Grade 6)
- Fluently add, subtract, multiply, and divide multi-digit decimals using the standard algorithm for each operation. (Grade 6)
- Solve real-world and mathematical problems involving area, volume and surface area of two- and three-dimensional objects composed of triangles, quadrilaterals, polygons, cubes, and right prisms. (Grade 7)
- Manufacturing systems use mechanical processes that change the form of materials through the processes of separating, forming, combining, and conditioning them. (Grades 6 - 8)
- The selection of designs for structures is based on factors such as building laws and codes, style, convenience, cost, climate, and function. (Grades 6 - 8)
Each group needs:
- Pier (Column) Design Worksheet
- measuring stick or ruler with 1/16-in (1-mm) marks
- weights for measuring, totaling at least 7 lbs (3 kg), such as weights, books or a coffee can filled with coins or rocks
- (optional) sculpting tool, such as a metal spoon or scissors to cut clay, foam and marshmallows
For the entire class to share:
- modeling clay pieces, ~ 2 in x 2 in x 3 in tall (~5 cm x 5 cm x 7.5 cm)
- 1 bag large-sized marshmallows
- high-density foam, about 1 ft² (9 dm²) and 1–2 inches (2.5–5 cm) thick; available at fabric and craft stores
- scale, to weigh books
- scrap paper and tape, to note book weights on each book
- tape, to hold marshmallows together
- (optional) wax paper or plastic wrap, to keep clay moist before use
- (optional) toothpicks (if students redesign using "reinforced" materials)
Bridges are essential components of our communities, cities and roadways. What would happen to the normal course of your life if one day all the bridges over rivers and water, and all the bridges that enable our highway system suddenly collapsed because they could no longer withstand the forces of the load placed on them every day? (Collect responses from students.) Would it be a problem for people in Japan if the 3.7 km Sky Gate Bridge that connects Japan's Kansai International Airport in Osaka Bay to the mainland failed? What are the real-world consequences of un-planned-for compressive or tensile forces? What happens to a structure or a bridge if the forces on it are too much for the design and materials to withstand? (Show students photographs in the attached Failed Bridges Images PowerPoint presentation, or from an Internet image search using keywords such as: bridge, highway, failure, collapse, damage.) Engineers do not want to be put in the position of having failed or collapsed bridges, although sometimes it happens when the materials of bridges deteriorate or are not maintained over the years, or they become used for more load than originally planned.
What are some examples of loads that a bridge might need to withstand? (Take suggestions from students. Possible answers: Vehicles, people, snow, rain, wind, the weight of the bridge and its railings and signs, etc.) Engineers would organize your answers into three main types of loads: dead loads, live loads and environmental loads. Dead loads include the weight of the bridge itself plus any other object permanently fixed to the bridge, such as highway signs, guardrails or a concrete road surface. Live loads are temporary loads that act on a bridge, such as cars, trucks, trains or pedestrians. Lastly, environmental loads are temporary loads that act on a bridge and that are due to weather or other environmental influences, such as wind from hurricanes, tornadoes or high gusts; snow; and earthquakes. Rainwater collecting on the bridge might also be a factor if proper drainage is not provided. What if more than one of these types of loads happened to the bridge at the same time? Engineers design bridges for the highest possible combination of these loads at one time to determine the "ultimate load."
So, how does an engineer figure out how big to make the bridge parts, like girders (beams) and piers (columns), so they can withstand the combined loads that might be placed on them over a long lifespan? If you were an engineer, how would you go about designing a bridge to make sure it was safe? Engineers have several things to consider before they can create a final design. Let's review these.
First, engineers must understand the problem completely. To do this, they ask a lot of questions, such as how strong the bridge needs to be, and what materials can they use. Next, engineers determine what types of loads or forces they expect the bridge to carry. These include all dead, live and possible environmental loads. The next step is to determine if these loads can occur at the same time and what combination of loads provides the highest possible force (stress) on the bridge. For example, a train crossing a bridge and an earthquake in the vicinity of the bridge could occur at the same time. However, many vehicles crossing a bridge and a tornado passing close to the bridge probably would not occur at the same time. After having calculated the largest possible force from all the load combinations, engineers use mathematical equations to calculate the amount of material required to support the loads in that design. Then, engineers brainstorm different bridge design ideas that would accommodate the anticipated loads and amount of material they calculated. They can also split their design into smaller parts and work on the design criteria for all the different components of the bridge.
Engineers are continually refining the materials used to make the members (beams, piers, columns, girders) of a bridge, and testing them so we construct safe and dependable bridges. Today you are engineers working for a structural design company in Colorado. Your team will design a model of a bridge pier (column) by performing some initial load calculations and choosing from one of several materials (clay, marshmallows and foam) to test those calculations.
beam: A long, rigid, horizontal support member of a structure.
beam bridge: A bridge that consists of beams supported by columns (piers, towers).
compression: A pushing force that tends to shorten objects.
compressive strength: The amount of compressive stress that a material can resist before failing.
cross-sectional area: A "slice" or top-view of a shape (such as a girder or pier).
engineer: A person who applies her/his understanding of science and mathematics to creating things for the benefit of humanity and our world.
force: A push or pull on an object, such as compression or tension.
girders: The "beams" of a bridge; usually horizontal members.
load: Any of the forces that a structure is calculated to oppose, comprising any unmoving and unvarying force (dead load), any load from wind or earthquake, and any other moving or temporary force (live load).
member: An individual angle, beam, plate or built piece intended to become an integral part of an assembled frame or structure.
model: (noun) A representation of something, sometimes on a smaller scale. (verb) To make or construct something to help visualize or learn about something else.
piers: The "columns" of a bridge; usually vertical members.
tensile strength: The amount of tensile stress that a material can resist before failing.
tension: A pulling or stretching force that tends to lengthen objects.
Before the Activity
- Gather materials and make copies of the Pier (Columns) Design Worksheet, one per team.
- Place materials available for the columns (clay, marshmallows, foam) on a table for students to see.
- Weigh the books, enough for 7 lbs per team, and tape to each book its weight on a piece of paper. Be as accurate as possible when weighing the books.
- Divide the class into teams of two or three students each.
With the Students
- Give the students the following scenario: You are engineers working for a structural design company in Colorado. The state of Colorado needs a new transportation bridge to serve as an overpass connecting a highway to a small mining town business district. They want you to design a model of the bridge by performing some initial calculations on a pier (column), and choosing one of several materials (clay, marshmallows and foam) to test those calculations.
- Hand out worksheets. Ask students to consider a bridge design for this scenario and brainstorm a list of questions they would need answered before designing the bridge. (For example: What vehicles would be crossing the bridge? How often? How wide is the highway? What is the weather like?) Have students write their questions on their worksheet; write a few of their ideas on the board.
- Next, have student teams make lists of the possible loads that their bridges must withstand. The worksheet provides hypothetical values for each of these loads. Students will later construct a pier (column) able to resist (hold, support) that load (simulated with books).
- Have students calculate the maximum load (ultimate load) that their bridge must be able to hold and record their answers on their worksheets. (We are looking at the load over one pier.)
- Have each team work together to calculate the required cross-sectional area needed for their columns to support their predetermined maximum load (up to 7 lbs). On the worksheet, they are given the following compressive strength values: clay = 7 lb/in2, large marshmallows = 5 lb/in2, foam = 3 lb/in2. With this information, have them solve to find the cross-section area of the column using the equation, Area = Force (max load) ÷ Fy (compressive strength). Remind students to show their work on their worksheets and record their answers.
- Next, have each team calculate the length of each side of the cross-sectional area for their pier. The pier/column works best if both sides of the cross-section are approximately equal (a square shape), so students can determine the side length by using their calculators to find the square root of the area; for marshmallows, use the calculated side length as the diameter. Have students confirm this calculation by multiplying the lengths of the two sides to get back the cross-sectional area, which must be equal to or greater than the required area calculated above. Remind students to show their work and record their answers. (A short calculation sketch covering these two calculation steps follows this list.)
- Next, have teams gather the amount of material they need to create a model of their column that has the cross sectional area they just calculated and a height of 3 inches. Have students shape their columns to match these dimensions.
- Next, have each team sketch a design of their column. Review the force concepts of compression and tension. Which forces do they think will affect their column? (Compressive forces press down on the top of the pier/column.)
- Next, have students test their model piers to see if they can support the predetermined load. Using pre-weighed and labeled books (or some other weight system), have students choose enough books to equal the ultimate load. (We are looking at the load over one pier.) Starting with the lightest books first, place one book at a time on top of the column. Remind students to record their book weights and pier measurements and observations on their worksheets. Keep adding books until the ultimate load is placed on top of the column. Is the pier still standing? Is it shrinking? Fast or slow? (Some columns collapse or fall over right away, such as foam. Those that remain standing usually compress. Some compress quickly down to an inch tall; others, such as clay, compress more slowly and stop at about 2 inches tall.)
- Lastly, as a class, compare and discuss each team's pier/column test results, and review worksheet answers. Have students analyze and discuss the performance of their columns, and describe on their worksheets what they could do to improve their design. Conduct the post-activity assessment activities described in the Assessment section.
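For teachers who want to double-check the worksheet arithmetic quickly, the following short Python sketch reproduces the area and side-length calculations for the three materials. The compressive strengths and the 7 lb ultimate load are the hypothetical values given above; the function and variable names are illustrative only and are not part of the worksheet.

```python
# Illustrative check of the pier worksheet arithmetic (values from this activity).
import math

COMPRESSIVE_STRENGTH_PSI = {"clay": 7.0, "marshmallow": 5.0, "foam": 3.0}  # lb/in^2

def required_cross_section(max_load_lb, material):
    """Return the required cross-sectional area (in^2) and square side length (in)."""
    fy = COMPRESSIVE_STRENGTH_PSI[material]
    area = max_load_lb / fy      # Area = Force (max load) / Fy (compressive strength)
    side = math.sqrt(area)       # assume an approximately square cross-section
    return area, side

if __name__ == "__main__":
    ultimate_load = 7.0  # lb, the maximum load used in this activity
    for material in COMPRESSIVE_STRENGTH_PSI:
        area, side = required_cross_section(ultimate_load, material)
        print(f"{material:11s} area = {area:.2f} in^2, side = {side:.2f} in")
```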
Balancing may be tricky; it is okay to place a finger on top of book stacks (or weights) to keep them from falling over.
When loading books on top of the column, place the lightest books (or weights) first, followed by subsequently heavier books.
Be aware that not all columns work; this is primarily due to the lack of experimental analysis used to determine the compressive strength of the materials and the variability of the materials.
Brainstorming: Ask students to consider bridge design and brainstorm a thorough list of questions they would need answered before designing the bridge. (For example: What obstacle is the bridge crossing over? How wide does the bridge need to be? What vehicles would be crossing the bridge? How often? What else would be using the bridge? What is the weather like?) Write their ideas on the board.
Activity Embedded Assessment
Worksheet: Have each team follow along with the activity by recording measurements and observations, and showing calculations on the attached Pier (Column) Design Worksheet. Review their answers to gauge their mastery of the concepts.
Question/Answer: Pose the following questions to the class as a whole, or individually as homework:
- What effect does the height of a pier/column have on its strength? (Answer: As the column gets taller, it tends to buckle and not support as much weight. Imagine pushing down on a vertical yardstick; it bows out. If you push down on a shorter ruler, it does not bow out unless you push with much more force.)
- Now that you have experience in constructing a pier, how does the cross-section area of a column affect the amount of load it can carry? (Answer: The area of a pier is critical to the amount of load it can carry. If a certain area is calculated for a certain load and the actual constructed area is less than that area, the column may not be able to support the full force and it might fail.)
- Can you relate the importance of providing adequate cross-section area to actual construction? (Answer: In the real world — such as on a construction site — it can be challenging to get measurements exactly correct, so that they precisely meet the engineering plans [specifications]. So, engineers take this into consideration when designing the dimensions of an object and allow for some error or "tolerance." A 1/16-inch tolerance might be impossible to achieve in some construction conditions; so, a tolerance of ½-inch might be used instead. All engineering calculations must account for any tolerance allowed.)
- Would you want to walk across a bridge made of clay, marshmallows or foam? Why or why not? And, use some engineering terminology in your answer. (Answer: You probably would not want to walk across a bridge made of clay, marshmallows or foam. The properties of these items make them difficult materials to design with. Their low compressive and tensile strength would require very large dimensions to accommodate the same load that stronger materials could resist. It does not make sense to use these materials if stronger materials, such as concrete or steel, are available.)
Re-Engineering: Ask students how they might improve their pier design. Discuss the idea of reinforced concrete (when steel rods or mesh are embedded in the concrete to resist tensile forces). Have them sketch or test their ideas, using toothpicks to simulate steel reinforcement. With reinforced materials, how much more force can your pier resist before failure?
Designing More Members: Have students continue with their design by considering the size of a model girder (beam) for the bridge with a single load acting only at the center of the beam. Have each team calculate the required Zx of a 20-inch long rectangular clay beam with 10 lbs of force pushing down mid-span. Use the following equation: Zx = (force x length) ÷ (4 x Fy). Remind students to show their work and record their answers. Answer: Zx = (10 lbs x 20 inches) ÷ (4 x Fy lb/in²). Have them use the value for Fy from their column.
Then, have each team determine the cross-section dimension of "h" using: Zx = (w x h x h) ÷ 4, where w = 1 inch. Which dimension do you think has the most influence on the outcome of Zx? (Answer: The "h" term, which in this case is squared compared to "w," has the most influence. However, if a rectangular beam is turned on its side, then the "w" term would have the most influence. Therefore, in general the height of the member for a beam has the most influence on the strength of the beam.)
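As a quick check of these two equations, the short sketch below computes Zx and then h. This is a sketch only; the Fy value used is the clay figure from the pier worksheet and the names are our own.

```python
# Illustrative check of the beam (girder) extension calculation.
import math

def required_Zx(force_lb, length_in, fy_psi):
    """Zx = (force x length) / (4 x Fy), per the activity's simplified beam equation."""
    return (force_lb * length_in) / (4.0 * fy_psi)

def beam_height(zx, width_in=1.0):
    """Solve Zx = (w x h x h) / 4 for h, with w fixed at 1 inch by default."""
    return math.sqrt(4.0 * zx / width_in)

if __name__ == "__main__":
    zx = required_Zx(force_lb=10.0, length_in=20.0, fy_psi=7.0)  # Fy for clay
    print(f"Required Zx = {zx:.2f} in^3, h = {beam_height(zx):.2f} in (with w = 1 in)")
```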
Real-Life Bridges: Have students investigate the condition of bridges in their community. Assign research topics and have students report their findings to the class. What type of regular maintenance is required? What are results from the most recent inspections? Investigate the causes of any known bridge failures, such as the I-35W interstate bridge in Minneapolis, MN, that collapsed into the Mississippi River on August 1, 2007, killing 13 people. Find images that show what happens to bridge members that failed.
- For lower grades, conduct the activity steps in teams, but perform all worksheet calculations together, as a class.
- For upper grades, have each team perform the activity with a different predetermined load amount (5, 6, 7, 8, 9 and 10 lbs). After each group completes their design and testing, plot the successful results on the board with force on the vertical axis and area on the horizontal axis. Expect the graph to show a straight line, indicating a linear relationship.
Additional Multimedia Support
Watch a four-minute narrated film clip of the wind-induced 1940 collapse of the "Galloping Gertie" Tacoma Narrows Bridge in Washington four months after it was built. See http://www.youtube.com/watch?v=3mclp9QmCGs
See many aerial and ground photographs of the 2007 I-35W bridge collapse at the Minnesota Department of Transportation's website at www.dot.state.mn.us/.
Dictionary.com. Lexico Publishing Group, LLC. Accessed October 23, 2007. (Source of some vocabulary definitions, with some adaptation) http://www.dictionary.com
Earthquake Hazards Program: Large Earthquakes in the United States. Last updated September 1, 2005. US Geological Survey. www.earthquake.usgs.gov/. Accessed October 23, 2007. (Damage photos)
Hibbeler, R.C. Mechanics of Materials, Third Edition. Prentice Hall: Upper Saddle River, NJ, 1997.
Contributors: Jonathan S. Goode; Joe Friedrichsen; Natalie Mach; Denali Lander; Chris Valenti; Denise W. Carlson; Malinda Schaefer Zarske
Copyright © 2006 by Regents of the University of Colorado.
Supporting Program: Integrated Teaching and Learning Program, College of Engineering, University of Colorado Boulder
The contents of this digital library curriculum were developed under a grant from the Fund for the Improvement of Postsecondary Education (FIPSE), U.S. Department of Education and National Science Foundation GK-12 grant no. 0338326. However, these contents do not necessarily represent the policies of the Department of Education or National Science Foundation, and you should not assume endorsement by the federal government.
Last modified: May 25, 2017
SATs revision: your KS2 SATs maths helper
KS2 maths SATs: understanding the test
At the end of Year 6 all children sit a KS2 maths SATs exam, which is made up of three papers:
- Paper 1: arithmetic (30 minutes)
- Paper 2: reasoning (40 minutes)
- Paper 3: reasoning (40 minutes)
The arithmetic paper consists of around 36 questions that are all number sentences involving the four operations (addition, subtraction, multiplication and division). Numbers used may be decimals with up to two places, or whole numbers with up to five digits. Children will be required to calculate with percentages and decimals.
The reasoning papers each consist of around 20 questions, each of which present a problem or puzzle that requires your child to apply their mathematical knowledge in a variety of contexts.
Using practice papers
From 2016, the format and content of SATs have been overhauled, and the 2016 official KS2 maths SATs papers are now available to download for home study. It may be a good idea to ask your child to do these papers unaided – maybe doing one paper every other day over the course of a week. You can then mark their papers using the mark scheme, which you can also download, and see where their strong and weak areas are.
You can also download past Y6 SATs papers (these will be in the pre-2016 SATs format, but are still useful for practice) and work through them with your child to help familiarise them with exam technique. Looking through the kinds of questions they will be asked will help you identify any areas of difficulty, as well as boosting your child's confidence by highlighting all the skills they've learned in Key Stage 2.
Going through the arithmetic paper with your child is an excellent way to find out if they are secure with their methods for addition, subtraction, multiplication and division, and if not, work to address their difficulties. Some of the questions in the arithmetic paper are very difficult, so don't force them on your child until you think they have all the basics. For example, they need to know how to find 10 per cent of an amount before they can possibly find 45 per cent.
When looking at your child's wrong answers in the reasoning papers, see if you can work out what skills they are lacking. For example, if they have misunderstood a two-step problem that requires multiplication and subtraction, where did they go wrong? If they got the multiplication wrong, they may need to brush up on their times tables. If they weren't sure which operations to use, they may need to practise a range of one-step problems before moving onto two-step problems. Think carefully about the skill they need to practise and then use the relevant TheSchoolRun worksheets to help them; you may want to use Year 4 or Year 5 worksheets, if appropriate.
Little and often is the best way to approach SATs revision; maybe you could work on two or three questions a day with your child so that they are not overloaded (read our parents' guide to using a KS2 SATs maths past paper for tips). Don't keep labouring over something they are finding too difficult; they will just get demoralised and you will put them off altogether.
Here are the main maths objectives children will need to know when sitting the SATs. Each one is followed by a link to a TheSchoolRun worksheet that will help your child practise the skill.
Confused by the maths terminology? Our primary-school maths glossary explains the vocabulary in plain English.
Number and place value
|Read, write, order and compare numbers up to 10,000,000 and determine the value of each digit||Read, write and order 7-digit numbers|
|Find the difference between a positive and negative integer||Differences between positive and negative integers|
|Round any number to a required degree of accuracy||Rounding decimals to one decimal place or the nearest whole number; Rounding to two decimal places|
Addition, subtraction, multiplication and division
|Multiply numbers up to four digits by a two-digit number using long multiplication||Multiplying using long multiplication|
|Divide numbers of up to four digits by a two-digit number using long division||Dividing numbers with the long division method|
|Divide numbers of up to four digits using short division||The short division method|
|Identify common factors, common multiples and prime numbers||Common factors, common multiples and prime numbers puzzles|
Fractions, decimals and percentages
|Use common factors to simplify fractions||Simplifying or reducing fractions|
|Compare and order fractions||Compare and order fractions|
|Add and subtract fractions with different denominators and mixed numbers||Fractions: addition and subtraction|
|Multiply simple pairs of proper fractions||Multiplying pairs of fractions|
|Multiply and divide numbers by 10, 100, and 1000 giving answers up to three decimal places||Multiplying and dividing numbers by 10, 100 and 1000 speed challenge|
|Multiply decimal numbers by whole numbers||Multiplying decimals using the grid method|
|Work out equivalence between simple fractions, decimals and percentages||Equivalent fractions, decimals and percentages memory game|
Ratio and proportion
|Solve problems involving ratio and proportion||Ratio problem-solving; Solving problems: ratio and proportion|
|Find percentages of amounts||Percentage problem|
|Solve problems involving similar shapes with a scale factor||Shapes and scale factors|
Algebra
|Work out simple equations||Introduction to algebra|
|Generate and describe linear number sequences||Linear number sequences explained|
|Find pairs of numbers that satisfy an equation with two unknowns||What could the two numbers be?|
Measurement
|Solve problems involving the calculation and conversion of units of measure, involving answers with up to three decimal places||Solving length problems; Solving weight problems; Solving capacity problems|
|Solve problems involving conversion of time||Converting measures time puzzles|
|Convert between miles and kilometres||Miles and kilometres conversions|
|Calculate the area of parallelograms and triangles||Calculating the area of parallelograms and triangles|
|Work out the volume of cubes and cuboids||Volume of cubes and cuboids|
Geometry: properties of shapes
|Draw 2D shapes using given dimensions and angles||Follow instructions to draw shapes|
|Recognise, describe and build simple 3D shapes, including making nets||Draw your own 3D shape net|
|Find unknown angles in triangles, quadrilaterals and regular polygons||Angles in a triangle; Finding unknown angles in quadrilaterals|
|Illustrate and name parts of a circle: radius, diameter and circumference||Parts of a circle|
|Calculate unknown angles that are around a point and on a straight line||Angles around a point|
Geometry: position and direction
|Describe positions on all four quadrants of the co-ordinates grid||Co-ordinates and quadrants|
|Draw simple shapes on the co-ordinate plane, then translate and reflect them||Translating and reflecting shapes on all four quadrants|
Statistics
|Interpret and construct pie charts and line graphs and use these to solve problems||Answering questions on a pie chart; Answering questions on a line graph; Displaying information as a pie chart; Constructing a line graph|
|Calculate and interpret the mean as an average||Calculating the mean average|
KS2 English SATs revision
For preparation and revision tips for Year 6 English SATs see our KS2 English SATs revision helper.