Descriptive statistics are ways of summarizing large sets of quantitative (numerical) information. If you have a large number of measurements, the best thing you can do is to make a graph with all the possible scores along the bottom (x axis), and the number of times you came across that score recorded vertically (y axis) in the form of a bar. But such a graph is just plain hard to do statistical analyses with, so we have other, more numerical ways of summarizing the data.

Here is a small set of data: the grades for 15 students. For our purposes, they range from 0 (failing) to 4 (an A), and go up in steps of .2. Here is the information in bar graph form:

Central tendency refers to the idea that there is one number that best summarizes the entire set of measurements, a number that is in some way "central" to the set.

The mode. The mode is the measurement that has the greatest frequency, the one you found the most of. Although it isn't used that much, it is useful when differences are rare or when the differences are non-numerical. The most typical example of something is usually the mode. The mode for our example is 3.2. It is the grade that occurs most often.

The median. The median is the number at which half your measurements are more than that number and half are less than that number. The median is actually a better measure of centrality than the mean if your data are skewed, meaning lopsided. If, for example, you have a dozen ordinary folks and one millionaire, the distribution of their wealth would be lopsided towards the ordinary people, and the millionaire would be an outlier, or highly deviant member of the group. The millionaire would influence the mean a great deal, making it seem like all the members of the group are doing quite well. The median would actually be closer to the mean of all the people other than the millionaire. The median for our example is 3.0. Half the people scored lower, and half higher (and one scored exactly 3.0).

The mean. The mean is just the average. It is the sum of all your measurements, divided by the number of measurements. This is the most used measure of central tendency, because of its mathematical qualities. It works best if the data is distributed very evenly across the range, or is distributed in the form of a normal or bell-shaped curve (see below). One interesting thing about the mean is that it represents the expected value if the distribution of measurements were random! Here is what the formula looks like: mean = ΣX / N, the sum of all the scores divided by the number of scores.

Dispersion refers to the idea that there is a second number which tells us how "spread out" all the measurements are from that central number.

The range. The range measures from the smallest measurement to the largest one. This is the simplest measure of statistical dispersion or "spread." The range for our example is 2.2, the distance from the lowest score, 1.8, to the highest, 4.0.

Interquartile range. A slightly more sophisticated measure is the interquartile range. If you divide the data into quartiles, meaning that one fourth of the measurements are in quartile 1, one fourth in 2, one fourth in 3, and one fourth in 4, you will get a number that divides 1 and 2 and a number that divides 3 and 4. You then measure the distance between those two numbers, which therefore contains half of the data. Note that the number between quartiles 2 and 3 is the median! The interquartile range for our example is .9, because the quartiles divide roughly at 2.45 and 3.35. The reason for the odd dividing lines is that there are 15 pieces of data, which, of course, cannot be neatly divided into quartiles!
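To make these calculations concrete, here is a minimal sketch in Python using only the standard library. The 15 grades themselves are not listed in the text, so the list below is only a hypothetical stand-in; the printed values will therefore differ slightly from the figures quoted in the example.

from statistics import mean, median, mode, quantiles

# Hypothetical stand-in for the 15 grades (the actual scores are not given in the text)
grades = [1.8, 2.2, 2.4, 2.6, 2.8, 2.8, 3.0, 3.0,
          3.2, 3.2, 3.2, 3.4, 3.6, 3.8, 4.0]

print("mode:  ", mode(grades))                 # the most frequent grade
print("median:", median(grades))               # the middle grade
print("mean:  ", round(mean(grades), 2))       # sum of the grades divided by their count

print("range: ", round(max(grades) - min(grades), 2))   # largest minus smallest
q1, q2, q3 = quantiles(grades, n=4)            # quartile cut points (q2 is the median)
print("IQR:   ", round(q3 - q1, 2))            # spread of the middle half of the data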
The standard deviation. The standard deviation is the "average" degree to which scores deviate from the mean. More precisely, you measure how far all your measurements are from the mean, square each one, add them all up, and divide by the number of measurements. The result is called the variance. Take the square root of the variance, and you have the standard deviation. Like the mean, it is the "expected value" of how far the scores deviate from the mean. Here is what the formula looks like: standard deviation = the square root of Σ(X − mean)² / N. So, subtract the mean from each score, square each difference, and sum: 5.1321. Then divide by 15 and take the square root, and you have the standard deviation for our example: .5849.... One standard deviation above the mean is at about 3.5; one standard deviation below is at about 2.3.

At its simplest, the central tendency and the measure of dispersion describe a rectangle that is a summary of the set of data. On a more sophisticated level, these measures describe a curve, such as the normal curve, that contains the data most efficiently. This curve, also called the bell-shaped curve, represents a distribution that reflects certain probabilistic events when extended to an infinite number of measurements. It is an idealized version of what happens in many sets of measurements: Most measurements fall in the middle, and fewer fall at points farther away from the middle. A simple example is height: Very few people are below 3 feet tall; very few are over 8 feet tall; most of us are somewhere between 5 and 6. The same applies to weight, IQs, and SATs!

In the normal curve, the mean, median, and mode are all the same. One standard deviation below the mean contains 34.1% of the measures, as does one standard deviation above the mean. From one to two below contains 13.6%, as does from one to two above. From two to three standard deviations contains 2.1% on each end. Another way to look at it: Between one standard deviation below and above, we have 68% of the data; from two below to two above, we have 95%; from three below to three above, we have 99.7%.

Because of its mathematical properties, especially its close ties to probability theory, the normal curve is often used in statistics, with the assumption that the mean and standard deviation of a set of measurements define the distribution. Hopefully, it is obvious that this is not at all true in a great many cases. The best representation of your measurements is a diagram which includes all the measurements, not just their mean and standard deviation! Our example above is a case in point: a normal curve with a mean of 2.92 and a standard deviation of .58 is quite different from the pattern of the original data. A good real-life example is IQ and intelligence: IQ tests are intentionally scored in such a way that they generate a normal curve, and because IQ tests are what we use to measure intelligence, we often assume that intelligence is normally distributed, which is not at all necessarily true!
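Returning to the standard deviation, that arithmetic can likewise be written out in a few lines of Python. As before, the grades list is only a hypothetical stand-in for the 15 scores, so the printed sum of squares and standard deviation will not exactly match the 5.1321 and .5849 quoted in the example.

import math

# Hypothetical stand-in for the 15 grades (the actual scores are not given in the text)
grades = [1.8, 2.2, 2.4, 2.6, 2.8, 2.8, 3.0, 3.0,
          3.2, 3.2, 3.2, 3.4, 3.6, 3.8, 4.0]

m = sum(grades) / len(grades)                  # the mean
squared_devs = [(g - m) ** 2 for g in grades]  # squared deviations from the mean
variance = sum(squared_devs) / len(grades)     # average squared deviation
std_dev = math.sqrt(variance)                  # square root of the variance

print("sum of squared deviations:", round(sum(squared_devs), 4))
print("variance:", round(variance, 4))
print("standard deviation:", round(std_dev, 4))
print("one SD below / above the mean:", round(m - std_dev, 2), "/", round(m + std_dev, 2))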
HTML: Introduction and Basics

Tags and Attributes. An HTML document is based on the notion of tags. A tag is a piece of text inside angle brackets (<>). Tags typically have a beginning and an end, and usually contain some sort of text inside them. For example, a paragraph is normally denoted like this: <p> This is my paragraph. </p> The <p> indicates the beginning of a paragraph. Text is then placed inside the tag, and the end of the paragraph is denoted by an end tag, which is similar to the start tag but with a slash (</p>). It is common to indent content in a multi-line tag, but it is also legal to place tags on the same line: <p>This is my paragraph.</p> Tags are sometimes enhanced by attributes, which are name-value pairs that modify the tag. For example, the <img> tag (used to embed an image into a page) usually includes the following attributes: <img src = "myPic.jpg" alt = "this is my picture" /> The src attribute describes where the image file can be found, and the alt attribute describes alternate text that is displayed if the image is unavailable. Tags can be (and frequently are) nested inside each other. Tags cannot overlap, so <a><b></a></b> is not legal, but <a><b></b></a> is fine.

HTML VS XHTML HTML has been around for some time. While it has done its job admirably, that job has expanded far more than anybody expected. Early HTML had very limited layout support. Browser manufacturers added many competing extensions and web developers came up with clever workarounds, but the result was a lack of standards and frustration for web developers. The latest web standards (XHTML and the emerging HTML 5.0 standard) go back to the original purpose of HTML: to describe the structure of the data only, and leave all formatting to CSS (Please see the DZone CSS Refcard Series). XHTML is nothing more than HTML code conforming to the stricter standards of XML. The same style guidelines are appropriate whether you write in HTML or XHTML (but they tend to be enforced in XHTML): - Use a doctype to describe the language (described below) - Write all code in lowercase letters - Encase all attribute values in double quotes - Each tag must have an end specified. This is normally done with an ending tag, but a special case allows for non-content tags. Most of the requirements of XHTML turn out to be good practice whether you write HTML or XHTML. I recommend using XHTML strict so you can validate your code and know it follows the strictest standards. XHTML has a number of flavors. The strict type is recommended, as it is the most up-to-date standard which will produce the most predictable results. You can also use a transitional type (which allows deprecated HTML tags) and a frameset type, which allows you to add frames. For most applications, the strict type is preferred. The following code can be copied and pasted to form the foundation of a basic web page: <html> <head> <title></title> </head> <body> </body> </html> The XHTML template is a bit more complex, so it’s common to keep a copy on your desktop for quick copy and paste work, or to define it as a starting template in your editor.
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html lang="en" dir="ltr" xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="content-type" content="text/xml; charset=utf-8" /> <title></title> </head> <body> </body> </html> The structure of your web pages is critical to the success of programs based on those pages, so use a validating tool to ensure you haven’t missed anything. |W3C||The most commonly used validator is online at http://validator.w3.org This free tool checks your page against the doctype you specify and ensures you are following the standards. This acts as a ‘spell-checker’ for your code and warns you if you made an error like forgetting to close a tag.| |HTML Tidy||There’s an outstanding free tool called HTML Tidy which not only checks your pages for validity, but also fixes most errors automatically. Download this tool at http://tidy.sourceforge.net/ or (better) use the HTML Validator extension to build Tidy into your browser.| |HTML Validator extension||The extension mechanism of Firefox makes it a critical tool for web developers. The HTML Validator extension is invaluable: it automatically checks any page you view in your browser against both the W3C validation engine and Tidy. It can instantly find errors, and repair them on the spot with Tidy. With this free extension available at http://users.skynet.be/mgueury/mozilla/, there’s no good reason not to validate your code.| USEFUL OPEN SOURCE TOOLS Some of the best tools for web development are available through the open source community at no cost at all. Consider these applications as part of your HTML toolkit: |Open Source Tool||Description| |Web Developer Toolbar||https://www.addons.mozilla.org/en-US/firefox/addon/60 This Firefox extension adds numerous debugging and web development tools to your browser.| |Firebug||https://addons.mozilla.org/en-US/firefox/addon/1843 is an add-on that adds full debugging capabilities to the browser. The Firebug Lite version even works with IE.| PAGE STRUCTURE ELEMENTS The following elements are part of every web page. |<html></html>||Surrounds the entire page| |<title></title>||Holds the page title normally displayed in the title bar and used in search results| |<body></body>||Contains the main body text. All parts of the page normally visible are in the body| KEY STRUCTURAL ELEMENTS Most pages contain the following key structural elements: |<h1></h1>||Heading 1||Reserved for strongest emphasis| |<h2></h2>||Heading 2||Secondary level heading. Headings go down to level 6, but <h1> through <h3> are most common| |<p></p>||Paragraph||Most of the body of a page should be enclosed in paragraphs| |<div></div>||Division||Similar to a paragraph, but normally marks a section of a page. Divs usually contain paragraphs| LISTS AND DATA Web pages frequently incorporate structured data, so HTML includes several useful list and table tags: |<ul></ul>||Unordered list||Normally these lists feature bullets (but that can be changed with CSS)| |<ol></ol>||Ordered list||These usually are numbered, but this can be changed with CSS| |<li></li>||List item||Used to describe a list item in an unordered list or an ordered list| |<dl></dl>||Definition list||Used for lists with name-value pairs| |<dt></dt>||Definition term||The name in a name-value pair. 
Used in definition lists| |<dd></dd>||Definition description||The value (or definition) of a name-value pair| |<table></table>||Table||Defines beginning and end of a table| |<tr></tr>||Table row||Defines a table row. A table normally consists of several <tr> pairs (one per row)| |<td></td>||Table data||Indicates data in a table cell. <td> tags occur within <tr> (which occur within <table>)| |<th></th>||Table heading||Indicates a table cell to be treated as a heading with special formatting| Standard List Types HTML supports three primary list types. Ordered lists and unordered lists are the most common. By default, ordered lists use numeric identifiers, and unordered lists use bullets. However, you can use the list-style-type CSS attribute to change the list marker to one of several types. <ol> <li>uno</li> <li>dos</li> <li>tres</li> </ol> Lists can be nested inside each other: <ul> <li>English <ol> <li>One</li> <li>Two</li> <li>Three</li> </ol> </li> <li>Spanish <ol> <li>uno</li> <li>dos</li> <li>tres</li> </ol> </li> </ul> The special definition list is used for name / value pairs. The definition term (dt) is a word or phrase that is used as the list marker, and the definition data is normally a paragraph: <h2>Types of list</h2> <dl> <dt>Unordered list</dt> <dd>Normally used for bulleted lists, where the order of data is not important.</dd> <dt>Ordered lists</dt> <dd>Normally use numbered items, for example a list of instructions where the order is significant.</dd> <dt>Definition list</dt> <dd>Used to describe a term and definition. Often a good alternative to a two-column table</dd> </dl> Use of tables Tables were used in the past to overcome the page-layout shortcomings of HTML. That use is now deprecated in favor of CSS-based layout. Use tables only as they were intended, to display tabular data. A table mainly consists of a series of table rows (tr). Each table row consists of a number of table data (td) elements. The table heading (th) element can be used to indicate a table cell should be marked as a heading. The rowspan and colspan attributes can be used to make a cell span more than one row or column. Each row of a table should have the same number of columns, and each column should have the same number of rows. Use of the span attribute may require adjustment to other rows or columns. <table border = "1"> <tr> <th> </th> <th>English</th> <th>Spanish</th> </tr> <tr> <th>1</th> <td>One</td> <td>Uno</td> </tr> <tr> <th>2</th> <td>Two</td> <td>Dos</td> </tr> </table> LINKS AND IMAGES Links and images are both used to incorporate external resources into a page. Both rely on URIs (Uniform Resource Identifiers), commonly referred to as URLs or addresses. The anchor tag is used to provide the basic web link: <a href = "http://www.google.com">link to Google</a> In this example, http://www.google.com is the site to be visited. The text “link to Google” will be highlighted as a link. The link tag is used primarily to pull in external CSS files: <link rel = "stylesheet" type = "text/css" href = "mySheet.css" /> The img tag is used to attach an image. Valid formats are .jpg, .png, and .gif. An image should always be accompanied by an alt attribute describing the contents of the image. <img src = "http://www.css-jquery-design.com/wp-content/uploads/2012/02/dhiraj1.jpg" alt = "Dhiraj kumar" /> Image formatting attributes (height, width, and align) are deprecated in favor of CSS. HTML / XHTML includes several specialty tags. These are used to describe special purpose text. 
They have default styling, but of course the styles can be modified with CSS. The q tag is intended to display a single-line quote: <q>Now is the time for all good men to come to the aid of their country</q> The q tag is an inline tag. If you need a block-level quote, use <blockquote>. The <pre> tag is used for pre-formatted text. It is sometimes used for code listings or ASCII art because it preserves carriage returns. Pre-formatted text is usually displayed in a fixed-width font. <pre> for i in range(10): print i </pre> The code tag is used to mark up pre-formatted text, especially code listings. It is very similar to pre. <code> while i < 10: i += 1 print i </code> The blockquote tag is used to mark multi-line quotes. Frequently it is set off with special fonts and indentation through CSS. It is (not surprisingly) a block-level tag. <blockquote> Quoth the raven: Nevermore </blockquote> The span tag is a vanilla inline tag. It has no particular formatting of its own. It is intended to be used with a class or ID when you want to apply style to an inline chunk of code. <span class = "highlight">This text will be highlighted.</span> The em tag is used for standard emphasis. By default, <em> italicizes text, but you can use CSS to make any other type of emphasis you wish. The strong tag represents strong emphasis. By default, it is bold, but you can modify the formatting with CSS. A number of tags are used to describe the structure of a form. Begin by looking over a basic form: <form action = ""> <fieldset> <legend>My form</legend> <label for = "txtName">Name</label> <input type = "text" id = "txtName" /> <button type = "button" onclick = "doSomething()"> Do something </button> </fieldset> </form> The <form></form> pair describes the form. In XHTML strict, you must indicate the form’s action property. This is typically the server-side program that will read the form. If there is no such program, you can set the action to an empty string (""). The method attribute is used to determine whether the data is sent through the get or post mechanism. Most form elements are inline tags, and must be encased in a block element. The fieldset is designed exactly for this purpose. Its default appearance draws a box around the form. You can have multiple fieldsets inside a single form. You can add a legend inside a fieldset. This describes the purpose of the fieldset. A label is a special inline element that describes a particular field. A label can be paired with an input element by putting that element’s ID in the label’s for attribute. This element allows a single line of text input: <input type = "text" id = "myText" name = "myText" /> Passwords display just like textboxes, except rather than showing the text as it is typed, an asterisk appears for each letter. Note that the data is not encoded in any meaningful way. Typing text into a password field is still entirely insecure. <input type = "password" id = "myPWD" /> Radio buttons are used in a group. Only one element of a radio group can be selected at a time. Give all members of a radio group the same name value to indicate they are part of a group. <input type = "radio" name = "radSize" value = "small" id = "radSmall" checked = "checked" /> <label for = "radSmall">Small</label> <input type = "radio" name = "radSize" value = "large" id = "radLarge" /> <label for = "radLarge">Large</label> Attaching a label to a radio button means the user can activate the button by clicking on the corresponding label. 
For best results, use the checked attribute to force one radio button to be the default. Checkboxes are much like radio buttons, but they are independent. Like radio buttons, they can be associated with a label. <input type = "checkbox" id = "chkFries" /> <label for = "chkFries">Would you like fries with that?</label> Hidden fields hold data that is not visible to the user (although it is still visible in the page source). They are primarily used to preserve state in server-side programs. <input type = "hidden" name = "txtHidden" value = "recipe for secret sauce" /> Note that the data is still not protected in any meaningful way. Buttons are used to signal user input. Buttons can be created through the input tag: <input type = "button" value = "launch the missiles" onclick = "launchMissiles()" /> <button type = "button" onclick = "launchMissiles()"> Launch the missiles </button> This second form is preferred because buttons often require different CSS styles than other input elements. This second form also allows an <img> tag to be placed inside the button, making the image act as the button. The reset button automatically resets all elements in its form to their default values. It doesn’t require any other attributes. <input type = "reset" /> <button type = "reset"> Reset </button> Select / option Drop-down lists can be created through the select / option mechanism. The select tag creates the overall structure, which is populated by option elements. <select id = "selColor"> <option value = "#000000">black</option> <option value = "#FF0000">red</option> <option value = "#FFFFFF">white</option> </select> The select has an id (for client-side code) or name (for server-side code) identifier. It contains a number of options. Each option has a value which will be returned to the program. The text between <option> and </option> is the value displayed to the user. In some cases (as in this example) the value displayed to the user is not the same as the value used by programs. You can also create a multi-line selection with the select and option tags: <select id = "selColor" size = "3" multiple = "multiple"> <option value = "#000000">black</option> <option value = "#FF0000">red</option> <option value = "#FFFFFF">white</option> </select> DEPRECATED FORMATTING TAGS Certain tags common in older forms of HTML are no longer recommended as CSS provides much better alternatives. The font tag was used to set font color, family (typeface) and size. Numerous CSS attributes replace this capability with much more flexible alternatives. See the CSS refcard for details. HTML code should indicate the level of emphasis rather than the particular stylistic implications. Italicizing should be done through CSS. The <em> tag represents emphasized text. It produces italic output unless the style is changed to something else. The <i> tag is no longer necessary and is not recommended. Add font-style: italic to the style of any element that should be italicized. Like italics, boldfacing is considered a style consideration. Use the <strong> tag to denote any text that should be strongly emphasized. By default, this will result in boldfacing the enclosed text. You can add bold emphasis to any style with the font-weight: bold attribute in CSS. In addition to the deprecated tags, there are also techniques which were once common in HTML that are no longer recommended. 
Frames have been used as a layout mechanism and as a technique for keeping one part of the page static while dynamically loading other parts of the page in separate frames. Use of frames has proven to cause major usability problems. Layout is better handled through CSS techniques, and dynamic page generation is frequently performed through server-side manipulation or AJAX. Before CSS became widespread, HTML did not have adequate page formatting support. Clever designers used tables to provide an adequate form of page layout. CSS provides a much more flexible and powerful form of layout than tables, and keeps the HTML code largely separated from the styling markup. Sometimes you need to display a special character in a web page. HTML has a set of special characters for exactly this purpose. Each of these entities begins with the ampersand (&) followed by a code and ends with a semicolon. |&nbsp;||Non-breaking space|| ||Adds white space| |&lt;||Less than||<||Used to display HTML code or mathematics| |&gt;||Greater than||>||Used to display HTML code or mathematics| |&amp;||Ampersand||&||If you’re not displaying an entity but really want the & symbol| |&reg;||Registered trademark||®||Registered trademark| Numerous other HTML entities are available and can be found in online resources like w3schools. HTML 5 / CSS3 PREVIEW New technologies are on the horizon. Most of the updated browsers now have support for significant new HTML 5 features, and CSS 3 is not far behind. While the following should still be considered experimental, they are likely to become very important tools in the next few years. Firefox 5+, Safari 4+, Chrome 5+, Opera 7+ (and a few other recent browsers) support the following new features: Audio and video tags Finally the browsers have direct support for audio and video without plugin technology. These tags work much like the img tag. <video src = "myVideo.ogg" autoplay> Your browser does not support the video tag. </video> <audio src = "myAudio.ogg" controls> Your browser does not support the audio tag. </audio> The HTML 5 standard currently supports Ogg Theora video, Ogg Vorbis audio, and WAV audio. The Ogg formats are open-source alternatives to proprietary formats, and plenty of free tools convert from more standard video formats to Ogg. The autoplay attribute causes the element to play automatically. The controls attribute places playback controls directly into the page. The code between the beginning and ending tag will execute if the browser cannot process the audio or video tag. You can place alternate code here for embedding alternate versions (Flash, for example). Downloadable fonts This is actually a CSS improvement, but it’s much needed. It allows you to define a font-face in CSS and include a ttf font file from the server. You can then use this font face in your ordinary CSS and use the downloaded font. If this becomes a standard, we will finally have access to reliable downloadable fonts on the web, which will usher in web typography at long last. Posted by: Dhiraj kumar
What is Immunity? Immunity refers to the body’s ability to prevent the invasion of pathogens. Pathogens are foreign disease-causing substances, such as bacteria and viruses, and people are exposed to them every day. Antigens are attached to the surface of pathogens and stimulate an immune response in the body. An immune response is the body’s defense system to fight against antigens and protect the body. There are several types of immunity, including innate immunity, passive immunity, and acquired/active immunity. Image 1.1 is a visual showing active immunity as a process of exposing the body to an antigen to produce an adaptive immune response, while passive immunity “borrows” antibodies from another person. - Innate immunity is general protection that a person is born with, including physical barriers (skin, body hair), defense mechanisms (saliva, gastric acid), and general immune responses (inflammation). This type of immunity is considered non-specific (Khan Academy, n.d.). Although the immune system does not know exactly what kind of antigen is invading the body, it can respond quickly to defend against any pathogen. - Passive immunity is the body’s capacity to resist pathogens by “borrowing” antibodies. For example, antibodies can be transferred to a baby from a mother’s breast milk, or through blood products containing antibodies such as immunoglobulin that can be transfused from one person to another. The most common form of passive immunity is that which an infant receives from its mother. Antibodies are transported across the placenta during the last one to two months of pregnancy. As a result, a full-term infant will have the same antibodies as its mother. These antibodies will protect the infant from certain diseases for up to a year, and act to defend against specific antigens. Although beneficial, passive immunity is temporary, lasting only until the antibodies are gone (wane), since the body has not produced the antibodies itself. - Acquired (adaptive) immunity is a type of immunity that develops from immunological memory. The body is exposed to a specific antigen (which is attached to a pathogen) and develops antibodies to that specific antigen (Khan Academy, n.d.). The next time said antigen invades, the body has a memory of the specific antigen and already has antibodies to fight it off. Acquired immunity can occur from exposure to an infection, wherein a person gets a disease and develops immunity as a result. Acquired immunity also occurs from vaccination, wherein the vaccine mimics a particular disease, causing an immune response in the vaccinated individual without getting them ill. What can you do to boost your immune system? The idea of boosting your immunity is enticing, but the ability to do so has proved elusive for several reasons. The immune system is precisely that — a system, not a single entity. To function well, it requires balance and harmony. There is still much that researchers don’t know about the intricacies and interconnectedness of the immune response. For now, there are no scientifically proven direct links between lifestyle and enhanced immune function. But that doesn’t mean the effects of lifestyle on the immune system aren’t intriguing and shouldn’t be studied. Researchers are exploring the effects of diet, exercise, age, psychological stress, and other factors on the immune response, both in animals and in humans. 
In the meantime, general healthy-living strategies make sense since they likely help immune function and they come with other proven health benefits. Healthy ways to strengthen your immune system Your first line of defense is to choose a healthy lifestyle. Following general good-health guidelines is the single best step you can take toward naturally keeping your immune system working properly. Every part of your body, including your immune system, functions better when protected from environmental assaults and bolstered by healthy-living strategies such as these: - Don’t smoke. - Eat a diet high in fruits and vegetables. - Exercise regularly. - Maintain a healthy weight. - If you drink alcohol, drink only in moderation. - Get adequate sleep. - Take steps to avoid infection, such as washing your hands frequently and cooking meats thoroughly. - Try to minimize stress. - Keep current with all recommended vaccines. Vaccines prime your immune system to fight off infections before they take hold in your body. Increase immunity the healthy way Many products on store shelves claim to boost or support immunity. But the concept of boosting immunity actually makes little sense scientifically. In fact, boosting the number of cells in your body — immune cells or others — is not necessarily a good thing. For example, athletes who engage in “blood doping” — pumping blood into their systems to boost their number of blood cells and enhance their performance — run the risk of strokes. Attempting to boost the cells of your immune system is especially complicated because there are so many different kinds of cells in the immune system that respond to so many different microbes in so many ways. Which cells should you boost, and to what number? So far, scientists do not know the answer. What is known is that the body is continually generating immune cells. Certainly, it produces many more lymphocytes than it can possibly use. The extra cells remove themselves through a natural process of cell death called apoptosis — some before they see any action, some after the battle is won. No one knows how many cells or what the best mix of cells the immune system needs to function at its optimum level. Immune system and age As we age, our immune response capability becomes reduced, which in turn contributes to more infections and more cancer. As life expectancy in developed countries has increased, so too has the incidence of age-related conditions. While some people age healthily, the conclusion of many studies is that, compared with younger people, the elderly are more likely to contract infectious diseases and, even more importantly, more likely to die from them. Respiratory infections, including, influenza, the COVID-19 virus and particularly pneumonia are a leading cause of death in people over 65 worldwide. No one knows for sure why this happens, but some scientists observe that this increased risk correlates with a decrease in T cells, possibly from the thymus atrophying with age and producing fewer T cells to fight off infection. Whether this decrease in thymus function explains the drop in T cells or whether other changes play a role is not fully understood. Others are interested in whether the bone marrow becomes less efficient at producing the stem cells that give rise to the cells of the immune system. A reduction in immune response to infections has been demonstrated by older people’s response to vaccines. 
For example, studies of influenza vaccines have shown that for people over age 65, the vaccine is less effective compared to healthy children (over age 2). But despite the reduction in efficacy, vaccinations for influenza and S. pneumoniae have significantly lowered the rates of sickness and death in older people when compared with no vaccination. There appears to be a connection between nutrition and immunity in the elderly. A form of malnutrition that is surprisingly common even in affluent countries is known as “micronutrient malnutrition.” Micronutrient malnutrition, in which a person is deficient in some essential vitamins and trace minerals that are obtained from or supplemented by diet, can happen in the elderly. Older people tend to eat less and often have less variety in their diets. One important question is whether dietary supplements may help older people maintain a healthier immune system. Older people should discuss this question with their doctor. Diet and your immune system Like any fighting force, the immune system army marches on its stomach. Healthy immune system warriors need good, regular nourishment. Scientists have long recognized that people who live in poverty and are malnourished are more vulnerable to infectious diseases. For example, researchers don’t know whether any particular dietary factors, such as processed foods or high simple sugar intake, will have adversely affect immune function. There are still relatively few studies of the effects of nutrition on the immune system of humans. There is some evidence that various micronutrient deficiencies — for example, deficiencies of zinc, selenium, iron, copper, folic acid, and vitamins A, B6, C, and E — alter immune responses in animals, as measured in the test tube. However, the impact of these immune system changes on the health of animals is less clear, and the effect of similar deficiencies on the human immune response has yet to be assessed. So, what can you do? If you suspect your diet is not providing you with all your micronutrient needs — maybe, for instance, you don’t like vegetables — taking a daily multivitamin and mineral supplement may bring other health benefits, beyond any possibly beneficial effects on the immune system. Taking megadose of a single vitamin does not. More is not necessarily better. Improve immunity with herbs and supplements? Walk into a store, and you will find bottles of pills and herbal preparations that claim to “support immunity” or otherwise boost the health of your immune system. Although some preparations have been found to alter some components of immune function, thus far there is no evidence that they actually bolster immunity to the point where you are better protected against infection and disease. Demonstrating whether an herb — or any substance, for that matter — can enhance immunity is, as yet, a highly complicated matter. Scientists don’t know, for example, whether an herb that seems to raise the levels of antibodies in the blood is actually doing anything beneficial for overall immunity. Stress and immune function Modern medicine has come to appreciate the closely linked relationship of mind and body. A wide variety of maladies, including stomach upset, hives, and even heart disease, are linked to the effects of emotional stress. Despite the challenges, scientists are actively studying the relationship between stress and immune function. For one thing, stress is difficult to define. What may appear to be a stressful situation for one person is not for another. 
When people are exposed to situations they regard as stressful, it is difficult for them to measure how much stress they feel, and difficult for the scientist to know if a person’s subjective impression of the amount of stress is accurate. The scientist can only measure things that may reflect stress, such as the number of times the heart beats each minute, but such measures also may reflect other factors. Most scientists studying the relationship of stress and immune function, however, do not study a sudden, short-lived stressor; rather, they try to study more constant and frequent stressors known as chronic stress, such as that caused by relationships with family, friends, and co-workers, or sustained challenges to perform well at one’s work. Some scientists are investigating whether ongoing stress takes a toll on the immune system. But it is hard to perform what scientists call “controlled experiments” in human beings. In a controlled experiment, the scientist can change one and only one factor, such as the amount of a particular chemical, and then measure the effect of that change on some other measurable phenomenon, such as the amount of antibodies produced by a particular type of immune system cell when it is exposed to the chemical. In a living animal, and especially in a human being, that kind of control is just not possible, since there are so many other things happening to the animal or person at the time that measurements are being taken. Despite these inevitable difficulties in measuring the relationship of stress to immunity, scientists are making progress. Does being cold give you a weak immune system? Almost every mother has said it: “Wear a jacket or you’ll catch a cold!” Is she right? Probably not, exposure to moderate cold temperatures doesn’t increase your susceptibility to infection. There are two reasons why winter is “cold and flu season.” In the winter, people spend more time indoors, in closer contact with other people who can pass on their germs. Also the influenza virus stays airborne longer when air is cold and less humid. But researchers remain interested in this question in different populations. Some experiments with mice suggest that cold exposure might reduce the ability to cope with infection. But what about humans? Scientists have performed experiments in which volunteers were briefly dunked in cold water or spent short periods of time naked in subfreezing temperatures. They’ve studied people who lived in Antarctica and those on expeditions in the Canadian Rockies. The results have been mixed. For example, researchers documented an increase in upper respiratory infections in competitive cross-country skiers who exercise vigorously in the cold, but whether these infections are due to the cold or other factors — such as the intense exercise or the dryness of the air — is not known. A group of Canadian researchers that has reviewed hundreds of medical studies on the subject and conducted some of its own research concludes that there’s no need to worry about moderate cold exposure — it has no detrimental effect on the human immune system. Should you bundle up when it’s cold outside? The answer is “yes” if you’re uncomfortable, or if you’re going to be outdoors for an extended period where such problems as frostbite and hypothermia are a risk. But don’t worry about immunity. Exercise: Good or bad for immunity? Regular exercise is one of the pillars of healthy living. 
It improves cardiovascular health, lowers blood pressure, helps control body weight, and protects against a variety of diseases. But does it help to boost your immune system naturally and keep it healthy? Just like a healthy diet, exercise can contribute to general good health and therefore to a healthy immune system.
The vast majority of matter in the universe is made up of dark matter, a mysterious non-luminous substance. Even though scientists have been observing the gravitational effects of dark matter for decades, they are still perplexed as to its true nature. WHO DISCOVERED DARK MATTER? Astronomers started to speculate about unseen material in the late nineteenth century, whether dim stars or gas and dust scattered throughout the universe. According to a 2018 review in the journal Reviews of Modern Physics, researchers had even begun to estimate its mass. Most people assumed that this enigmatic substance was a minor component of the universe’s total mass. It wasn’t until 1933 that Swiss-American astronomer Fritz Zwicky noticed distant galaxies spinning around each other much faster than should be possible given their visible matter as seen through telescopes. “If this is confirmed, we will get the surprising result that dark matter is present in much greater quantities than luminous matter,” he wrote in a paper published that year in the journal Helvetica Physica Acta. Many in the field, however, remained skeptical of Zwicky’s findings until the 1970s, when astronomers Kent Ford and Vera Rubin conducted detailed studies of stars in the neighboring Andromeda galaxy. These stars were orbiting the galactic core far too quickly, almost as if some invisible material was gravitationally tugging on them and propelling them forward — an observation that was soon made in galaxies all over the universe. Researchers did not know what this unseen mass was made of, with some astronomers hypothesizing that dark matter comprised small black holes or other compact objects that emitted too little light to be seen through telescopes. As per NASA, the results started to appear even stranger in the 1990s, when the Wilkinson Microwave Anisotropy Probe (WMAP), a space telescope, discovered that dark matter outweighed ordinary visible matter by a factor of five. Telescope surveys could never find enough small compact objects to account for this massive influx of material. Dark matter, according to most modern astronomers, is composed of subatomic particles with properties distinct from protons and neutrons. A WIMP, or Weakly Interacting Massive Particle, is the current leading candidate for dark matter. These speculative entities are not described by the Standard Model of particle physics, which encompasses almost all known particles and forces. WIMPs would be more like ghostly neutrinos, though weighing 10 to 100 times more than a proton. (Although the exact mass of neutrinos is unknown, they are significantly lighter than electrons.) WIMPs, like neutrinos, would interact with only two of the universe’s four fundamental forces: gravity and the nuclear weak force, which mediates the decay of radioactive atomic nuclei. These particles of dark matter would also be electrically neutral, which means they would not interact with electromagnetism, the basis of light, and thus would remain invisible. In an attempt to detect WIMPs, physicists have built massive detectors and buried them deep underground to protect them from interfering cosmic rays, but no experiment has found evidence for them. This failure has led some in the field to wonder if they’ve embarked on a wild particle chase with no real end in sight. 
According to the Proceedings of the National Academy of Sciences, some scientists are turning their attention to a newer dark matter candidate called the axion, which would be a millionth or even a billionth of the mass of an electron. These hypothetical particles are particularly appealing to researchers because they have the potential to solve another unsolved physics problem: they would interact with neutrons and explain why neutrons can feel magnetic fields but not electric ones. In June 2020, members of the XENON1T experiment at Italy’s Gran Sasso National Laboratory announced the discovery of a small but unexpected signal that could only be explained by the presence of axions. The findings stunned the scientific community, but they have yet to be confirmed by other experiments. IS DARK MATTER EVEN REAL? Scientists are still puzzled as to what dark matter is. Some theorists have speculated that there is a whole dark sector of the universe, with many particles and even dark forces that affect only dark matter, similar to the subatomic complexity seen in the visible universe. At the same time, a small number of scientists believe that dark matter is a figment of the imagination. They favor a theory known as modified Newtonian dynamics, or MOND, which proposes that gravity behaves differently than expected on large scales, accounting for the observed rotations of stars and galaxies. Most experts, however, are skeptical of the need for such a radical departure from known physics, which would also necessitate changes to our understanding of large parts of reality. As far as is currently known, dark matter is not associated with dark energy, another mysterious phenomenon responsible for accelerating the expansion of the universe. The two simply share the word “dark,” which scientists frequently use as a placeholder for things they don’t fully comprehend.
Colds and the flu - Upper respiratory tract infections affect the air passages in the nose, ears, and throat. - Organisms that cause these upper respiratory tract infections are generally spread by direct contact (such as hand-to-mouth) with germs or by someone coughing or sneezing. - The common cold is the most common upper respiratory tract infection. - The two major flu strains are referred to as A and B: - Influenza A is the cause of the major pandemics of influenza that have occurred so far. - Influenza B infects only humans. - The Centers for Disease Control and Prevention (CDC) recommends that everyone over the age of 6 months receive the flu vaccine every year. The only exceptions are for those allergic to the vaccine. Two types of flu vaccine are available: a killed vaccine that comes in 3 injectable forms, and a live vaccine given as a nasal spray. - Newer vaccines contain very little egg protein, but an allergic reaction still may occur in people with strong allergies to eggs. A new vaccine made in animal cell culture, not in eggs, was approved by the Food and Drug Administration (FDA) in November 2012, for people aged 18 years and older. - In December 2012, the FDA approved a new type of influenza vaccine, which will be used for the first time in the 2013 - 2014 season. This vaccine will match the 2 current strains of both influenza A and B, to provide wider protection. The vaccine is approved for ages 3 years and older. Cold and Flu Treatments - Antibiotics are often prescribed inappropriately for colds, flu, and for sore throats that are not caused by strep bacteria. Studies indicate that fewer than half of adults and far fewer of the children with even strong signs and symptoms of strep throat actually have strep infections. - Dozens of remedies are available that combine ingredients aimed at more than one cold or flu symptom. In most cases these preparations are safe, but they can cause problems and their effectiveness is open to question. Upper respiratory tract infections affect the air passages in the nose, ears, and throat. Structures of the throat include the esophagus, trachea, epiglottis, and tonsils. The infections can be caused by viruses, bacteria, or other microscopic organisms. In most cases, these infections lead to colds or mild influenza (flu) and are temporary and harmless. In rare cases, flu can be severe, or the infections may turn into pneumonia. Organisms that cause these upper respiratory tract infections are generally spread by: - Direct contact (such as hand-to-mouth) - Coughing or sneezing The Common Cold The common cold (medically known as infectious nasopharyngitis) is the most common upper respiratory tract infection. More than 200 different viruses can cause colds. The most common cause is the rhinovirus, which is responsible for about half of all colds. The adenovirus family also causes upper respiratory infections (it is one of the many viruses that cause the common cold). It also causes pneumonia, conjunctivitis, and several other diseases. A newer strain of adenovirus has caused several deaths. Symptoms usually develop 1 to 3 days after being exposed to the cold virus. A cold usually progresses in the following manner: - It nearly always starts rapidly with throat irritation and stuffiness in the nose. - Within hours, full-blown cold symptoms usually develop, which can include sneezing, mild sore throat, fever, minor headaches, muscle aches, and coughing. 
- Fever is low-grade or absent. In small children, however, fever may be as high as 103 °F for 1 or 2 days. The fever should go down after that time, and be back to normal by the 5th day. - Nasal discharge is usually clear and runny the first 1 to 3 days. It then thickens and becomes yellow to greenish. - The sore throat is usually mild and lasts only about a day. A runny nose usually lasts 2 to 7 days, although coughing and nasal discharge can persist for more than 2 weeks. Influenza ("The Flu") Every year, influenza strikes millions of people worldwide. Influenza epidemics are most serious when they involve a new strain, against which most people around the world are not immune. Such global epidemics (pandemics) can rapidly infect more than one fourth of the world's population. For example, the Spanish flu in 1918 and 1919 killed an estimated 20 million people in the U.S. and Europe and 17 million people in India. With modern society's dependence on air travel, an influenza pandemic could potentially inflict catastrophic damage on human lives, and disrupt the global economy. More recently, the new H1N1 ("Swine Flu") that emerged in Mexico in the spring of 2009 quickly became a pandemic, though it was far less severe or deadly than the Spanish flu of 1918. As of February 24 2010, the World Health Organization estimated the total deaths from the H1N1 pandemic at over 16,000 people. It is likely the actual total is somewhat larger, because not all victims are tested for H1N1 influenza. The H1N1 pandemic was declared over by the World Health Organization in August 2010. This particular influenza strain became one of the seasonal influenza viruses circulating world-wide during the 2010 - 2011 flu season, and to a lesser extent during the 2011 - 2012 flu season. It is still a part of the 2012 - 2013 seasonal flu vaccine. Symptoms of influenza. Patients usually feel sick 1 to 2 days after exposure to the influenza (flu) virus. The flu usually involves: - Abrupt onset of severe symptoms, which include headache, muscle aches, fatigue, and high fever (up to 104 °F). - Cough (which is usually dry but is often severe) and sometimes a runny nose and sore throat. - Children may experience vomiting, diarrhea, and ear infections, as well as other flu symptoms. - The symptoms usually resolve in 4 to 5 days, although some people can experience coughing and feelings of illness for more than 2 weeks. In some cases, flu can become more severe or make other conditions worse. Transmitting the Virus. The flu virus is spread primarily when a person with the flu coughs or sneezes near someone else. Adults with flu typically spread it to someone else from 1 day before symptoms start to about 5 days after symptoms develop. Children can spread the infection for more than 10 days after symptoms begin, and young children can transmit the virus 6 days or even earlier before the onset of symptoms. People with severely compromised immune systems can transmit the virus for weeks or months. Flu Strains. A virus is a cluster of genes wrapped in a protein membrane, which is coated with a fatty substance that contains molecules called glycoproteins. Strains of the flu are identified according to the number of membranes and type of glycoproteins present. The two major flu strains are referred to as A and B: - Influenza A is the most widespread and can infect animals and humans. Influenza A is the cause of the major pandemics of influenza that have occurred so far. 
It is usually further categorized by two subtypes based on two substances that occur on the surface of the viruses: hemagglutinin (H) and neuraminidase (N). - Influenza B infects only humans. It is less common and less severe than type A, but is often associated with specific outbreaks, such as in nursing homes. The vast majority of flu cases are type A. Influenza A usually causes more serious disease than type B. There is some concern, however, that since influenza B has been less common in the past few years, some people, particularly small children, may have fewer antibodies to it and so may be at higher risk for severe infection. Avian Influenza (Bird Flu) The influenza virus mutates (changes) rapidly as it moves from species to species. While most avian influenza (bird flu) virus strains are relatively harmless, a few develop into "highly pathogenic avian influenza," which can be very deadly for domesticated poultry. As recent events have shown, these strains can also be deadly to humans. People can become infected by these bird flu strains through contact with contaminated chickens and other birds. The medical community is concerned about the H5N1 bird flu virus, which has infected and killed people in several countries. Since 1997, the H5N1 virus has triggered deadly outbreaks in poultry across Southeast Asia. As of December 17, 2012, 610 people had been infected with the bird flu in 15 countries. Of these people, 360 had died, according to the World Health Organization. No cases have been reported in the United States. So far, the virus has spread only from birds to humans. The virus does not seem to be easily spread from person to person. However, scientists and public health officials are monitoring the spread of H5N1 and working to contain it. Efforts include slaughtering infected birds, developing new vaccines, and stockpiling antiviral drugs such as oseltamivir (Tamiflu). Many poor nations have limited resources and already contend with other serious health problems, including HIV/AIDS. If H5N1 does mutate and spread, the consequences could be especially severe for these countries. In April 2007, the Food and Drug Administration (FDA) approved a vaccine to protect humans from avian influenza. Currently this vaccine is not being used for routine immunization. However, if the avian flu develops the ability to spread fairly easily from human to human, this vaccine may be made available. A new avian influenza vaccine is currently in clinical trials and is showing promising results. On November 14, 2012, the FDA Vaccines and Related Biological Products Advisory Committee unanimously decided that these clinical trials' results support the vaccine's licensure. The FDA’s decision on whether to approve the vaccine is pending. Differentiating between a cold and flu may be difficult. Cold symptoms are nearly always less severe than those of the flu. Comparing Colds and Flus |Symptom||Cold||Flu| |Fever||None or low grade||Common and high (102 - 104 °F); lasts 3 to 4 days| |Headache||None or mild||Almost always present| |General aches and pains||Mild, if they occur at all||Usual; often severe| |Fatigue, exhaustion, and weakness||Mild, if they occur at all||Extreme exhaustion is early and severe; can last 2 to 3 weeks| |Chest discomfort and cough||Mild-to-moderate, hacking cough||Common, can be severe| Source: National Institute of Allergy and Infectious Diseases Diagnosing the Flu Several available tests can isolate and identify the viruses responsible for some respiratory infections. They are generally not needed, since most cases of the flu are self-evident. 
Decisions about treatment are almost always made based on how sick an individual is, and whether the person is at risk for more severe complications. If a doctor believes a diagnosis would help, samples using a swab should be taken from the nasal passages or throat within 4 days of the first symptoms. Several rapid tests for the flu can produce results in less than 30 minutes, but vary on the specific strain or strains that they can detect. They are not as accurate as a viral culture, however, in which the virus is reproduced in the laboratory. Culture results can take 3 to 10 days. Blood tests can also document the infection several weeks after symptoms appear. Diagnosing Avian Influenza In February 2009, the FDA approved a faster test for diagnosing H5 strains of avian influenza in people suspected of having the virus. The test is called A/H5N1 Flu Test. The test gives preliminary results within 40minutes. Older tests required 3 to 4 hours. It checks for the presence of the protein NS1, which indicates an influenza H5N1 strain, the current strain of concern. Other Causes of Congestion Ruling out Allergic Rhinitis. Symptoms of allergic rhinitis include nasal obstruction and congestion, which are similar to the symptoms of a cold. People with allergies, however, are likely to have the following: - Thin, clear, and runny nasal discharge - An itchy nose, eyes, or throat - Recurrent sneezing There are two forms of allergic rhinitis: - Symptoms that appear only during allergy season are called allergic rhinitis, commonly known as hay or rose fever. [For more information, see In-Depth Report #77: Allergic rhinitis.] - Allergens in the house, such as house dust mites, molds, and pet dander, can cause year-long allergic rhinitis, referred to as perennial rhinitis. Ruling out Sinusitis. The signs and symptoms suggestive of true acute sinusitis include the following: - A return of congestion and discomfort after initial improvement in a cold (called double sickening) - Purulent (pus-filled) nasal secretion - A lack of response to decongestant or antihistamine - Pain in the upper teeth or pain on one side of the head - Pain above or below both eyes when leaning over Children with sinusitis are less likely to have facial pain and headache and may only develop a high fever or prolonged upper respiratory symptoms (such as a daytime cough that does not improve for 11 to 14 days). When the diagnosis is unclear or complications are suspected, further tests may be required. [For more information, see In-Depth Report #62: Sinusitis.] Other Causes of Coughing Acute Bronchitis. Acute bronchitis is usually caused by a virus and in most cases is self-limiting. The cough it causes typically lasts for about 7 to 10 days, but in about half of patients, coughing can last for up to 3 weeks, and 25% of patients continue to cough for over 1 month. Atypical Pneumonia. Pneumonia caused by atypical organisms (such as Mycoplasma pneumoniae, Chlamydia pneumoniae, and Legionella) can cause symptoms similar to the flu. Only laboratory tests can diagnose the difference. [For more information, see In-Depth Report #64: Pneumonia.] Ruling out other Viral Infections. Respiratory syncytial virus (RSV), and possibly human parainfluenza viruses (HPV), are proving to be important causes of serious respiratory infections in infants, the elderly, and people with damaged immune systems. (Both also cause mild conditions.) RSV may be a much more common cause of flu-like symptoms than previously thought. Pertussis. 
Pertussis (whooping cough) was a very common childhood illness throughout the first half of the 20th century. Although immunizations caused a decline in cases to only 1,730 in the U.S. in 1980, the incidence has risen recently, with 18,719 cases in 2011. According to the Centers for Disease Control and Prevention (CDC), many additional cases in the United States go undiagnosed. Many more cases are reported worldwide.

Nearly half of pertussis cases now occur in people 10 years of age or older, perhaps due to waning immunity in adolescents and adults. Up to 25% of adults who see a doctor for persistent cough may actually have pertussis. It may go undiagnosed, however, because their symptoms are usually mild, and adults are unlikely to have the classic "whooping" cough. This is of some concern, because such adults may unknowingly infect unvaccinated children. The younger the patient with pertussis, the higher the risk for severe complications, including pneumonia, seizures, and even death. Children younger than 6 months are at particular risk because protection is incomplete, even with vaccination. Pertussis vaccines safe for older children and adults are now available.

Other Causes of Sore Throat

In addition to common cold viruses, other, less frequent causes of sore throat include the following:
- Strep throat
- Foodborne and waterborne infections (Streptococcus C and G)
- An uncommon organism called Arcanobacterium haemolyticum (infection with this bacterium can mimic strep throat and may even cause a rash)
- Infectious mononucleosis ("mono")
- Herpesvirus 1

What is Strep Throat?

Group A streptococcal bacteria are the most common bacterial cause of the severe sore throat known commonly as "strep throat." It occurs mostly in school-age children, but people of all ages are susceptible. (Strep throat constitutes about 12% of all sore throat cases seen by doctors.) The symptoms of strep throat include the following:
- A sudden onset of severe sore throat
- Difficulty in swallowing
- Stomach pain

Only about half of patients with strep throat have such clear-cut symptoms. Furthermore, half of people who have these symptoms do not actually have strep throat.

How Is Strep Throat Diagnosed?

Most cold-related sore throats are caused by viruses and require no treatment. They usually do not last more than a day. When the sore throat persists and is very painful, the doctor will want to rule out or confirm the presence of the Streptococcus bacteria.
- The doctor will look for redness and pus-filled patches on the tonsils and back of the throat.
- The doctor will feel the sides of the neck for swollen lymph nodes. If the lymph nodes are not swollen, it is less likely to be a strep throat.
- A cotton swab is used to take a sample of pus in the throat for a throat culture. A throat culture is the most effective and least expensive test for confirming the presence of strep throat. It takes 24 to 48 hours to obtain a result.

Rapid Antigen-Detection Test for Strep Throat. A faster test, called the Rapid Strep Antigen Test, uses chemicals to detect the presence of bacteria in a few minutes. A positive result nearly always means that streptococcal bacteria are present in the throat. The test, however, fails to detect 5 to 10% of cases, so a culture may still be necessary to catch any missed infections, particularly in children.

How Serious is Strep Throat?

The use of antibiotics has removed the threat of most complications from streptococcus infection in the throat.
However, untreated strep throat could lead to the following complications: - Abscess in the tonsils - Scarlet fever - Rheumatic fever (rare in the U.S.) Colds rarely cause serious complications. In about 1% of cases, a cold can lead to other complications, such as sinus or ear infections. It can also aggravate asthma and, in uncommon situations, increase the risk for lower respiratory tract infections. Ear Infections. The rhinovirus, a major cause of colds, also commonly predisposes children to ear infections, possibly by blocking the Eustachian tube, which leads to the middle ear. Viruses may even attack the ear directly. Sinusitis. Between 0.5 to 3% of people with colds develop sinusitis, an infection in the sinus cavities (air-filled spaces in the skull). Sinusitis is usually mild, but if it becomes severe, antibiotics generally eliminate further problems. Lower Respiratory Tract Infections. The common cold poses a risk for bronchitis and pneumonia in nursing home patients, and in other people who may be vulnerable to infection. Aggravation of Asthma. Rhinovirus infections can aggravate asthma in both children and adults. In fact, rhinovirus has been reported to be the most common infectious organism associated with asthma attacks. Problems with wheezing may persist for weeks after a cold. Complications of Influenza The flu is usually self-limited. However, each flu season is unpredictable and can make varying numbers of people dangerously sick. According to the CDC, between 1976 and 2006, flu-associated deaths ranged from about 3,000 to 49,000. People at highest risk for serious complications from seasonal flu are those over 65 years old and those with chronic medical conditions. Influenza A is the most severe strain. Influenza B tends to be milder. Unlike the seasonal flu, children younger than 5 years old, especially those younger than age 2, with H1N1 (swine) flu are also at risk for more serious complications. Pregnant women with H1N1 influenza are also at increased risk for complications. Pneumonia. Pneumonia is the major serious complication of influenza and can be very serious. It can develop about 5 days after the flu. More than 90% of the deaths caused by influenza and pneumonia occur among older adults. Flu-related pneumonia nearly always occurs in high-risk individuals. It should be noted that pneumonia is an uncommon outcome of influenza in healthy adults. Complications in the Central Nervous System in Children. Influenza increases the risk for complications in the central nervous system of small children. Febrile seizures are the most common neurologic complication in children. The risks decline after a child turns 1 year old, but are still high in children aged 3 to 5 years old. The very young and the very old are at higher risk for upper respiratory tract infections and their associated complications. Children. Young children are prone to colds and may have as many as 12 colds every year. Millions of cases of influenza develop in American children and adolescents each year. Before the immune system matures, all infants are susceptible to upper respiratory infections, with a possible frequency of one cold every 1 to 2 months. Smaller nasal and sinus passages also make younger children more vulnerable to colds than older children and adults. Upper respiratory infections gradually diminish as children grow, until at school age their rate of such infections is about the same as an adult's. 
There is almost never cause for concern when a child has frequent colds, unless the colds become unusually severe or more frequent than usual. The Elderly. The elderly have diminished cough and gag reflexes, and their immune systems are often weaker. They are therefore at greater risk for serious respiratory infections than the young and middle-aged adults. Exposure to Smoke and Environmental Pollutants The risk of respiratory infections is increased by exposure to cigarette smoke, which can injure airways and damage the cilia (tiny hair-like structures that help keep the airways clear). Toxic fumes, industrial smoke, and other air pollutants are also risk factors. Parental smoking increases the risk of respiratory infections in their children. People with AIDS and other medical conditions that damage the immune system are extremely susceptible to serious infections. Cancers, especially leukemia and Hodgkin's disease, put patients at risk. Patients who are on corticosteroid (steroid) treatments, chemotherapy, or other medications that suppress the immune system are also prone to infection. People with diabetes are at a higher risk for the flu. Certain genetic disorders predispose people to respiratory infections. They include sickle-cell disease, cystic fibrosis, and Kartagener syndrome (which results in malfunctioning cilia). People under Stress A number of studies suggest that stress increases one's susceptibility to a cold. Stress appears to increase the risk for a cold regardless of lifestyle or other health habits. And once a person catches a cold or flu, stress can make symptoms worse. It is not clear why these events occur. Some experts believe that stress alters specific immune factors, which cause inflammation in the airways. Colds and the flu occur predominantly in the winter. Flu season typically starts in October and lasts into mid March. The reasons for this seasonal bias are not due to the cold itself, but to other factors. Certainly, the flu and colds are more likely to be transmitted in winter because people spend more time indoors and are exposed to higher concentrations of airborne viruses. Dry winter weather also dries up nasal passages, making them more susceptible to viruses. Some experts theorize that the high rates of viral infections in winter may be due to certain immune factors, which react to light and dark and affect a person's susceptibility to viruses. Traveling in Trains, Buses, and Planes Traveling in close contact with people, whether on trains, planes, or buses, can increase the risk for respiratory infections. Day Care Centers Children who attend day care may have an increased risk of colds. However, they may have lower cold rates in their first years of regular school. The colds they catch in day care, then, may bestow some immunity to future colds for a few years. By age 13, such protection has worn off. There is also some evidence that frequent colds in young children may help protect against future allergies and asthma. Because colds and the flu are easily spread, everyone should always wash their hands before eating and after going outside. Ordinary soap is sufficient. Waterless hand cleaners that contain an alcohol-based gel are also effective for everyday use and may even kill cold viruses. Antibacterial soaps add little protection, particularly against viruses. 
In fact, one study suggests that common liquid dish washing soaps are up to 100 times more effective than antibacterial soaps in killing respiratory syncytial virus (RSV), which is known to cause pneumonia. Wiping surfaces with a solution that contains one part bleach to 10 parts water is very effective in killing viruses. Alcohol-based hand cleaners are very effective, as mentioned above, and are recommended by the CDC. Colds are not caused by insufficiently warm clothes or by going outside with wet hair. The following are some food and fluid recommendations. They will not cure a cold, but they may help a person deal better with the symptoms: - Drinking plenty of fluids and getting lots of rest when needed is still the best bit of advice to ease the discomforts of the common cold. Water is the best fluid and helps lubricate the mucous membranes. (There is no evidence that drinking milk will increase or worsen mucus.) - Chicken soup does indeed help congestion and body aches. The hot steam from the soup may be its chief advantage, although laboratory studies have actually reported that ingredients in the soup may have anti-inflammatory effects. In fact, any hot beverage may have similar soothing effects from steam. Ginger tea, fruit juice, and hot tea with honey and lemon may all be helpful. - Spicy foods that contain hot peppers or horseradish may help clear sinuses. Despite a few studies that suggest that large doses of vitamin C may reduce the duration of a cold, most of the scientific evidence finds no benefit. Taking high doses of vitamin C is not recommended, for the following reasons: - High doses of vitamin C may cause headaches, intestinal and urinary problems, and even kidney stones. - Because vitamin C increases iron absorption, people with certain blood disorders, such as hemochromatosis, thalassemia, or sideroblastic anemia, should avoid high doses of this vitamin. - Large doses of vitamin C can also interfere with anticoagulant medications ("blood thinners"), blood tests used in diabetes, and stool tests. In addition, a review of evidence suggests that taking large doses of vitamin C after the onset of cold symptoms does not improve the symptoms or shortens the duration of the cold. Zinc appears to influence the immune system and it may have a direct effect on viruses. Zinc preparations in lozenge or nasal gel form are marketed as cold treatments. Studies are very mixed on the effects of zinc on colds. A review of available studies comparing zinc treatment to placebo ("sugar pill") found only one high-quality study, which showed that zinc nasal gels might provide a benefit. Another review of 14 studies showed that oral zinc may shorten the duration of colds, but cautioned that large high-quality studies are needed before any treatment recommendations can be made. The overall benefit of zinc in the prevention of colds remains unclear. In any case, no one with an adequate diet and a healthy immune system should take zinc for prolonged periods, for the purpose of preventing colds. Side Effects. Side effects, particularly of the lozenges form, include the following: - Dry mouth - Bad taste (possibly only with zinc gluconate lozenges) - Severe vomiting, dehydration, and restlessness (signs of overdose, seek medical help) - Allergic response (rare) In 2009, the FDA issued a warning regarding Zicam nasal gel swabs containing zinc. The FDA has received reports of cases of anosmia (loss of the sense of smell) following use of these products. 
These reports are corroborated by several studies connecting nasal zinc applications with anosmia. The reports concerned only nasal gel containing zinc, not oral preparations of zinc. Food and Drug Interactions. Zinc may also interact with drugs or other elements: - It may reduce absorption of certain antibiotics. - Foods high in calcium or phosphorus may reduce zinc absorption. - In high doses and for long periods of time, zinc can cause copper deficiencies. Medications for Mild Pain and Fever Reduction Many people take medications to reduce mild pain and fever. Adults most often choose aspirin, ibuprofen (Advil), or acetaminophen (Tylenol). The following are recommendations for children: - Acetaminophen (Tylenol) or ibuprofen (usually Advil or Motrin) are the typical pain-relievers parents give their children. Most pediatricians advise such medications for children who run fevers over 101 °F. Some suggest alternating the two agents, although there is no evidence that this regimen offers any benefits, and it might be harmful. - Aspirin and aspirin-containing products should never be used in children or adolescents. Reye syndrome, a very serious condition that can be life threatening, has been associated with aspirin use in children who have flu symptoms or chicken pox. Nasal strips (such as Breathe Right) are placed across the lower part of the nose and pull the nostrils open. These strips may open the nasal passages and claim to ease congestion due to a cold, sinusitis, or hay fever. As of yet, there is no scientific evidence that they offer such benefits. A nasal wash can be helpful for removing mucus from the nose. A saline solution can be purchased at a drug store or made at home. If you make a salt solution at home, you should first boil tap water and carefully clean and dry any device that was used to store the water. Although nasal washes have long been recommended, one study reported that neither a homemade solution (using one teaspoon of salt and one pinch of baking soda in a pint of warm water) nor a commercial hypertonic saline nasal wash had any effect on symptoms. Further, one preliminary study found that over-the-counter saline nasal sprays that contain benzalkonium chloride as a preservative may actually worsen symptoms and infection. Some physicians, however, advocate a traditional nasal wash that has been used for centuries and is different from that used in most studies. It contains no baking soda and uses more fluid for each dose and less salt. The nasal wash should be performed several times a day. A simple method for administering a nasal wash: - Lean over the sink head down. - Pour some solution into the palm of the hand and inhale it through the nose, one nostril at a time. - Spit the remaining solution out. - Gently blow the nose. The solution may also be inserted into the nose using a large rubber ear syringe, available at a pharmacy. In this case, the process is the following: - Lean over the sink head down. - Insert only the tip of the syringe into one nostril. - Gently squeeze the bulb several times to wash the nasal passage. - Then press the bulb firmly enough so that the solution passes into the mouth. - The process should be repeated in the other nostril. Nasal-delivery decongestants are applied directly into the nasal passages with a spray, gel, drops, or vapors. Nasal forms work faster than oral decongestants and have fewer side effects. They often require frequent administration, although long-acting forms are now available. 
Ingredients and brands of nasal decongestants include the following: Long Acting Nasal-Delivery Decongestants. They are effective in a few minutes and remain so for 6 - 12 hours. The primary ingredient in long-acting decongestant is: - Oxymetazoline: Brands include Vicks Sinex (12-hour brands), Afrin (12-hour brands), Dristan 12-Hour, Good Sense, Nostrilla, Neo-Synephrine 12-Hour - Xylometazoline: Inspire, Otrivin, Natru-vent Short-Acting Nasal-Delivery Decongestants. The effects usually last about 4 hours. The primary ingredients in short-acting decongestants are: - Phenylephrine: Neo-Synephrine (mild, regular, high-potency), 4-Way, Dristan Mist Spray, Vicks Sinex - Naphazoline (Naphcon Forte, Privine) Dependency and Rebound. The major hazard with nasal-delivery decongestants, particularly long-acting forms, is a cycle of dependency and rebound effects. The 12-hour brands pose a particular risk for this effect. This effect works in the following way: - With prolonged use (more than 3 - 5 days), nasal decongestants lose effectiveness and even cause swelling in the nasal passages. - The patient then increases the frequency of their dose. The congestion worsens, and the patient responds with even more frequent doses, in some cases as often as every hour. - Individuals then become dependent on them. Tips for Use. The following precautions are important for people taking nasal decongestants: - When using a nasal spray, spray each nostril once. Wait a minute to allow absorption into the mucosal tissues, and then spray again. - Keep the nasal passages moist. All forms of nasal decongestants can cause irritation and stinging. They also may dry out the affected areas and damage tissues. - Do not share droppers and inhalators with other people. - Use decongestants only for conditions requiring short-term use, such as before air travel or for a single-allergy attack. Do not take them more than 3 days in a row. - Discard sprayers, inhalators, or other decongestant delivery devices when the medication is no longer needed. Over time, these devices can become reservoirs for bacteria. - Discard the medicine if it becomes cloudy or unclear. Oral decongestants also come in many brands, which mainly differ in their ingredients. The most common active ingredients are pseudoephedrine (Sudafed, Actifed, Drixoral) or phenylephrine (Sudafed PE and many other cold products). Note that pseudoephedrine sales are restricted in many communities because of potential use in the manufacturing of meth. Side Effects of Decongestants. Decongestants have certain adverse effects, which are more apt to occur in oral than nasal decongestants and include the following: - Agitation and nervousness - Drowsiness (particularly with oral decongestants and in combination with alcohol) - Changes in heart rate and blood pressure Avoid combinations of oral decongestants with alcohol or certain drugs, including monoamine oxidase inhibitors (MAOI) and sedatives. Individuals at Risk for Complications from Decongestants. People who may be at higher risk for complications are those with certain medical conditions, including disorders that make blood vessels highly susceptible to contraction. Such conditions include the following: - Heart disease - High blood pressure - Thyroid disease - Prostate problems that cause urinary difficulties - Raynaud's phenomenon - High sensitivity to cold - Emphysema or chronic bronchitis Anyone with the above conditions should not use either oral or nasal decongestants without a doctor's guidance. 
In addition, people taking medications that increase serotonin levels, such as certain antidepressants, anti-migraine agents, diet pills, St. John's wort, and methamphetamine, should avoid decongestants. The combinations can cause blood vessels in the brain to narrow suddenly, causing severe headaches and even stroke. Others who should use these drugs with caution are the following (consult your health care provider): - Anyone who is pregnant. - Children: Children appear to metabolize decongestants differently than adults. Decongestants should not be used at all in infants and small children under the age of 4. Young children are at particular risk for side effects that depress the central nervous system. Such symptoms cause changes in blood pressure, drowsiness, deep sleep, and, rarely, coma. Studies have also shown that these cough and cold products generally are not effective in the treatment of children under 6 years of age. In October 2007, drug manufacturers voluntarily withdrew from the market all oral cough and cold products, including decongestants, aimed at children under 2, due to potential harm from misuse. In late 2008, the Consumer Healthcare Products Association, which represents most of the US makers of nonprescription over-the-counter cough and cold medicines in children, began voluntarily modifying its products' labels to read "Do Not Use in Children Under 4." This action is supported by the FDA. Under no circumstances should children be given adult medicines, including over-the-counter medications. Major studies have indicated that over-the-counter cough medicines are not very effective, but they are also not harmful. - For thick phlegm, patients may try cough medications that contain guaifenesin (Robitussin, Scot-Tussin Expectorant), which loosens mucus. Patients should not suppress coughs that produce mucus and phlegm. It is important to expel this substance. To loosen phlegm, patients should drink plenty of fluids and use a humidifier or steamer. - For patients with a dry cough, a suppressant may be useful, such as one that contains dextromethorphan (Drixoral Cough, Robitussin Maximum Strength Cough Suppressant). Medications that contain both a cough suppressant and an expectorant are not useful and should be avoided. Medicated cough drops that contain dextromethorphan are not very useful. A patient is just as likely to find relief from hard candy or lozenges. Prescription cough medications with small doses of narcotics are available. They are usually reserved for lower respiratory infections with significant coughs. Remedies for Sore Throat Associated with Colds Sore throats that are associated with colds are generally mild. The following may be helpful: - Cough drops, throat sprays, or gargling warm salt water may help relieve sore throat and reduce coughing. - Throat sprays that contain phenol (such as Vicks Chloraseptic) may be helpful for some. - Cough drops that contain menthol and mild anesthetics, such as benzocaine, hexylresorcinol, phenol, and dyclonine (the most potent), may soothe a mild sore throat. - People with sore throats from postnasal drip might try taking a teaspoon of liquid antacid. They shouldn't drink anything afterward, since the intention is to coat the throat and help neutralize the acid in the mucus that might be causing pain. If soreness in the throat is very severe and does not respond to mild treatments, the patient or parent should check with the physician to see if a strep throat is present, which would require antibiotics. 
[See "What is Strep Throat?" in the Diagnosis section of this report.] Combination Cold and Flu Remedies and Antihistamines Dozens of remedies are available that combine ingredients aimed at more than one cold or flu symptom. In general, they do no harm, but they have the following problems: - Some ingredients may produce side effects without even helping a cold. - In some cases, the ingredients conflict (such as a cough expectorant and a cough suppressant). - In other cases, a patient may wish to increase the dosage to improve one symptom, which serves to increase other ingredients that do no good and, in higher doses, may cause side effects. Acetaminophen. Many cold and flu remedies contain acetaminophen, the active ingredient in Tylenol. Acetaminophen in high dosages can cause serious liver injury. When taking combination medicines, always check the ingredients for the presence of acetaminophen, and be sure never to take more than the recommended daily dose of 4g acetaminophen. Note on Antihistamines. Many combination remedies contain antihistamines. Antihistamines are used principally for allergies and the common cold. First-generation antihistamines may reduce cold symptoms by drying out nasal passages; this may help with a running nose caused by colds (but it also interferes with treatments of sinusitis). Their benefits for the cold are likely to be due to the drowsiness they cause. Such antihistamines include Benadryl, Tavist, and Chlor-Trimeton. The newer, second-generation antihistamines (Claritin, Allegra, Zyrtec) do not have these effects and also appear to have no benefits against colds. Herbs and Supplements Herbal remedies and dietary supplements are not regulated by the FDA. This means that manufacturers and distributors do not need FDA approval to sell their products. However, any substance that affects the body's chemistry can, like any drug, produce side effects that may be harmful. There have been numerous reported cases of serious and even deadly side effects from herbal products. The following are special concerns for people taking natural remedies for colds or influenza: - Echinacea is commonly taken to prevent onset and ease symptoms of colds or flu. High quality studies have failed to show that this herb helps prevent or treat colds. In addition, some people are allergic to echinacea. People who have autoimmune diseases or plant allergies should avoid it. There have been a few reports of people experiencing a skin reaction to this herb. This particular reaction, called erythema nodosum, is characterized by tender, red nodules under the skin. - Chinese herbal cold and allergy products can contain trace amounts of aristolochic acid, a chemical that causes kidney damage and cancer. Many herbal remedies imported from Asia may contain potent pharmaceuticals, such as phenacetin and steroids, as well as toxic metals. - The use of elderberry extract has been shown in laboratory studies to inhibit the activity of certain viruses, including flu viruses. A small randomized controlled study in humans has shown elderberry extract shortened the duration of flu symptoms in participantsHowever, larger studies are needed to confirm these observations. Vaccines are available to prevent influenza (See Viral Influenza Vaccines section in this report). For mild influenza, symptom relief is similar to that for colds. Who Needs Antiviral Drugs Two classes of antiviral agents have been developed to treat influenza: neuraminidase inhibitors and M2 inhibitors. 
These drugs can shorten symptoms, but there is no indication that they can prevent or reduce complications such as pneumonia. They do not help if they are started after the first 36 hours of illness. Because of emerging drug resistance, some experts suggest these drugs be reserved for severely ill patients or those at high risk. Most people who get seasonal or H1N1 flu will likely recover without needing medical care. Doctors, however, can prescribe antiviral drugs to treat people who become very sick with the flu or are at high risk for flu complications. If you need treatment for the flu, the CDC recommends that your doctor give you zanamivir (Relenza) or oseltamivir (Tamiflu). These drugs work best if you receive them within 2 days of becoming ill. You may get them later if you are very sick or if you have a high risk for complications.

Those at high risk for complications, and therefore more likely to need treatment, include:
- People with weakened immune systems, such as patients being treated for AIDS or cancer
- Elderly patients, particularly patients in nursing homes
- Very young children (it may be difficult to tell whether pneumonia is related to influenza or caused by respiratory syncytial virus [RSV])
- Hospitalized patients and anyone with serious medical conditions, such as diabetes, heart, circulation, or lung disorders, particularly chronic lung disease
- Drug abusers who use needles
- Pregnant women, especially those suspected of having H1N1 flu

To prevent infection with H1N1 flu, people who are at risk for complications and living in the same house as someone diagnosed with the virus should ask their doctor if they also need a prescription for these medicines.

Anti-Viral Drugs: Neuraminidase Inhibitors

Brands and Benefits. Zanamivir (Relenza) and oseltamivir (Tamiflu) are neuraminidase inhibitors. They are newer agents that have been designed to block a key viral enzyme, neuraminidase, which is involved with viral replication. While effective, their overall benefit is modest. Important points about the use of these drugs:
- The main benefit of these drugs is a reduction in the length of symptoms by about one day, and only when started within 48 hours after symptoms become evident. They may be used for treating both A and B strains of influenza.
- They may help reduce transmission of the virus.
- Both show some benefits for preventing influenza. Only oseltamivir has been approved for this purpose, however, in people over the age of 1 year.
- They reduce complications of influenza, and decrease mortality when given within the first 4 days of onset of symptoms.
- Oseltamivir is the only drug studied in avian flu cases. Although it is active in lab experiments, it has not been successful clinically. Experience is very limited, however, and it is not clear whether people infected with avian flu received the drug in time for it to be useful.

Limitations and Side Effects. Although they have many advantages compared to the M2 inhibitors, neuraminidase inhibitors are much more expensive. They also need to be taken within 2 days of the start of symptoms to be effective. Neither neuraminidase inhibitor is effective against influenza-like illness (one that is not caused by an influenza virus). There are also some differences between the two drugs that could be significant for some individuals:
- Zanamivir is administered through an inhaler. People with asthma or other lung disorders may experience airway spasms and should use this drug with caution.
Side effects are generally minor in most patients. It is important to make sure that elderly patients are able to properly use the zanamivir inhaler device. Zanamivir should ONLY be used in its original inhaler device.
- Oseltamivir comes in capsule and liquid form. Side effects are also minor, but about 10 to 15% of patients experience nausea and vomiting. Patients with kidney dysfunction should take lower doses.

The current use of neuraminidase inhibitors in different age and patient groups is as follows:
- Adults: Both drugs are approved for treatment in adult patients.
- Children: Oseltamivir is approved for treatment in children 2 weeks and older. Studies report significant reduction in symptoms and in the incidence of ear infections in this population. The American Academy of Pediatrics recommends the following: Therapy should be provided to children with influenza infection who are at high risk of severe infection, and to children with moderate-to-severe influenza infection who may benefit from a decrease in the duration of symptoms. Prophylaxis should be provided (1) to high-risk children who have not yet received immunization and during the 2 weeks after immunization, (2) to unimmunized family members and health care professionals with close contact with high-risk unimmunized children or infants who are younger than 6 months, and (3) for control of influenza outbreaks in unimmunized staff and children in an institutional setting. Children aged 3 to 11 months who were born at full term may receive oseltamivir for prevention. The use of this medication for prevention of influenza in full-term infants younger than 3 months of age is not recommended unless the situation is judged critical, such as when a critically ill family member is hospitalized with the flu. Oseltamivir should not be given to preterm infants.
- High-risk Patients: Recent studies indicate neuraminidase inhibitors are safe and effective in patients with serious medical problems or other conditions that put them at risk for complications of flu.

A third neuraminidase product, peramivir, is now in clinical trials. It was authorized as an emergency treatment for severely ill, hospitalized patients with H1N1 "swine" flu, but this authorization was terminated in June 2010. Peramivir is given intravenously.

Anti-Viral Drugs: M2 Inhibitors

Brands and Benefits. Amantadine (Symmetrel) and rimantadine (Flumadine) are M2 inhibitors. The following benefits may apply to the minority of strains of influenza A that remain sensitive to the drugs:
- Both offer some protection against influenza A and prevent severe illness if a person contracts the infection. (To be effective, they must be administered within 2 days of onset.)
- They may shorten the duration and lessen the severity of the flu if given within 48 hours of onset of symptoms.

Limitations. Drawbacks of M2 inhibitors include:
- They are not effective against the 2012 - 2013 flu strains.
- Viral resistance to these agents is rapidly increasing.
- M2 inhibitors are not effective against influenza B.
- Neither drug has been proven to reduce the risk for complications of the flu, including pneumonia and bronchitis.

Side Effects. Both M2 inhibitors occasionally cause nausea, vomiting, indigestion, insomnia, and hallucinations. Amantadine affects the nervous system, and about 10% of people experience nervousness, depression, anxiety, difficulty concentrating, and lightheadedness. Rimantadine is less likely to do so. Rarely, amantadine can cause seizures.
Note: Amantadine is a standard treatment for Parkinson's disease and should be continued for that condition. "Flu Shots." These vaccines use inactivated (not live) viruses. They are designed to provoke the immune system to attack antigens found on the surface of the virus. (Antigens are foreign molecules that the immune system specifically recognizes as alien and targets for attack.) Unfortunately, the antigens in these influenza viruses undergo genetic changes (called antigenic drift) over time, so they are likely to become resistant to a vaccine that worked in the previous year. Vaccines are then redesigned annually to match the current strain. - Influenza A. The influenza A virus is further categorized by primary molecular antigens (hemagglutinin and neuraminidase), which serve as the targets for the vaccines. Influenza A is a particular problem, because it can infect other species, such as pigs or chicken, and undergo major genetic changes. - Influenza B viruses tend to be more stable than influenza A viruses, but they too vary. Although influenza B has been far less common than A, a vaccine for type B is important because experts are concerned that small children will not have developed any immunity to the virus, and will experience severe flu if they are exposed to type B viruses. - Existing influenza vaccines match 2 current influenza A strains and 1 current influenza B strain. In December 2012, the FDA approved a new type of influenza vaccine, which will be used for the first time in the 2013-2014 season. This vaccine will match the 2 current strains of both influenza A and B, to provide wider protection. The vaccine is approved for ages 3 years and older. Injectable vaccines. There are 3 types of influenza injectable vaccines: - The regular killed vaccine is licensed for use in everyone 6 months and older. - The intradermal injection uses a much smaller needle, and a smaller dose of the same killed vaccine. It is injected into the skin instead of the muscle. - The high-dose injection is for people 65 and older, whose immune system is possibly weaker as a result of normal aging. This killed vaccine is identical to the other two in the strains it carries, but delivers a much higher dose of the antigens, to create a strong immune response in the recipients. Intranasal (inside the nose) vaccine. A live but weakened intranasal vaccine (FluMist) is effective and safe in healthy, non-pregnant people aged 2 to 49 years. It is known as a live, attenuated, intranasal influenza vaccine (LAIV). The vaccine is engineered to grow only in the cooler temperatures of the nasal passages, not in the warmer lungs and lower airways. It boosts the specific immune factors in the mucous membranes of the nose that fight off the actual viral infections. FluMist is given using a nasal spray. It should NOT be used in those who have asthma or in children under age 5 who have repeated wheezing episodes. Timing and Effectiveness of the Vaccine. Ideally, everyone should be vaccinated every October or November. However, it may take longer for a full supply of the vaccine to reach certain locations. In such cases, the high-risk groups should be served first. Antibodies to the flu virus usually develop within 2 weeks of vaccination, and immunity peaks within 4 to 6 weeks, then gradually wanes. - Children younger than 9 years of age, who have not been previously vaccinated or received only 1 dose prior to July 2010 should be given 2 vaccine doses, spaced 4 weeks apart. 
- It should be noted that if an individual develops flu symptoms and is accurately diagnosed in time, vaccination of the other members of the household within 36 to 48 hours affords effective protection to those individuals, according to an analysis of multiple studies. In healthy adults, immunization typically reduces the chance of getting the seasonal flu by about 70 to 90%. The current flu vaccines may be slightly less effective in certain patients, such as the elderly and those with certain chronic diseases. Some evidence suggests, however, that even in people with a weaker response, the vaccine is usually protective against serious flu complications, particularly pneumonia. Some evidence also suggests that among the elderly, a flu shot may help protect against stroke, adverse heart events, and death from all causes. Everyone aged 6 months and over should get a flu vaccine; the only exception is for those who are allergic to the vaccine. Vaccination is especially important in the following groups, who are at a high risk for complications from the flu: - People who are 50 or more years of age - People who are 6 to 49 months of age - People who have chronic lung disease, including asthma and COPD, or heart disease - People who are 18 years old or younger AND taking long-term aspirin therapy - People who have sickle cell anemia or other hemoglobin-related disorders - People who have kidney disease, anemia, diabetes, or chronic liver disease - People who have a weakened immune system (including those with cancer or HIV/AIDS) - People who receive long-term treatment with steroids for any condition - Women who are pregnant or plan to become pregnant during the flu season. Women who are pregnant should receive only the inactivated flu vaccine. (Vaccinations should usually be given after the first trimester. Exceptions may be women who are in their first trimester during flu season, because their risk from complications of the flu is higher than any theoretical risk to the baby from the vaccine) Possible side effects of the flu vaccine include: - Allergic Reaction. Newer vaccines contain very little egg protein, but an allergic reaction still may occur in people with strong allergies to eggs. A new vaccine (Flucelvax) made in animal cell culture, not in eggs, was approved by the FDA in November 2012 for people aged 18 years and older. - Soreness at the Injection Site. Up to two-thirds of people who receive the influenza vaccine develop redness or soreness at the injection site for 1 or 2 days afterward. - Flu-like Symptoms. Some people actually experience flu-like symptoms, called oculorespiratory syndrome, which include conjunctivitis, cough, wheeze, tightness in the chest, sore throat, or a combination. Such symptoms tend to occur 2 to 24 hours after the vaccination and generally last for up to 2 days. It should be noted that these symptoms are not the flu itself but an immune response to the virus proteins in the vaccine. (Anyone with a fever at the time the vaccination is scheduled, however, should wait to be immunized until the ailment has subsided.) - Guillain-Barre Syndrome. Isolated cases of a paralytic illness known as Guillain-Barre syndrome have occurred, but if there is any higher risk following the flu vaccine, it is very small (one additional case per 1 million people), and does not outweigh the benefits of the vaccine. Guillain-Barre syndrome resolves in most cases, but recovery is slow. 
There has been some question concerning influenza vaccinations because of reports that these vaccines may worsen asthma. Recent and major studies have reported, however, that the vaccination is safe for children with asthma. It is also very important for these patients to reduce their risk for respiratory diseases.

Avian Influenza Vaccine

The FDA approved the first vaccine for humans against the H5N1 influenza virus in April 2007. The vaccine, which is made from a human strain of the virus, could be used in people ages 18 to 64 to prevent the spread of the virus from human to human. The vaccine requires two doses, given about a month apart. It will not be sold commercially, but instead is being purchased by the U.S. government to be stockpiled and distributed to public health officials in the event of an outbreak of avian flu. The vaccine led to the development of antibodies in 45% of those who received the higher dose studied. The most common side effects reported were pain at the injection site, headache, and muscle pain. Research on the vaccine is continuing. A new vaccine, currently in clinical trials, is made from artificial virus-like particles -- a collection of proteins that look like the outside of the virus but are made in the lab and cannot reproduce.

Who Needs Antibiotics

How Is Strep Throat Treated?

Strep throat infections require antibiotics. Antibiotics prevent a serious complication called rheumatic fever, which can result in permanent damage to the heart. Fortunately, this complication rarely occurs in the United States anymore. Antibiotic treatment of strep throat will almost always prevent this complication. In addition, antibiotics shorten the recovery time from strep throat. The following antibiotics are generally used to treat strep throat:
- Penicillin is usually the antibiotic of choice unless the patient is allergic to it. A full 10 days of treatment may be necessary to clear the infection. Amoxicillin, a form of penicillin, is proving to be effective when taken in a single daily dose for 10 days.
- Macrolide antibiotics. Erythromycin is known as a macrolide antibiotic and is an appropriate choice for patients with penicillin allergies. A 10-day regimen is needed to clear the infection. The drug often causes gastrointestinal distress. Another macrolide, azithromycin, can be given as a single daily dose and is effective in 5 days. It has fewer side effects than erythromycin but is more expensive. Bacterial resistance to macrolides is increasing.
- Cephalosporins are also very effective in eradicating the bacteria, but they may cause reactions in people with severe penicillin allergies.

Antibiotics are often prescribed inappropriately for non-strep sore throats. Studies indicate that fewer than half of adults, and far fewer of the children, with even strong signs and symptoms of strep throat actually have strep infections. Parents should be comforted that a delay in antibiotic treatment while waiting for lab results does not increase the risk that the child will develop serious long-term complications, including acute rheumatic fever. If a patient is severely ill, however, it is reasonable to begin administering antibiotics before the results are back. If the culture is negative (there is no evidence of bacteria), the doctor should call the family to make certain the patient stops taking the antibiotics and any remaining pills are discarded.
Children who have a sore throat and who have had rheumatic fever in the past should receive antibiotics immediately, even before culture results are back. Children with a sore throat who have a family member with strep throat or rheumatic fever should also receive immediate antibiotic treatment. The intense and widespread use of antibiotics is leading to a serious global problem of antibiotic resistance. The inappropriate use of powerful newer antibiotics for conditions such as colds or sore throats poses a particular risk for the development of resistant strains of bacteria. For example, the number of cases of methicillin-resistant Staphylococcus aureus (MRSA) is increasing in people who have no known risk factors. (MRSA can cause severe skin infections.) In 2006, rates of Neisseria gonorrhoeae resistance to the fluoroquinolone antibiotics family exceeded 10%. The CDC no longer recommends treating gonorrhea infections with fluoroquinolone first. When Antibiotics Are Needed for Upper Respiratory Infections. Antibiotics do not affect viruses and, in healthy individuals, these drugs are not necessary or helpful for influenza or colds, even with persistent cough and thick, green mucus. In one disturbing study, antibiotics were prescribed for nearly half of children who went to the doctor for a common cold. Antibiotics may be required for upper respiratory tract infections only under certain situations, such as the following: - Patients, particularly small children or elderly people, who have medical conditions that put them at high risk for complications from any respiratory tract infections, may sometimes be given antibiotics. - Patients with severe sinusitis that does not clear up within 7 days (some experts say 10 days) and whose symptoms include one or more of the following: green and thick nasal discharge, facial pain, or tooth pain or tenderness. [For more information, see In-Depth Report # 62: Sinusitis.] - Some children with middle ear infections, although experts differ on who will benefit. Some experts recommend that only children under the age of 2 years should be treated with antibiotics, and children over 2 should be treated on a case-by-case basis. [For more information, see In-Depth Report # 78: Ear Infections.] - Patients with strep throat or severe sore throat that involves fever, swollen lymph nodes, and absence of cough. (Strep throat makes up only 10 to 15% of all sore throat cases.) Patients at Highest Risk for Infection with Resistant Bacteria Strains. Some patients are at greater risk for developing an infection resistant to common antibiotics. At this time, the average person is not endangered by this problem. Risk factors include: - Very old or very young age - Exposure to patients with drug-resistant infection - Hospitalization in intensive care units - History of an invasive surgical procedure - Staying in the hospital - Prolonged course of antibiotics, particularly within the past 4 to 6 weeks - Serious wounds - Tubes down the throat, catheters, or intravenous (I.V.) lines Children at higher risk for antibiotic resistance are those who attend day care, who are exposed to cigarette smoke, who were bottle-fed, and who had siblings with recurrent ear infections. What the Health Care Community Is Doing. Prescribing antibiotics only when necessary is the most important step in restoring bacterial strains that are susceptible to antibiotics. Encouraging studies are reporting that inappropriate antibiotic prescriptions are on the decline. 
Prescriptions for other common respiratory infections, such as otitis media, sore throat, acute bronchitis, and colds and flus, have been decreasing.

What Patients and Parents Can Do. Patients and parents can also help with the following tips:
- Use home or over-the-counter remedies to relieve symptoms of mild upper respiratory tract infections.
- Realize that antibiotics will not shorten the course of a viral infection. It is important for patients and parents to understand that although antibiotics may bring a sense of security, they provide no significant benefit for a person with a viral infection, and overuse can contribute to the growing problem of resistant bacteria.
- Don't pressure a doctor into prescribing an antibiotic if it is clearly inappropriate. The doctor very often will give in.
- If a child needs an antibiotic, ask the doctor whether it is appropriate to use high-dose, short-term antibiotics, which may lower the risk for developing resistant strains.
- If an antibiotic is prescribed, take the full course, even if you feel better before finishing it.

Resources
- www.cdc.gov/flu -- U.S. Centers for Disease Control and Prevention
- www.niaid.nih.gov -- National Institute for Allergy and Infectious Diseases
- www.who.int/csr/disease/influenza/en -- World Health Organization
- www.cdc.gov/vaccines -- National Immunization Program
- www.immunize.org -- Immunization Action Coalition
- www.entnet.org -- American Academy of Otolaryngology -- Head and Neck Surgery
- www.cdc.gov/flu/avianflu -- Avian Influenza Information

References

Altamimi S, Khalil A, Khalaiwi KA, Milner R, Pusic MV, Al Othman MA. Short versus standard duration antibiotic therapy for acute streptococcal pharyngitis in children. Cochrane Database of Systematic Reviews 2009, Issue 1. Art. No.: CD004872.

American Academy of Pediatrics Committee on Infectious Diseases. Recommendations for prevention and control of influenza in children, 2012-2013. Pediatrics. 2012;130(4):780-792.

Burch J. Prescription of anti-influenza drugs for healthy adults: A systematic review and meta-analysis. Lancet Infect Dis. 2009;9(9):537-545.

Centers for Disease Control and Prevention (CDC). Prevention and control of influenza with vaccines: recommendations of the Advisory Committee on Immunization Practices (ACIP) -- United States, 2012-2013 Influenza Season. MMWR. 2012;61(32):613-618.

Centers for Disease Control and Prevention. Key Facts About Seasonal Influenza (Flu). Available online. Last accessed 1/10/2013.

Centers for Disease Control and Prevention. Influenza Prevention & Control Recommendations: Vaccination of Specific Populations. Available online. Last accessed 1/10/2013.

Centers for Disease Control and Prevention. Update to CDC's Sexually Transmitted Diseases Treatment Guidelines, 2010: Oral Cephalosporins No Longer a Recommended Treatment for Gonococcal Infections. MMWR. 2012;61(31):590-594.

Centers for Disease Control and Prevention. Influenza Antiviral Medications: Summary for Clinicians. Available online. Last accessed 1/10/2013.

Chan TV. The patient with sore throat. Med Clin North Am. 2010;94:923-943.

D'Cruze H, Arroll B, Kenealy T. Is intranasal zinc effective and safe for the common cold? A systematic review and meta-analysis. J Prim Health Care. 2009;1(2):134-139.

GlaxoSmithKline. RELENZA prescribing information. December 2010.

Interagency Task Force on Antimicrobial Resistance. A Public Health Action Plan to Combat Antimicrobial Resistance. 2012 Update. Available online. Last accessed 1/13/2013.

Jefferson T, Jones M, Doshi P, Del Mar C. Neuraminidase inhibitors for preventing and treating influenza in healthy adults: systematic review and meta-analysis. BMJ. 2009;339:b5106.

Khurana S, Wu J, Verma N, et al. H5N1 virus-like particle vaccine elicits cross-reactive neutralizing antibodies that preferentially bind to the oligomeric form of influenza virus hemagglutinin in humans. J Virol. 2011;85:10945-10954.

Science M, Johnstone J, Roth DE, Guyatt G, Loeb M. Zinc for the treatment of the common cold: a systematic review and meta-analysis of randomized controlled trials. CMAJ. 2012;184(10):E551-61.

Shah SA, Sander S, White CM, Rinaldi M, Coleman CI. Evaluation of echinacea for the prevention and treatment of the common cold: a meta-analysis. Lancet Infect Dis. 2007;7(7):473-80.

Shaikh N, Leonard E, Martin JM. Prevalence of streptococcal pharyngitis and streptococcal carriage in children: a meta-analysis. Pediatrics. 2010;126(3):e557-564.

Thompson MG, et al. Updated estimates of mortality associated with seasonal influenza through the 2006-2007 influenza season. MMWR. 2010;59(33):1057-1062.

Turner RB. The common cold. In: Mandell GL, Bennett JE, Dolin R, eds. Principles and Practice of Infectious Diseases. 7th ed. Philadelphia, Pa: Elsevier Churchill Livingstone; 2009:chap 53.

U.S. Food and Drug Administration. FDA Clears Rapid Test for Avian Influenza A Virus in Humans. April 7, 2009. Available online.

U.S. Food and Drug Administration. Nonprescription Drugs and Pediatric Advisory Committee Meeting. Joint Meeting of the Nonprescription Drugs Advisory Committee and the Pediatric Advisory Committee, October 18-19, 2007. Available online. Last accessed 1/13/2013.

U.S. Food and Drug Administration. FDA expands Tamiflu's use to treat children younger than 1 year [Press Release]. Available online. Last accessed 1/13/2013.

U.S. Food and Drug Administration. FDA approves first seasonal influenza vaccine manufactured using cell culture technology [Press Release]. Available online. Last accessed 1/13/2013.

World Health Organization. Cumulative Number of Confirmed Human Cases of Avian Influenza A/(H5N1) Reported to WHO, December 17, 2012. Available online. Last accessed 1/06/2013.

Reviewed By: Harvey Simon, MD, Editor-in-Chief, Associate Professor of Medicine, Harvard Medical School; Physician, Massachusetts General Hospital. Also reviewed by David Zieve, MD, MHA, Medical Director, A.D.A.M., Inc.
Since we are going to do a series of tutorials on Packet Tracer, we need to be familiar with the various networking components and devices. In this tutorial, we are going to discuss some important devices that are used in networking. All networks are made up of basic hardware building blocks that interconnect network nodes, such as Network Interface Cards (NICs), Bridges, Hubs, Switches, and Routers. These devices also need cables to connect them.

Network interface cards

A NIC (network interface card) is a piece of computer hardware designed to allow computers to communicate over a computer network. It provides physical access to a networking medium and often provides a low-level addressing system through the use of MAC addresses. It allows users to connect to each other either by using cables or wirelessly, and it handles the transfer of data between the computer and the network medium. Every device on a network that needs to transmit and receive data must have a network interface card (NIC) installed. They are sometimes called network adapters, and are usually installed into one of the computer's expansion slots in the same way as a sound or graphics card. The NIC includes a transceiver (a transmitter and receiver combined). The transceiver allows a network device to transmit and receive data via the transmission medium.

Each NIC has a unique 48-bit Media Access Control (MAC) address burned into its ROM during manufacture. The first 24 bits make up a block code known as the Organisationally Unique Identifier (OUI), which is issued to manufacturers of NICs and identifies the manufacturer. The issue of OUIs to organisations is administered by the Institute of Electrical and Electronics Engineers (IEEE). The last 24 bits constitute a sequential number issued by the manufacturer. The MAC address is sometimes called a hardware address or physical address, and uniquely identifies the network adapter. It is used by many data link layer communications protocols, including Ethernet, the 802.11 wireless protocol, and Bluetooth. The use of a 48-bit address allows for 2^48 (281,474,976,710,656) unique addresses. A MAC address is usually shown in hexadecimal format, with each octet separated by a dash or colon, for example: 00-60-55-93-A2-B7.

A repeater is an electronic device that receives a signal and retransmits it at a higher power level, or to the other side of an obstruction, so that the signal can cover longer distances without degradation. In most twisted pair Ethernet configurations, repeaters are required for cable runs longer than 100 meters. As signals travel along a transmission medium there will be a loss of signal strength, i.e. attenuation. A repeater is a non-intelligent network device that receives a signal on one of its ports, regenerates the signal, and then retransmits the signal on all of its remaining ports. Repeaters can extend the length of a network (but not the capacity) by connecting two network segments. Repeaters cannot be used to extend a network beyond the limitations of its underlying architecture, or to connect network segments that use different network access methods. They can, however, connect different media types, and may be able to link bridge segments with different data rates. Repeaters are used to boost signals in coaxial and twisted pair cable and in optical fibre lines.
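The OUI/device split described above is easy to demonstrate in code. The following short Python sketch is purely illustrative (the function name parse_mac and the sample address are mine, not part of the tutorial); it splits a MAC address written in dash or colon notation into its 24-bit manufacturer (OUI) part and 24-bit device part.

def parse_mac(mac: str) -> dict:
    """Split a MAC address into its OUI (manufacturer) and device-specific halves."""
    # Accept dash- or colon-separated notation, e.g. "00-60-55-93-A2-B7".
    octets = mac.replace(":", "-").split("-")
    if len(octets) != 6 or not all(len(o) == 2 for o in octets):
        raise ValueError("expected six two-digit hexadecimal octets")
    value = int("".join(octets), 16)  # the full 48-bit address as an integer
    return {
        "oui": "-".join(octets[:3]).upper(),        # first 24 bits: manufacturer
        "device_id": "-".join(octets[3:]).upper(),  # last 24 bits: serial number
        "as_int": value,
    }

if __name__ == "__main__":
    print(parse_mac("00-60-55-93-A2-B7"))
    print(f"Total possible 48-bit addresses: {2 ** 48:,}")  # 281,474,976,710,656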
An electrical signal in a cable gets weaker the further it travels, due to energy dissipated in conductor resistance and dielectric losses. Similarly, a light signal traveling through an optical fiber suffers attenuation due to scattering and absorption. In long cable runs, repeaters are used to periodically regenerate and strengthen the signal.

A hub contains multiple ports. When a packet arrives at one port, it is copied to all the ports of the hub for transmission. In a hub, a frame is passed along or "broadcast" to every one of its ports. It doesn't matter that the frame is only destined for one port: the hub has no way of distinguishing which port a frame should be sent to, so passing it along to every port ensures that it will reach its intended destination. This places a lot of traffic on the network and can lead to poor network response times. Additionally, a 10/100 Mbps hub must share its bandwidth with each and every one of its ports. So when only one PC is transmitting, it will have access to the maximum available bandwidth. If, however, multiple PCs are transmitting, then that bandwidth will need to be divided among all of those systems, which will degrade performance.

A network bridge connects multiple network segments at the data link layer (layer 2) of the OSI model. Bridges do not copy traffic to all ports, as hubs do, but learn which MAC addresses are reachable through specific ports. Once the bridge associates a port with an address, it will send traffic for that address only to that port. Bridges do send broadcasts to all ports except the one on which the broadcast was received. Bridges learn the association of ports and addresses by examining the source addresses of the frames they see on their various ports: once a frame arrives through a port, its source address is stored, and the bridge assumes that MAC address is associated with that port. The first time a previously unknown destination address is seen, the bridge will forward the frame to all ports other than the one on which the frame arrived. Bridges don't know anything about protocols, but just forward data depending on the destination address in the frame. This address is not the IP address, but the MAC (Media Access Control) address that is unique to each network adapter card. A bridge is used primarily to connect two local-area networks (LANs), or two segments of the same LAN that use the same protocol. Bridges can extend the length of a network, but unlike repeaters they can also extend the capacity of a network, since each bridge port isolates the traffic of its own segment. When bridges are powered on in an Ethernet network, they start to learn the network's topology by analysing the source addresses of incoming frames from all attached network segments (a process called backward learning). Over a period of time, they build up a forwarding table. The bridge monitors all traffic on the segments it connects, and checks the source and destination address of each frame against its forwarding table. When the bridge first becomes operational, the table is blank, but as data is transmitted back and forth, the bridge adds the source MAC address of any incoming frame to the table and associates the address with the port on which the frame arrives. In this way, the bridge quickly builds up a complete picture of the network topology. If the bridge does not know the destination segment for an incoming frame, it will forward the frame to all attached segments except the segment on which the frame was transmitted.
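Backward learning is simple enough to model in a few lines of code. The following is a toy Python sketch of the process just described (the class and method names are invented for illustration; a real bridge also ages out stale entries, which is omitted here):

```python
class LearningBridge:
    """Toy model of a bridge's backward learning.

    The bridge records which port each source MAC address was seen on,
    forwards known destinations out of the associated port only, and
    floods unknown destinations out of every other port.
    """
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.table = {}  # forwarding table: MAC address -> port

    def handle_frame(self, src, dst, in_port):
        self.table[src] = in_port          # learn (or refresh) the source
        if dst in self.table:
            return [self.table[dst]]       # known: forward to one port only
        # unknown destination: flood to all ports except the ingress port
        return [p for p in range(self.num_ports) if p != in_port]

bridge = LearningBridge(num_ports=4)
print(bridge.handle_frame("aa:aa", "bb:bb", in_port=0))  # flooded: [1, 2, 3]
print(bridge.handle_frame("bb:bb", "aa:aa", in_port=2))  # learned: [0]
```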
Bridges reduce the amount of traffic on individual segments by acting as a filter, isolating intra-segment traffic. This can greatly improve response times.

The switch is a relatively new network device that has replaced both hubs and bridges in LANs. A switch uses an internal address table to route incoming data frames via the port associated with their destination MAC address. Switches can be used to connect together a number of end-user devices such as workstations, or to interconnect multiple network segments. A switch that interconnects end-user devices is often called a workgroup switch. Switches provide dedicated full-duplex links for every possible pairing of ports, effectively giving each attached device its own network segment. This significantly reduces the number of intra-segment and inter-segment collisions. Strictly speaking, a switch is not capable of routing traffic based on IP address (layer 3), which is necessary for communicating between network segments or within a large or complex LAN. Some switches are capable of routing based on IP addresses, but are still called switches as a marketing term. A switch normally has numerous ports, with the intention being that most or all of the network is connected directly to the switch, or to another switch that is in turn connected to it.

Routers are networking devices that forward data packets between networks, using headers and forwarding tables to determine the best path to forward the packets. A network environment that consists of several interconnected networks employing different network protocols and architectures requires a sophisticated device to manage the flow of traffic between these diverse networks. Such a device, sometimes referred to as an intermediate system, but more commonly called a router, must be able to determine how to get incoming packets (or datagrams) to the destination network by the most efficient route. Routers gather information about the networks to which they are connected, and can share this information with routers on other networks. The information gathered is stored in the router's internal routing table, and includes both the routing information itself and the current status of various network links. Routers exchange this routing information using special routing protocols. A router is connected to at least two networks, commonly two LANs or WANs, or a LAN and its ISP's network. Routers are located at gateways, the places where two or more networks connect, and are the critical devices that keep data flowing between networks and keep the networks connected to the Internet. When data is sent between locations on one network, or from one network to a second network, it is the router that sees the data and directs it to the correct location. The router accomplishes this by using headers and forwarding tables to determine the best path for forwarding the data packets, and routers also use protocols such as ICMP to communicate with each other and help configure the best route between any two hosts. The Internet itself is a global network connecting millions of computers and smaller networks. There are various routing protocols suited to different environments; these will be discussed later.
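A router's basic forwarding decision is a longest-prefix match against its table. Here is a minimal, hedged Python sketch using the standard library's ipaddress module; the prefixes and next-hop addresses are made-up example values, not a real configuration:

```python
import ipaddress

# A toy forwarding table: destination prefix -> next hop (example values).
routes = {
    ipaddress.ip_network("10.0.0.0/8"):  "192.168.1.1",
    ipaddress.ip_network("10.1.0.0/16"): "192.168.1.2",
    ipaddress.ip_network("0.0.0.0/0"):   "192.168.1.254",  # default route
}

def next_hop(dst):
    """Pick the matching route with the longest prefix, as real routers do."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop("10.1.2.3"))  # 192.168.1.2 (the /16 beats the /8)
print(next_hop("8.8.8.8"))   # 192.168.1.254 (falls through to the default)
```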
Contents:

The integral and the derivative are the two fundamental building blocks of calculus. Put simply, an integral is an area under a curve. Integrals come in two types: definite and indefinite. Definite integrals give a result (a number that represents the area), as opposed to indefinite integrals, which are represented by formulas. Indefinite integrals (also called antiderivatives) do not have limits/bounds of integration, while definite integrals do have bounds. Read on below for more definitions, how-to articles and videos.
- Additive Interval Property
- Divergent Integrals
- Double Integrals
- Elliptic Integrals
- Isotropic / Anisotropic Definition, Examples
- Fundamental Theorem of Calculus
- Fresnel Integrals
- Fubini’s Theorem
- Gauge Integral
- Improper Integrals
- Integral Bounds / Limits of Integration
- Integral Function
- Integral Kernel
- Integral Operator
- Iterated Integrals
- Lebesgue Integration
- Line Integral
- Mean Value Theorem for Integrals
- Multiple Integrals: Definition, Examples
- Numerical Quadrature (Numerical Integration)
- Order of Integration
- Ordinary Integral
- Probability Integral
- Product Integral
- Quadruple Integral: Definition, Uses of
- Riemann Integral
- Singular Integral: Simple Definition
- Sum Rule
- Stratonovich Integral: Definition
- Triple Integral (Volume Integral)
General How-To Integrals
- Area Between Curve and Y-Axis
- Area Function
- Indefinite Integrals of power functions
- Constant Rule of Integration
- Finding definite integrals
- Integration Using Long Division: Definition, Examples
- Integration by parts
- Integration by Separation
- Log Rule for Integration
- Integral of a Natural Log
- Integrate with U Substitution
- How to Integrate Y With Respect to X
- Method of Partial Fractions
- Rationalizing Substitutions
- Tabular Integration (The Tabular Method)
- Trig Substitution
Integral Calculus Advanced Problem Solving
- Find Total Distance Traveled
- How to find the volume of an egg
- How to prove the volume of a cone
- How to find the area between two curves

An elliptic integral is an integral of the form

∫ R(w, x) dx

Here R is a rational function of its two arguments, w and x, and these two arguments are related to each other by these conditions:
- w² is a cubic function or quartic function in x, i.e. w² = f(x) = a₀x⁴ + a₁x³ + a₂x² + a₃x + a₄
- R(w, x) has at least one odd power of w
- w² has no repeated roots

In a way, these integrals are generalizations of inverse trigonometric functions. They provide solutions to a wider class of problems than inverse trigonometric functions do: simple problems like calculating the position of a pendulum, as well as more complicated problems in electromagnetism and gravitation.

Reducing Elliptic Integrals
As a rule, elliptic integrals can’t be written in terms of elementary functions. There are some special integrals, though: the Legendre elliptic integrals, or the canonical elliptic integrals of the first, second and third kinds. Every elliptic integral can be written as a sum of elementary functions and linear combinations of these. These integrals get their name because they were first studied by mathematicians looking to calculate the arc length of an ellipse.
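Although elliptic integrals can't generally be reduced to elementary functions, they are easy to evaluate numerically. As a sketch of the pendulum application mentioned above, the following Python snippet uses SciPy's complete elliptic integral of the first kind (scipy.special.ellipk, which takes the parameter m = k²); the pendulum length and release angle are arbitrary example values:

```python
import numpy as np
from scipy.special import ellipk

g, L = 9.81, 1.0                 # gravity (m/s^2) and pendulum length (m)
theta0 = np.radians(60)          # release angle (example value)

# Exact period of a pendulum via the complete elliptic integral K(m):
#   T = 4 * sqrt(L/g) * K(sin^2(theta0 / 2))
T_exact = 4 * np.sqrt(L / g) * ellipk(np.sin(theta0 / 2) ** 2)
T_small = 2 * np.pi * np.sqrt(L / g)   # small-angle approximation

print(f"exact: {T_exact:.4f} s, small-angle: {T_small:.4f} s")
# At a 60-degree release angle the exact period is about 7% longer.
```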
The first recorded study of this problem was in 1655 by John Wallis, and shortly after by Isaac Newton, who both published an infinite series expansion that gave the arc length of an ellipse. Later, the French mathematician Adrien Marie Legendre (who lived between 1752 and 1833) spent nearly forty years researching elliptic integrals; he was the first to classify elliptic integrals and find ways of defining them in terms of simpler functions.

Elliptic Integrals, Elliptic Functions, and Theta Functions. Retrieved from http://www.mhtlab.uwaterloo.ca/courses/me755/web_chap3.pdf on April 22, 2019.
Carlson, B. C. NIST Digital Library of Mathematical Functions. Chapter 19: Elliptic Integrals. Release 1.0.22 of 2019-03-15. F. W. J. Olver, A. B. Olde Daalhuis, D. W. Lozier, B. I. Schneider, R. F. Boisvert, C. W. Clark, B. R. Miller, and B. V. Saunders, eds. Retrieved from https://dlmf.nist.gov/19 on April 22, 2019.
Hall, L. (1995). Special Functions. Retrieved May 15, 2019 from: http://web.mst.edu/~lmhall/SPFNS/sfch3.pdf

An integral kernel is a given (known) function of two variables that appears in an integral equation, where it multiplies the unknown function under the integral sign. The kernel is symmetric if K(x, y) = K(y, x).

Notation for the Integral Kernel
The kernel is denoted by K(x, y). As well as K(x, y), you might also see slightly different notation depending on what variables are used in the equation. For example:
- A(x, y),
- Ta(x, y), or
- K(x, x′).
What notation is used sometimes depends on exactly what the kernel is representing. Some specific representations include (Wolf, 2013):
- A translation operation 𝕋a: Ta(x, y),
- Inversions: I0(x, y),
- The operator of differentiation: ∇(x, y).

Avramidi (2015) describes an integral operator on the Hilbert space L²([a, b]) as follows:

(Kf)(x) = ∫_a^b K(x, x′) f(x′) dx′

where the function K(x, x′) is the integral kernel. Note that the author also uses “K” on the left-hand side of the equation to denote the operator, a distinction that “…shouldn’t cause any confusion because the meaning of the symbol is usually clear from the context”.

Integral Kernel, or Symbol?
Although the term “integral kernel” is widely used, many authors prefer the alternate term symbol instead, to avoid confusion with many other meanings for the word kernel in mathematics. For example, in geometry, a kernel is the set of points inside a polygon from which the entire boundary of the polygon is visible; in statistics, a kernel is a weighting function used to estimate probability density functions for random variables in kernel density estimation.

Integral Kernel: References
Avramidi, I. (2015). Heat Kernel Method and its Applications, 1st ed. Birkhäuser.
Paulsen, V. & Raghupathi, M. (2016). An Introduction to the Theory of Reproducing Kernel Hilbert Spaces.
Wolf, K. (2013). Integral Transforms in Science and Engineering. Springer Science & Business Media.

Generally speaking, an integral operator is an operator that results in integration or finding the area under a curve. It is defined by the integral symbol: ∫. Its counterpart in calculus is the differential operator (d/dx), which results in differentiation. The integral operator is sometimes called a standard integral operator to separate it from special cases used in complex analysis, operator theory and other areas of mathematical analysis. The term integral operator is also used as a synonym for an integral transform, which is defined via an integral and maps one function to another.
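To make the operator definition concrete, here is a small numerical sketch in Python: the interval [a, b] is discretized so that the kernel K(x, x′) becomes a matrix and applying the operator becomes a matrix-vector product. The Gaussian kernel and the sine input are arbitrary illustrative choices, not taken from the references above:

```python
# Numerical sketch of (Kf)(x) = integral over [a, b] of K(x, x') f(x') dx'.
import numpy as np

a, b, n = 0.0, 1.0, 200
x = np.linspace(a, b, n)
dx = (b - a) / (n - 1)

# A symmetric example kernel, K(x, x') = exp(-(x - x')^2) = K(x', x).
K = np.exp(-(x[:, None] - x[None, :]) ** 2)
f = np.sin(2 * np.pi * x)        # the function being transformed

# Simple Riemann-sum quadrature: each output value is a weighted sum.
Kf = K @ f * dx
print(Kf[:3])
```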
Special Cases of Integral Operator
The first operators appeared at the beginning of the 20th century, in the early days of the theory of complex-variable functions. Many operators have been developed over the years and are defined very narrowly, for special circumstances. They include:
- Alexander integral operator: defined for a class of analytic functions on the unit disk D.
- Fredholm operator: arises in the Fredholm equation, an integral equation where the term containing the kernel function has constants as limits of integration.
- The Volterra integral equation is similar to the Fredholm equation, except that it has variable integral limits.
- A variety of pseudo-differential operators are used to study elliptic differential equations. These operators, as well as Fourier integral operators, make it possible to handle differential operators with variable coefficients in about the same way as differential operators with constant coefficients, using Fourier transforms.

Integral Operator: References
Anderson, A. Some Closed Range Integral Operators on Spaces of Analytic Functions. Retrieved April 23, 2021 from: http://www2.hawaii.edu/~austina/documents/research/aatgpaper2.5.1.pdf
Gao, C. (1992). On the Starlikeness of the Alexander Integral Operator. Proc. Japan Acad. 68, Ser. A.
Hörmander, L. Fourier Integral Operators, I. Retrieved April 23, 2021 from: https://projecteuclid.org/journalArticle/Download?urlid=10.1007%2FBF02392052

How to find the area between two curves in integral calculus
Finding the area between two curves in integral calculus is a simple task if you are familiar with the rules of integration (see indefinite integral rules). The easiest way to solve this problem is to find the area under each curve by integration and then subtract one area from the other to find the difference between them. You may be presented with two main problem types. The first is when the limits of integration are given, and the second is where the limits of integration are not given.

Area Between Two Curves: Limits of Integration Given
Example problem 1: Find the area between the curves y = x and y = x² between x = 0 and x = 1.
Step 1: Find the definite integral for each equation over the range x = 0 to x = 1, using the usual integration rules to integrate each term (see: calculating definite integrals).
Step 2: Subtract the area under the lower curve from the area under the upper curve. You’ll need to visualize the curves (sketch or graph the curves if you need to); you’ll want to subtract the bottom curve from the top one. The curve on top here is f(x) = x, so: 1⁄2 – 1⁄3 = 1⁄6.

Limits of Integration NOT Given
Example problem: Find the area between the curves y = x and y = x².
Step 1: Graph the equations. In most cases, the limits of integration will be clear, especially if you’re using a TI calculator with an Intersection feature (just find the intersections of the two graphs). If you can find the intersection by graphing, skip to Step 3.
Step 2: Find the common solutions of these two equations if you cannot find the intersection by graphing (treat them as simultaneous equations). Substituting x = y into y = x² gives y = y², which has only two solutions, y = 0 and y = 1. Putting the values back into y = x gives the corresponding values of x: x = 0 when y = 0, and x = 1 when y = 1. The two points of intersection are (0, 0) and (1, 1).
Step 3: Complete the steps in Example Problem 1 (limits of integration given) to complete the calculation.
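Both versions of the problem can be checked symbolically. Here is a short SymPy sketch for the curves used in the examples above; it finds the intersections (Step 2) and then integrates top minus bottom between them:

```python
import sympy as sp

x = sp.symbols("x")
top, bottom = x, x**2            # the curves from the worked examples

# Step 2: solve y = x and y = x^2 simultaneously for the intersections.
limits = sorted(sp.solve(sp.Eq(top, bottom), x))   # [0, 1]

# Steps 1 and 3: integrate (top - bottom) between the intersections.
area = sp.integrate(top - bottom, (x, limits[0], limits[-1]))
print(limits, area)   # [0, 1] 1/6
```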
Integration by Separation
Integration by separation takes a complicated-looking fraction and breaks it down into smaller parts that are easier to integrate. A fraction that is challenging (impossible?) to integrate using the usual rules of integration can often be rewritten, using algebra, as a series of simpler fractions. These fractions can then be individually integrated, using the power rule and the common integral ∫(1/x) dx = ln|x|.

Trig substitution helps you to integrate some types of challenging functions:
- Radicals of polynomial functions, like √(4 – x²),
- Rational powers of the form n/2, e.g. (x² + 1)^(3/2).
Although trig substitution is fairly straightforward, you should use it when more common integration methods (like u substitution) have failed. The technique is very similar to u substitution: you substitute a new term (one made from integer powers of trig functions) in place of the one you have, in order to make the integration easier. At the end, you simply substitute the original function back in.

Why Is Trig Substitution a Last Resort?
Although it’s straightforward, trig substitution requires you to have a lot of background knowledge. Unlike a table of integrals, you can’t just look up an integral for a particular expression. You must be able to recognize the trigonometric identities. Let’s look at an example to see why this is so important.
Example question: Integrate ∫ 1/(1 + x²) dx.
To solve this, you need to consider all of the trig identities to see which would be a good fit. If you aren’t familiar with them, this could be a stumbling block before you’ve even started. In order to solve this particular integral, you need to recognize that it looks very similar to the trig identity 1 + tan²x = sec²x. Here are the solution steps:
Step 1: Rewrite the expression using a trig substitution (and its derivative). The goal here is to get the expression into something you can simplify with a substitution. Here, I substituted tan θ for x. As the substitution for x has been made, I also had to change the “dx” to represent the derivative of tan θ (instead of plain old derivative of “x”). So the new “dx” became sec²θ dθ.
Step 2: Simplify by using a trig identity. In this example, we’ve been heading towards changing 1 + tan²θ to sec²θ. There’s no magic here: if you chose the correct trig function in Step 1, you should already know which trig identity you’re going to use here.
Step 3: Simplify using algebra (if possible). For this example, notice that we can cancel out the sec²θ in the numerator and denominator, leaving ∫ 1 dθ.
Step 4: Integrate. The integral of a constant function is just the constant * x (or constant * θ) + C, so: ∫ 1 dθ = θ + C.
Step 5: Substitute your original term back in. In Step 1, I set x = tan θ, so θ = tan⁻¹x. Putting that back in gives the solution: = tan⁻¹x + C.

Useful Background Information
As you may be able to tell from the above example, trig substitution requires you to have some strong background skills in algebra, derivatives, and trigonometric identities. “…any teacher of Calculus will tell you that the reason that students are not successful in Calculus is not because of the Calculus, it’s because their algebra and trigonometry skills are weak” ~ Jones (2010)
Also extremely helpful:
- Integrals of Trig Functions,
- U Substitution for Trigonometric Functions.
The following table shows how to express one of the common six trig functions as a pair of other trig functions.
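The result of the worked example is easy to verify with a computer algebra system. Here is a quick SymPy sanity check (not part of the original solution steps):

```python
# Verifying the trig-substitution example: the antiderivative of
# 1/(1 + x^2) is atan(x), and differentiating it recovers the integrand.
import sympy as sp

x = sp.symbols("x")
antiderivative = sp.integrate(1 / (1 + x**2), x)
print(antiderivative)   # atan(x)

# Differentiate the result and confirm it matches the original integrand.
print(sp.simplify(sp.diff(antiderivative, x) - 1 / (1 + x**2)) == 0)  # True
```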
These may also come in handy:

Trig Substitution: References
Banner, A. (2007). The Calculus Lifesaver: All the Tools You Need to Excel at Calculus (Princeton Lifesaver Study Guides), Illustrated Edition. Princeton University Press.
Jones, J. (2010). Skills Needed for Success in Calculus 1.
Kouba, D. (2017). Finding Integrals Using the Method of Trigonometric Substitution. Retrieved November 9, 2020 from: https://www.math.ucdavis.edu/~kouba/CalcTwoDIRECTORY/trigsubdirectory/TrigSub.html

What is an Iterative Process?
An iterative process is a process which is run over and over again, repeatedly, to ultimately reach or approach a desired result. Each cycle, or repetition, is called an iteration. An iterative process may include a finite (fixed) number of iterations with a definite stopping point, or it may go on infinitely.

Iterative Process Examples
As a very simple example, counting by twos is an iterative process, because you’re adding 2 each time, over and over again. The Koch snowflake is a more complicated example of an iterative process. The first iteration is an equilateral triangle; each successive iteration is formed by adding smaller equilateral triangles to the first. A recursive formula describes a sequence or quantity by means of an iterative process. One recursive formula is the one which describes the Fibonacci sequence. The Fibonacci sequence can be defined as:
- F₀ = 0
- F₁ = 1
- Fₙ = Fₙ₋₁ + Fₙ₋₂
To find the value of a given Fibonacci number, you can run an iterative process, starting from F₀ and F₁ and finding the next number by repeatedly using the formula Fₙ = Fₙ₋₁ + Fₙ₋₂. Using this formula, F₂ is 0 + 1 = 1. To find F₄, though, you would need to run several iterations of the formula: first find F₂ = 1, then F₃ = F₂ + F₁ = 1 + 1 = 2, and finally F₄ = F₃ + F₂ = 2 + 1 = 3. For any Fibonacci number with n > 2, you would have to run n – 1 calculations to find its value using only the definition above. Calculating compound interest is another example of an iterative process: if interest is compounded yearly, we find the total interest accumulated in ten years by multiplying, year by year, 100% plus the interest rate by the previous year's balance.

An iterated integral has the general form (Rogawski, 2007):

∫_a^b ∫_c^d f(x, y) dy dx

The expression is made up of an “inner integral” and one or more outer integrals. An iterated integral with two integrals is called a double integral; a triple integral is a three-integral expression.

Solving the Iterated Integral
An iterated integral is worked much in the same way that inner functions and outer functions are worked in the chain rule for derivatives: you start by evaluating the inner function (or in this case, the inner integral), then work your way out. In other words, you’re performing iterative integration. In the generic example given above, you would integrate with respect to y first (using c, d as the bounds of integration), then work the new integral with respect to x (using a, b as the bounds of integration). For an example, see: Solving a Double Integral or Solving a Triple Integral. One of the surprising benefits of iterated integrals is that you can change the order of integration if, for example, the inner integral is impossible to evaluate. While “regular” integration is like slicing a loaf of bread along its length, changing the order of integration allows you to slice across its width instead.
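Changing the order of integration is easy to experiment with numerically. The sketch below uses scipy.integrate.dblquad to evaluate an iterated integral of the (arbitrarily chosen) integrand f(x, y) = x·y over the unit square in both orders; both give 1/4, as Fubini's theorem (discussed next) guarantees for well-behaved functions:

```python
from scipy.integrate import dblquad

# dblquad integrates the callback's *first* argument over the inner limits,
# so the callback is written with the inner variable first.
inner_y, _ = dblquad(lambda y, x: x * y, 0, 1, lambda x: 0, lambda x: 1)
inner_x, _ = dblquad(lambda x, y: x * y, 0, 1, lambda y: 0, lambda y: 1)

print(inner_y, inner_x)   # 0.25 0.25 -- same value in either order
```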
Theorems which relate multiple integrals (integrated over subsets of ℝⁿ) to iterated integrals are normally called Fubini theorems (Swartz, 2001). In simple terms, Fubini’s theorem states: “…when we have a ‘nice’ function, that the n-dimensional multiple integral of this function is the same as the n-fold iterated integral” (Tollas, 2007).

Iteration in Matlab. Retrieved from https://www.mathworks.com/content/dam/mathworks/mathworks-dot-com/moler/exm/chapters/iteration.pdf on January 8, 2018.
Rogawski, J. (2007). Multivariable Calculus. W. H. Freeman.
Rudin, W. (1970). Real and Complex Analysis. McGraw-Hill Education.
Swartz, C. (2001). Introduction to Gauge Integrals. World Scientific.
Section 12.2/12.3: Iterated Integrals and Double Integrals over General Regions. Retrieved July 3, 2020 from: https://www.radford.edu/npsigmon/courses/calculus4/mword/Section12.2-12.3notes.pdf
Tollas, L. (2007). Iterated Integrals. Article posted on the Reed College website. Retrieved July 3, 2020 from: https://blogs.reed.edu/projectproject/2017/07/14/iterated-integrals/
Launched in 1973, Skylab was the world’s first successful space station. Skylab was designed as an orbiting workshop for research on scientific matters, such as the effects of prolonged weightlessness on the human body. Because the project represented the next step towards wider space exploration, NASA threw itself into successfully putting Skylab in orbit.

What was Skylab’s mission?
Skylab was NASA's first crewed space station. This cylindrical space station was 118 feet tall and weighed 77 tons. Three separate three-man crews occupied the Skylab workshop. Skylab missions included veteran astronauts, including some who had walked on the Moon. The crews of Skylab spent more than 700 hours observing the Sun and brought home more than 175,000 solar pictures. They also provided us with important information about the biological effects of living in space for prolonged periods of time.

So what went wrong?
After the last crew left the station, Skylab continued to orbit Earth, but it was not intended to remain lifeless. NASA maintained contact with the empty station in the hope of continuing space exploration. Unfortunately, the procedure for bringing the space station gracefully back to Earth after the end of its mission had never been properly worked out. This lack of preparation became a problem when NASA engineers made a threatening discovery: the space station was losing its orbit, and earlier than anticipated. The reason was unexpectedly high sunspot activity. Solar activity caused our atmosphere to expand, so the space station met increasing drag as it circled Earth. Its orbit was clearly decaying.

Now what?
As news of the station's decaying orbit spread across the world, panic grew rapidly. Its impending fate prompted rather more interest than its successful career as a research center, which has been largely forgotten today. To this growing fear, NASA responded with a plan to redirect one of the Space Shuttle test flights to rendezvous with Skylab and fit a rocket motor that would boost it into a higher orbit. Unfortunately, Skylab was predicted to fall back to Earth before 1980, before such a mission could fly. The station's fall could be delayed, but not prevented, and NASA still had to come up with a solution to this falling giant. Skylab's decaying orbit could not be stopped, and the deteriorating space station began re-entering our atmosphere. Skylab had become a 77-ton loose cannon. Finally, on July 11, 1979, with Skylab rapidly descending from its orbit, NASA engineers fired the station's booster rockets, hoping to bring it down in the Indian Ocean. They were very close. While large chunks did go into the ocean, parts of the space station also littered populated areas of western Australia. Fortunately, no one was injured.

Finally, ‘The End’
Thus the empty Skylab spacecraft made a spectacular return to Earth, breaking up in the atmosphere and showering burning debris over the Indian Ocean and Australia. Today, recovered fragments of Skylab are in museums, and a complete backup Skylab is on public display in the National Air and Space Museum in Washington, DC. After Skylab, NASA space station/laboratory projects included Spacelab, Shuttle-Mir, and Space Station Freedom, which was later merged into the International Space Station.
The previous chapter may be found here.

Pyroclastic flow at Mount Mayon. Such flows occurred on a vastly greater scale in the Valles and Toledo eruptions.

In this chapter, we will look at the evolution of the Jemez volcanic field into a supervolcano, and the catastrophic eruptions that produced the Bandelier Tuff. In Chapter 7, we discussed the various kinds of rock formations that can be produced by volcanic ash generated from a high-silica magma. These included ignimbrites, often called tuffs, which are produced by pyroclastic flows.

High-silica magma that reaches the surface disintegrates into a mixture of hot gas and tiny shards of volcanic glass. If this mixture is not buoyant enough to rise into the atmosphere as an eruption column, then it flows across the surface as a pyroclastic flow. The pyroclastic flow is sometimes described as a nuée ardente (from the French for "glowing cloud"), since it glows bright red when seen at night. Pyroclastic flows can travel for many kilometers, destroying everything in their path, before settling to the surface and cooling to form an ignimbrite. Pyroclastic flows are relatively uncommon and very dangerous to observe, so many vulcanologists have never seen one. However, they are common in the geological record, and many of these prehistoric pyroclastic flows completely dwarf anything witnessed in human history. These huge flows are almost always associated with calderas, huge depressions in the crust of the Earth formed when the roof of the magma chamber from which the pyroclastic flows erupted collapsed. Vulcanologists have taken to describing such volcanoes as supervolcanoes.

Supervolcanoes are not actually the largest volcanic eruptions. That honor would go to flood basalt eruptions, in which vast quantities of basaltic magma are erupted through sets of fissures that can be hundreds of kilometers long, drowning areas of up to a million square kilometers in a sea of lava. However, flood basalts are even rarer than supervolcanoes, do not produce calderas, do not produce pyroclastic flows or vast quantities of ash, and in most cases a human could have outrun the advancing lava. (Though, since the most recent flood basalts erupted some millions of years ago in British Columbia, no human has ever witnessed a flood basalt.) Perhaps it is for these reasons that flood basalts have not captured the public imagination the way that supervolcanoes have, even though the largest flood basalts are far more destructive.

That no supervolcano has erupted in historic times is something to be grateful for. Such an eruption would be a global catastrophe. Apart from the devastation in the area immediately around the volcano, significant amounts of ash would be deposited for hundreds of kilometers downwind of the volcano, and the dust and gas in the stratosphere could cause catastrophic global cooling. The eruption of Tambora in 1815, the largest in human history, killed over 70,000 persons and caused "The Year Without a Summer", but Tambora produced only 160 km³ (38 cubic miles) of ash and flows and a caldera not quite 6 km (4 miles) across, versus 600 km³ (145 cubic miles) of ash and a caldera 23 km (14 miles) across for the Valles caldera. Though geologists have never witnessed so large an eruption, they have examined the rock record from prehistoric eruptions. These include old calderas that have been deeply eroded to expose their interiors. Geologists thus can extrapolate from smaller historic eruptions to understand how a supervolcanic eruption might behave.
By this point in our story, nearly two million years ago, the Jemez had already been an area of high volcanic activity for twelve million years. Repeated injections of magma into the crust had heated and softened the country rock, and numerous pockets of magma accumulated at depths of around 10 km (6 miles). These gradually coalesced into a single large magma chamber full of silica-rich magma. Computer analysis of the motion of seismic waves under today's Jemez Mountains yields a rough image of what the remains of the magma chamber look like. The most recent and sophisticated analysis shows a partially molten zone extending from 7 to 13 km (4 to 8 miles) beneath the center of the caldera and 12 to 14 km (7 to 9 miles) across. This zone is a network of molten rock, including some sill-like bodies, in which liquid magma makes up at least 13% of the total volume of the rock. More sills were detected at a depth of around 20 km (12 miles), and the crust-mantle boundary is located at a depth of 30 to 40 km (19 to 25 miles). There is some indication that the crust is underplated by primitive basalt. The most silica-rich magma, together with considerable quantities of dissolved gases (primarily water and carbon dioxide), accumulated at the top of the chamber. Because its density was relatively low, this gas-charged magma had considerable buoyancy and exerted enormous pressure on the solid rock forming the roof of the magma chamber. The stage was set for a devastating eruption.

One of the mysteries of giant calderas is called the room problem. How did all that magma rising from the depths shoulder aside enough country rock to make room for itself? At great depth, the rock of the crust is hot enough to be ductile, so that it could slowly deform to make room for the rising magma. The upper crust is more brittle and is not so easily shoved aside. One theory is that the magma didn't move at all; it formed in place, from melted country rock. This would explain why giant calderas erupt high-silica lava similar in composition to the crust. We know that there was substantial, prolonged heating of the crust in the Jemez. Perhaps the magma chamber formed and grew from repeated injection of very hot low-silica magma into its base. This is consistent with the observation that basalt flows were erupted in the early stages of the Jemez volcanic field and, later, around its periphery. The same pattern is seen at Yellowstone and other supervolcanoes. This would also explain why blobs of low-silica rock are often found in tuffs. However, as attractive as this theory is, it has some problems. The composition of the magma erupted as tuff does not quite match that of the crust, even when mixing of some low-silica magma is taken into account. Chemical analysis of the magma of the Bandelier Tuff suggests that not more than 30% of the magma came from the crust. The remainder appears to have formed by differentiation of basalt magma from the mantle. Also, the amount of heating required is enormous.

Another possibility is that the magma rose through stoping. This is a process in which solid pieces of rock break off the roof of the magma chamber. The denser rock sinks to the bottom of the magma chamber, and the magma rises to take its place. There is considerable evidence supporting this mechanism, coming particularly from ancient calderas whose magma chambers have been exposed by prolonged erosion. Very likely both repeated hot magma injection and stoping play a role in the formation of a giant magma chamber.
Whatever the mechanism, the long history of volcanic activity in the Jemez, extending over millions of years, would have provided plenty of opportunity for a large magma chamber to form. Since the catastrophic eruption of so large a magma chamber has never been witnessed, the details of how such an eruption takes place remain uncertain. Smaller eruptions, such as that of Tambora in 1815, took place from relatively small magma chambers containing intermediate-silica magma, following a conventional eruption through a central vent. Geologists long assumed that giant supervolcano eruptions took place in a similar way, starting with an eruption from the central vent of a giant volcano. Some older books on the Valles volcanic field speculated that the Valles caldera was once the location of a very large volcanic peak of enormous height, perhaps as high as the highest peaks on Earth today. Geologists no longer think so. Drilling and seismic studies have shown that the floor of the Valles caldera consists of relatively flat beds of rock that collapsed into the magma chamber more or less like a piston moving in a cylinder. The terrain in the central Jemez could not have been much higher than the peaks around the rim we see today. Geologists are more inclined to think that the eruption began with the formation of a large circular fracture (a so-called ring fracture) around a giant plug of rock that began dropping into the magma chamber even as gas and ash erupted through the fracture. Such ring fractures are exposed in ancient calderas, and we have convincing evidence of the existence of a ring fracture within the Valles caldera, as we'll see in the next chapter.

Just what triggered the breakout is also not known with certainty. However, there is considerable evidence that injection of fresh magma immediately precedes breakout. There would also likely have been some warning signs, though there was no one in northern New Mexico to observe them. One of the surest signs of an impending eruption in modern volcanoes is an increase in sulfur oxide emissions. This might have preceded the supervolcano eruption by months or even years. Another warning sign would be inflation of the volcanic field, where the surface of the ground began to rise over the magma chamber. Inflation of Mauna Loa and Kilauea in Hawaii follows so regular a pattern that eruptions can be predicted with high confidence. Finally, when breakout is imminent, the rising magma begins fracturing rock above the magma chamber, and this produces a characteristic seismic signal called harmonic tremor: viscous magma flows into a crack, increasing the stress on the crack tip until the crack abruptly advances further into the rock, and the magma then slowly flows into the new crack space. This produces a surprisingly regular pattern of seismic impulses.

In the last chapter, we examined the mafic eruptions around the perimeter of the Jemez volcanic field that peaked not quite 3 million years ago. These eruptions showed that mafic magma was continuing to rise from the mantle and heat the crust under the Jemez. As a result, an enormous magma chamber was forming under the heart of the Jemez. One imagines that this must have been accompanied by considerable volcanic activity in the area. However, the collapse of the caldera destroyed most of the evidence of early activity above what would become the Toledo caldera. We do have a few hints. The time scale for this collapse is not known with certainty, but it could not have been long.
There may be preliminary low-volume eruptions that weaken the roof of the chamber and set up the main eruption. These so-called leakage events may take place up to hundreds of thousands of years before the main event. However, the main event itself cannot last more than a few weeks, based on the character of the ash and tuff that are emplaced by the eruption. These show that individual beds of ash and tuff could not have cooled very much between flows.

Geologists mapping the rocks in San Diego Canyon discovered a small tuff bed at the very base of the Bandelier Tuff. This was originally given the sensible name of the San Diego Canyon Ignimbrite, but because it is clearly related to the later Bandelier Tuff eruptions (and is not that much older, with a radiometric date of 1.85 million years) it has been renamed the La Cueva Member of the Bandelier Tuff. This tuff is also found in the walls of the caldera, in a few places underlying Redondo Peak, and as thin beds interbedded with part of the Puye Formation north of Los Alamos and in lower Alamo Canyon in Bandelier National Monument. One readily visible exposure is in the east face of Virgin Mesa, high above State Road 4 in Cañon de San Diego.

Bandelier Tuff in Virgin Mesa, showing all three members. Looking northwest from 35 45.417N 106 42.194W

All three members of the Bandelier Tuff are visible here, resting on much older red beds of the Abo Formation. The La Cueva Member shows as a thin layer at the bottom of the cliffs, with a clearly visible notch separating it from the much thicker and nearly featureless beds of the Otowi Member. Above are multiple flows of the Tshirege Member. The best exposure for close examination of the La Cueva Member is probably in the La Cueva area itself, in the caldera wall south of the Obsidian Way subdivision. This is an area of spectacular tent rocks eroded out of the Otowi and La Cueva Members, with the Tshirege Member exposed in the cliffs overhead. Here's a tent rock showing contacts between beds.

Contact. Looking south from 35 52.017N 106 39.543W

It's a little challenging to tell the various contacts in this area apart. The geologic map does not have nearly the detail necessary, and while I have a road log discussing this location, it has a single photograph of only the uppermost Tshirege beds and a crude diagram of this area as a whole. But I believe this is the contact between the La Cueva Member below and the Otowi Member above. Here's an isolated tent rock with the beds showing clearly.

Contact. Looking north from 35 52.017N 106 39.543W

This one seems to more closely match my road log's diagram. If so, the bottom of the tent rock is the "A" bed of the La Cueva Member; the lithic-rich layer halfway up (the thin layer full of dark pebbles) marks the transition to the "B" bed of the La Cueva Member; and the cap at the very top is Otowi Member. Here's a view of the formation from the other side.

Contact. Looking west from 35 52.029N 106 39.512W

The lithic-rich bed represents a lag breccia and is an indication that the source vents for the La Cueva Member could not have been far from this location. Lag breccias represent large clasts of country rock caught up in the eruption that are too heavy to be transported far from the vent. The La Cueva Member was clearly much less voluminous than the eruptions that followed. It has been described by some geologists as an early leakage of the magma chamber. Nevertheless, the eruption was not trivial.
Drilling in the Redondo Creek area has discovered buried beds of the La Cueva Member that are up to 400 m (1,300') thick. This suggests that the source vents for the La Cueva Member were located in the area between Redondo Creek and the southwest caldera wall, and may even have formed a small caldera in this area. This preliminary eruption took place some 230,000 years prior to the first truly enormous caldera eruption.

About 1.62 million years ago, the first truly enormous eruption burst out of the Jemez volcanic field. This eruption is known as the Toledo Event, and it produced a caldera roughly coinciding with the present Valles caldera. It also emplaced the Otowi Member of the Bandelier Tuff, sometimes called the lower Bandelier Tuff in older geologic writings. The Otowi Member covers a vast area in all directions from the caldera, although there are areas where it was clearly channeled by existing high ground. For example, none is found on the La Grulla Plateau in the northern Jemez or Lobato Mesa in the northeastern Jemez, and it is sparse in the highlands of the southern Jemez. On the other hand, substantial flows of the Otowi Member are found in the lower ground between the La Grulla Plateau and Mesa Alta, west of the La Grulla Plateau, and in Cañon de San Diego. The bulk of the Otowi Member is found in the finger mesas of the Pajarito Plateau northeast to southeast of the Sierra de los Valles, and of the Jemez Plateau between the western topographic rim of the caldera and the Sierra Nacimiento Mountains. The latter are the most extensive and best preserved beds of the Otowi Member.

One area that was not reached by the Otowi Member is the village of White Rock. Although there are thin flows of the later Tshirege Member north, west, and south of the village, there are none underlying the village itself, and no flows of the Otowi Member anywhere in the area. The pre-Bandelier surface beneath the Pajarito Plateau can be reconstructed in considerable detail, thanks to numerous test wells drilled to evaluate contamination from the early days of Los Alamos National Laboratory, and we now know that there is a ridge of basalt of the Cerros del Rio extending north from roughly the location of my house, on the west side of White Rock, for about three kilometers (two miles). Prior to the Toledo Event, there was a low valley west of this ridge, dominated by a large cinder cone to the south, at the present location of a LANL technical site, and by the Sierra de los Valles to the west and north. The valley was underlain by beds of coarse sediments, nowhere exposed today but dubbed the Chaquehui Formation by LANL hydrologists, that could become an important aquifer for Los Alamos in the future. This valley corresponds with a region of slightly lower surface gravity that suggests it is a particularly deep part of the Rio Grande Rift.

In the Pajarito Plateau, the Otowi Member is typically found at the base of sheer mesas of the Bandelier Formation, capped with the more densely welded Tshirege Member. In the last chapter, we saw an excellent exposure in Pueblo Canyon on the main road to Los Alamos.

Bandelier Formation sitting on top of Cerros del Rio Formation on State Road 502 west of Totavi. 35 52.099N 106 11.913W

The road cut here exposes the base of the Otowi Member, which rests on three-million-year-old basalt of the Cerros del Rio Formation. There is a thin layer of paleosol between the two formations that has been interpreted as shallow lake sediments from Culebra Lake.
The base of the Otowi Member itself consists of a thick bed of air-fall pumice, the Guaje Pumice. The Toledo Event began when a large vent opened somewhere almost directly west of the present location of Los Alamos. A convective column of ash and pumice emerged from the vent and rose into the atmosphere. There must have been a stiff breeze blowing to the east, because the air-fall pumice beds of the Guaje Pumice are thickest east of Los Alamos. This area includes the road cut along State Road 502, where the Guaje Pumice is several meters thick. North, west, and south of the caldera, the Guaje Pumice is very thin or is missing altogether. Just how many vents were involved, and where they were located, remains controversial. The original picture was of a more or less central vent. Later, geologists began to lean towards the view that the initial vents were located at one or more points along the developing ring fracture. However, the isopach map for the Guaje Pumice, which shows the estimated thickness of the bed at various locations around the Jemez, strongly suggests the Guaje Pumice was erupted from a central vent or a few closely clustered vents. We cannot say for certain whether the vents were located near the center of what is now the caldera, or at the easternmost point of the ring fracture, or somewhere else along the same east-west line. Thick beds of Guaje Pumice were deposited north of Guaje Canyon. Here the Copar Pumice Mine operated for many years, and there are still large beds exposed in the area.

Guaje Pumice beds. Near 35.917N 106.225W

Another pumice pit is located just south of Thirtyone Mile Road.

Pumice pit in Guaje Pumice beds. Near 36.009N 106.203W

I took the photographs without much thought about land jurisdiction, but a pumice pit will certainly be a private holding, and this one is on tribal lands. I should not have been surprised that someone materialized to ask what I was photographing and why. The correct answer in such situations is the truthful one. Tourists out looking at geology are fairly nonthreatening, though one should always be respectful of privacy on non-public lands. The large pumice beds on the west rim of White Rock Canyon at Overlook Park are probably Guaje Pumice. Though locally substantial, these beds are not mapped even on the most recent geological map of the White Rock quadrangle. Here's a sample.

Near 35 49.407N 106 11.089W

Close examination shows that the pumice has abundant phenocrysts of quartz and feldspar, likely sanidine (alkali feldspar). There are no visible phenocrysts of biotite. This is characteristic of the Bandelier Formation pumice beds and rules out the much younger El Cajete Pumice, which has obvious biotite phenocrysts. The Guaje Pumice likely was deposited for great distances downwind. One deposit was located at Truchas, in the foothills of the Sangre de Cristo Mountains some 65 km (40 miles) downwind of the likely source vents.

Near 36 02.560N 105 49.562W

This localized deposit has been entirely quarried away, but the borrow pit remains. This is not the furthest identifiable airfall deposit of Guaje Pumice. In 1972, a team of geologists reported that an ash bed found at Mount Blanco, Texas, closely matched the Guaje Pumice in both chemical composition and radiometric age. This bed, which is 30 cm (1') thick, is nearly 500 km (300 mi) downwind of the source vents. Some of the ash was carried far downriver by the Rio Grande, producing ash beds in the upper Sierra Ladrones Formation near Socorro.
This Bosquecito Ash appears to match the Guaje Pumice in age and composition.

The eruption column gradually eroded the sides of the vent, widening it and increasing the volume of the eruption. Additional vents likely opened as the roof of the magma chamber lost support and a ring fracture began to form. The eruption column abruptly collapsed, producing the first of the massive pyroclastic flows of the Otowi Member. Unlike the Guaje Pumice, the pyroclastic flows extended in all directions, which is unsurprising given that pyroclastic flows move at great speed; the breeze from the west was essentially irrelevant at this point. The most impressive exposures of Otowi Member ignimbrites are on the west side of the caldera. There seems to have been no topographic barrier on the west side of the Toledo caldera, and hot pyroclastic flows moved freely across this area, producing great thicknesses of moderately to highly welded tuff. There is an impressive exposure of welded Otowi Member tuff north of Fenton Lake.

Otowi Member north of Fenton Lake. 35 52.905N 106 43.376W

The impressive cliffs are mostly Otowi Member, with a layer of Tshirege Member at the top. The Tshirege Member caps most of the mesas of the Jemez Plateau, but is generally less thick than the underlying Otowi Member. This is likely because the caldera rim left by the Toledo event formed a modest topographic barrier to the younger flows. Similar impressive exposures are found along State Road 126, in Bear Canyon.

Otowi Member in Bear Canyon. 35 54.885N 106 42.841W

The prominent cliff at left is Otowi Member, with the Tshirege Member forming the sloping ground above the cliffs. Another impressive exposure is found in Calaveras Canyon.

Otowi Member in Calaveras Canyon. 35 56.249N 106 42.519W

The entire cliff here is Otowi Member, with no cap of Tshirege Member. Note what looks like a very thick sequence of surge beds making up the lower half of the exposure. On the east side of the Jemez, the Otowi Member is much less prominent. It is often buried under colluvium at the base of mesas of Tshirege Member. As State Road 502 approaches Los Alamos, it climbs the north side of Los Alamos Mesa, where there is a fine exposure of the contact between the Otowi and Tshirege Members. The road here is narrow, with no really safe places to pull over, so the visitor wishing to examine this exposure is well advised to park on a pullout on a side road just before SR 502 begins the climb, then walk to the outcropping. Be very careful of traffic on this busy and narrow road.

Otowi Member on State Road 502 ascending Los Alamos Mesa. 35 52.233N 106 13.390W

The walking stick provides scale. The ignimbrite here is pinkish-white ash with numerous small mafic clasts (probably country rock entrained in the flow) and numerous pumice clasts. The pumice clasts are completely undeformed, showing that this is an unwelded tuff. Here's a sample of Otowi Member up close. This comes from an outcropping along Thirtyone Mile Road.

Sample of Otowi Member from Thirtyone Mile Road

The sample contains abundant phenocrysts of quartz and sanidine (alkali feldspar). The Otowi Tuff contains up to 30% of such phenocrysts. Another exposure of the Otowi Member and its contact with the Tshirege Member is found in Ancho Canyon along State Road 4, between White Rock and Bandelier National Monument.

Road cut in Bandelier Tuff. 35 47.322N 106 15.405W

There is little evidence of the Cerro Toledo Interval here, but there is about a foot of Tsankawi Pumice at the base of the Tshirege Member.
On the east side of the Jemez, the Otowi Member is often difficult to distinguish from colluvium at a distance, since it is usually present only under cliffs of more durable Tshirege Member and it tends to weather to a gentle slope. As one hikes down the trail on the south side of Pueblo Canyon onto the talus slope, one sees that it is indeed deeply mantled with soil.

Colluvium on south wall of Pueblo Canyon. 35 52.890N 106 15.945W

But one of the attractions of the canyon is the hoodoos and tent rocks that are typically found at the base of the mesas. Examination of the eroded bank of this slope shows that it's clearly solid tuff, with few signs of bedding or any other indications that this is reworked volcanic sediment rather than an original pyroclastic flow. To be sure, there are no signs of welding, but the Otowi Member of the Bandelier Tuff is generally not welded in the Los Alamos area. Tent rocks seem to be a common feature of the Otowi Member. Other prominent outcroppings showing tent structure are found in Guaje Canyon below its confluence with Rendija Canyon

Hoodoos in Otowi Member, Bandelier Tuff. 35.904N 106.210W

and in Valle de los Indios on the southwest caldera rim. The entire thickness of the tuff beds here, including the skyline, is Otowi Member, with no overlying Tshirege Member. The flows at top appear to be more welded than those further down. Some of the furthest remaining deposits of the Otowi Member are found to the southeast. These include deposits cut by the highway near the Cochiti golf course.

Otowi Member near Cochiti Golf Course. 35 39.727N 106 21.117W

As the magma chamber emptied, a ring fracture formed and the roof of the magma chamber began to collapse. This led to deposition of ignimbrite beds rich in lithic clasts close to the caldera. In the exposures closest to the ring fracture, there are beds of lag breccia associated with caldera collapse. Some of the best exposures showing these lag breccias are in upper Cochiti Canyon.

Lithic beds of Otowi Member. Looking northwest from 35 45.999N 106 25.150W

The entire canyon wall in the center of the photograph is Otowi Member. The darker beds towards the bottom are the lithic beds. The rugged cliffs at top are more densely welded post-caldera flows. These later flows came from deep in the magma chamber, and they were both hotter and more mafic. We will see excellent examples of this later on, when we consider the Valles event.

The ring fractures from the Toledo caldera and, later, the Valles caldera are now deeply buried under younger sediments and flows. However, there are other locations in the Southwest where erosion has exposed the ring fractures of older calderas, and this allows us to study their structure. One such ring fracture zone is exposed in Red River Canyon west of Questa, New Mexico.

Rhyolite dike swarm marking ring fracture zone of Questa caldera. 36 40.866N 105 30.985W

The jagged rocks along the cliff are rhyolite dikes marking the ring fracture zone of the Questa caldera on its south side. With the collapse of the magma chamber, the eruption began to come to a close. The steep rim of the newly formed caldera was unstable and quickly collapsed into the caldera. Erosion further wore down the rim and deposited new beds of sediments in the caldera. Quite likely one or more crater lakes formed. However, the traces of all these events were erased by the subsequent Valles event.

The Toledo event coincided roughly with the arrival of the first mammoths in New Mexico.
These large relatives of modern elephants mostly disappeared at about the time humans arrived in North America, which many paleontologists do not see as a coincidence. The last mammoths are thought to have survived on Wrangel Island in the Russian Arctic until about four thousand years ago.

The Toledo event produced the first giant caldera in the Jemez region. However, the subsequent Valles event obscured much of the geological record of the Toledo event and the subsequent Toledo Interval, and geologists must search for clues to the size and location of this caldera. One reconstruction of the likely location of the Toledo ring fracture is shown below.

Reconstruction of Toledo ring fracture (dashed red) with Toledo Embayment (dashed yellow) and Cerro Toledo domes (circled in red). The dashed red line shows the likely location of the main Toledo Event ring fracture, while the red circles identify ring fracture domes.

The formation of a caldera in a supervolcanic eruption leaves a considerable amount of high-silica magma still underground, with a natural route to the surface through the ring fracture along which the caldera floor collapsed. The remaining magma is likely to be lower in gas content than before the caldera eruption, and it tends to erupt effusively rather than explosively through the ring fracture. Such eruptions tend to localize at a single point along the fracture where the path to the surface is the most clear. The result is that a rhyolite dome forms at a single point along the ring fracture. Once the energy of the eruption is spent, the vent is plugged with solidified rhyolite and overlain by the mass of the dome itself. This effectively seals the ring fracture in the immediate vicinity of the dome. If magma is still being injected into the old magma chamber, which seems to be a frequent occurrence with supervolcanoes, then the next eruption must come through a different part of the ring fracture. The result is that individual dome complexes will line the ring fracture almost like beads on a wire. We will see superb examples of this in the aftermath of the Valles event.

The ring domes produced by the Toledo event have been largely destroyed by the subsequent Valles event. However, some remnants of domes remain, and it is largely from these remnants that the location of the Toledo ring fracture has been inferred. In the reconstruction above, the red circles show rhyolite domes or remnants of domes dating between the Toledo and Valles events, which have been assigned to the Cerro Toledo Formation. Those close to the ring fracture are interpreted as ring fracture domes. These are, starting from upper left: a small unnamed dome dating back 1.59 million years; the small Warm Springs dome, dating back 1.26 million years; the Cerro Trasquilar dome at the east margin of the Toledo Embayment, dating back 1.36 million years; the West and East Los Posos domes, dating back 1.54 and 1.45 million years; and, to the south, the Rabbit Mountain dome (1.43 million years) and the Paso del Norte dome (1.47 million years). Rabbit Mountain is likely the southern remnant of a much larger dome, most of which foundered into the subsequent Valles caldera along with other Toledo ring fracture domes. The identification of the small, unnamed dome furthest to the northwest as a Toledo ring fracture dome is questionable. Its age is nearly indistinguishable from that of the Toledo event itself, and the most recent geologic map of the area maps it as Otowi Member.
The dome at Warm Springs is nearly the same age as the Valles Event, but its location well outside the Valles ring fracture supports its identification as a remnant of a Toledo ring fracture dome. The domes in and near the Toledo Embayment and the Rabbit Mountain and Paso del Norte domes seem beyond question, showing that the Toledo caldera extended at least across the eastern half of the present Valles caldera. The Cerro Trasquilar dome is accessible by passenger vehicle with a Valles Preserve back country permit. Cerro Trasquilar. From near 35 58.310N 106 31.102W This small dome is parked right in the middle of the moat, outside the ring of larger domes that mark the Valles ring fracture. Its age from radioisotope dating (1.36 million years) reveals that Cerro Trasquilar is a remnant of one of the domes that likely formed over the ring fracture of the earlier Toledo caldera. There is some disagreement over the naming here; geologists have generally referred to this small dome as Cerro Trasquilar and the much larger, younger dome complex to the south as Cerro Santa Rosa, but the Forest Service topographic map identifies the entire complex as Cerro Transquilar and the dome immediately to the south as Cerro Santa Rosa. On the east side of Cerro Trasquilar, the visitor reaches the end of Pipeline Road, or at least of that part of Pipeline Road that a back country permit gives one permission to drive. The road continues on through the Sierra de los Valles to Los Alamos, a drive I'd very much like to take someday. Here is a Valle Toledo panorama. 35 57.461N 106 28.890W The panorama starts at the southwest, looking towards the Cerro Santa Rosa complex. The dome known by area geologists as Santa Rosa II is prominent in the second frame, while Cerro Trasquilar is on the boundary of the second and third frames, with the north caldera wall beyond. The gentle slope in the foreground is an old Turkey Ridge forms the skyline across the fourth and fifth frames. This dome complex is about 1.38 million years old and occupies the mouth of the Toledo Embayment, an odd feature of the caldera whose interpretation geologists have not been able to agree on. It looks like a pocket in the northeastern topographic rim of the caldera that is full of rhyolite domes, all between 1.33 and 1.45 million years old, but it is not clear what would cause this. The two leading theories are that it is a structural feature, formed by deep faulting that connects the Redondo Graben to the Embudo Fault Zone north of Espanola; or that it is an extension of the Toledo caldera to the northeast. The two are not mutually exclusive. The notion that this is the remnant of the caldera from which the Otowi Member was erupted -- the Toledo caldera -- is no longer accepted, but a few geologists think it may be the caldera from which the La Cueva Member of the Bandelier Tuff was erupted. The dome on the left side of the sixth frame is the nearest dome of Cerros de los Posos, age 1.54 million years, which is thought to be another remnant of the domes of the Toledo ring fracture. The last few frames show the toe of a landslide in the foreground and Cerros del Medio, from which the landslide originated, in the background. The big log in the middle of the road marks the limit of where a back country permit allows the visitor to drive. The Rabbit Mountain and Paso del Norte dome complex is one of the more accessible of the Cerro Toledo dome complexes.
Rabbit Mountain forms much of the southeast rim of the caldera and is prominent on the skyline as seen from the Valles Preserve. This is looking southeast across the Valle Grande towards the caldera rim. The Paso del Norte dome is accessible from Forest Road 36 south of State Road 4, from which one gets an excellent view of the dome. Del Norte dome. Looking southwest from 35 49.603N 106 28.551W The dome can be climbed by a short but strenuous hike from a pullout here. Be advised that there is no trail and the dome is steep and heavily overgrown with thorny shrubs. However, the top of the dome shows excellent exposures of Del Norte rhyolite and a view (through the trees) of nearby Rabbit Mountain. Paso del Norte dome summit. Looking northeast from 35 49.236N 106 28.735W Paso del Norte dome and the southern flank of Rabbit Mountain are on National Forest land just south of the preserve, so I took a sample. No individual crystals are visible even under the loupe; the rock is completely aphanitic. The Del Norte dome is well outside the actual south caldera rim. Like Rabbit Mountain, it was part of a much larger dome to the northwest that foundered into the Valles caldera, leaving a small remnant overlying the precaldera rocks of the south rim. Rabbit Mountain is accessible via the Coyote Call Trail of the Valles Preserve. The trailhead is a small pullout from State Road 4. Visitors should be aware that the parking is very limited here, though the trail requires no use fee. The hike itself is neither particularly difficult nor lengthy except where downed trees force a detour. The latter seems to be a common problem throughout the Valles Preserve, a consequence of severe forest fires in the area rather than any lack of diligence on the part of the Preserve. Surprisingly, the rhyolite on the north flank of Rabbit Mountain is quite different in character from the rhyolite of the Paso del Norte dome. It is generally darker, with no hint of the purple discoloration, and one can understand why earlier geologists mapped the Paso del Norte dome as Bearhead Rhyolite until radioisotope dating showed it to be much younger (1.47 million years), only slightly older than the Rabbit Mountain rhyolite. I am struck by the peculiar surface texture of some of the rhyolite clasts: Rabbit Mountain rhyolite. 35 50.578N 106 27.648W The peculiar bark-like texture might be a weathering surface, but it seems more likely that this is a cooling surface of a flow. Such textures are occasionally seen in basalt flows and are described as spiny pahoehoe. Here's a photograph from a very young basalt flow at Craters of the Moon National Monument in Idaho: In both cases, these textures were probably produced by mild stretching of the partially solidified surface. Another feature of the mountain is occasional outcrops that show a distinctive tan color and coarser texture, almost resembling a sandstone. Examination under the loupe shows that this is crystalline rock, however. There is no sampling here on the Preserve, and a large-scale photograph does not do the outcrop justice. A distinctive feature of Rabbit Mountain is the presence of significant quantities of obsidian in the rhyolite flows. Rabbit Mountain rhyolite. 35 50.807N 106 27.190W Obsidian is volcanic glass. It forms from magma that is cooled so rapidly that the atoms freeze into a tangled mess before they can arrange themselves into a regular crystal structure.
Volcanic glass is relatively uncommon in subaerial mafic rocks, which because of their lower viscosity require very rapid chilling to form glass, but it is quite common in felsic rocks. In fact, some geologists believe that almost all extrusive felsic rocks start off as volcanic glass, which slowly devitrifies over geologic time as the atoms gradually work themselves into a regular crystalline arrangement. If this is true, then it is possible that the entire boulder here was once obsidian, which has now devitrified to the point where only a few thin layers of obsidian remain. Weathering has released a considerable number of small obsidian nodules from the rhyolite at some locations along the trail. Obsidian nodules in the trail. Click to enlarge. 35 50.765N 106 27.135W It is not uncommon for obsidian to weather out of formations as small nodules, and these are sometimes called "Apache tears." It surprises me that obsidian would weather more slowly than devitrified rock, but it seems to be the case. One piece of evidence for widespread devitrification of obsidian flows is the presence of spherulites. Spherulites near northeast summit of Rabbit Mountain. Click to enlarge. 35 50.771N 106 27.180W Spherulites are characteristic of volcanic glasses, but one can see that a great many of the spherulites here have devitrified. Rabbit Mountain was large enough to experience several episodes of dome collapse. This produces a form of pyroclastic flow described as a glowing avalanche. The steep face of the dome becomes unstable as it is pushed out from within by fresh magma entering the dome, and the face collapses in a landslide. The magma beneath, depressurized by the removal of the overlying rock, disintegrates into hot gas and volcanic ash. This mingles with the larger clasts of the original landslide to form the glowing avalanche, which can travel for miles. One such glowing avalanche deposit is found along the Dome Road south of Graduation Flats. Pullout atop Rabbit Mountain debris flow. 35 47.537N 106 25.174W The pullout and the area around it are underlain by glowing avalanche deposits from Rabbit Mountain, over five kilometers (three miles) to the northwest. Such avalanches occurred at least three times from the southeast flank of the dome, forming the deposits on which my car is parked. (The dome in the background is an older dacite dome, which we visited several chapters back.) The ground surface shows numerous clasts of rhyolite and obsidian. Rabbit Mountain debris flow. Pencil for scale. 35 47.537N 106 25.174W Here are samples of a couple of larger obsidian fragments. Obsidian of Rabbit Mountain debris flow. Pencil for scale. 35 47.537N 106 25.174W These chunks are about three inches long. The fragment on the right shows flow banding. Both are weapons grade, suitable for manufacturing obsidian weapon tips. There is abundant archaeological evidence of widespread trade of Jemez obsidian throughout the Southwest. Deposits from this avalanche are found in a canyon bottom northeast of St. Peter's Dome, a distance of 13 km (8 miles) from Rabbit Mountain. A similar avalanche deposit is found south of Rabbit Mountain. This flow is thought to have come off the Del Norte dome. Del Norte debris flow. 35 48.5785N 106 27.897W Like Cerro Trasquilar, the Warm Springs dome is accessible by passenger vehicle with a Valles Preserve back country permit. There actually are warm springs here, with an old bath house dating from the time when the Valles Caldera was owned privately as a single large ranch.
The bath house is decorated with cow skulls; I assume as a warning to others. Warm Springs. 35 58.328N 106 33.631W The small hill west of the bathhouse is actually a rhyolite dome, albeit a small one. Its radioisotope age, 1.26 million years, is just a hair greater than that of the Tshirege Member of the Bandelier Tuff. It is interpreted as a dome on the Toledo ring fracture that formed just prior to the Valles event. In the background is Cerro Seco, a Valles ring fracture dome. I'm going to get just a bit ahead of my story here, and take a moment to talk about the relatively young rock on the north side of the Warm Springs dome. The processes that formed these rocks were doubtless at work during the Cerro Toledo interval as well, but most of the traces have since been buried or destroyed. The Warm Springs dome is partially buried in phreatomagmatic deposits from Cerro Seco. These are beds of small rock fragments produced when lavas from Cerro Seco came into contact with the lake that filled this part of the caldera 0.78 million years ago, with explosive results. The phreatomagmatic beds are well exposed on the north part of the dome. Hydromagmatic beds on north side of Warm Springs dome. 35 58.340N 106 33.702W Because this is on the Valles Preserve, I could take no samples. However, the beds here resemble the maar beds of the Cerros del Rio that we saw in the last chapter. The chief difference is that surface water rather than groundwater was involved here, and the lava was high-silica rhyolitic magma rather than low-silica basalt. Near the top of the dome, the phreatomagmatic beds give way to large broken pieces of rhyolite, presumably from the Warm Springs dome itself. Warm Springs Dome. 35 58.298N 106 33.728W Dome eruptions tend to produce pumice fall beds over a significant area downwind, and the Cerro Toledo domes laid down significant pumice beds on top of the Otowi Member of the Bandelier Tuff. One such series of beds is visible in the road cut of State Road 502 as it climbs the north side of Los Alamos Mesa, which we visited earlier. Let's take a look now at the Cerro Toledo beds. Cerro Toledo Interval on State Road 502 ascending Los Alamos Mesa. 35 52.233N 106 13.390W At bottom is the Otowi Member of the Bandelier Tuff. Above are a pair of pumice beds of the Cerro Toledo Interval. Above these is the basal Tsankawi Pumice and the lowest ignimbrite bed of the Tshirege Member. The Cerro Toledo Interval becomes thicker closer to the Los Alamos town site. There is a prominent exposure in Pueblo Canyon north of Los Alamos Airport. Bandelier Tuff in Pueblo Canyon. 35 53.182N 106 16.262W The top three layers of the mesa (North Mesa) are units of the Tshirege Member, Bandelier Formation. The topmost is partially obscured by houses and trees on the canyon rim. The second layer plunges into the canyon, and the third layer reaches to the sloping ground at the canyon bottom in most places. These are three flow units, erupted far enough apart in time that each had cooled slightly before the next was erupted on top of it. At the center of this photograph, you see a narrow banded layer with just a little of an eroded formation showing beneath it. (Click on the photograph to see an enlarged version.) The uppermost part of this banded layer is the Tsankawi Pumice, while the lower portion is the Cerro Toledo Interval. The boundary between the two is difficult to pin down at this distance. So what we're seeing is tall, resistant mesas of Tshirege Member sitting on top of low, gently eroded ridges of Otowi Member.
This is an excellent example of inverted topography. The Tshirege Member settled preferentially in river channels and other low points in the existing erosional surface of the Otowi Member. Subsequent erosion preferentially removed the Otowi Member, leaving the more durable Tshirege Member as finger mesas coinciding with the ancient river channels. Thus, the topography has been inverted: What were the high points of the old surface are now the canyons, and what were the low points are now the tall finger mesas. Going back to the original photo, of the north wall of Pueblo Canyon, one can easily imagine the gentler slope showing the outline of the Otowi topography, with a paleochannel near the center of the photo which became partially filled with Cerro Toledo tephra. The sediments might well have been unusually wet. The deposition of Tshirege Member, Bandelier Tuff, on top would have vaporized the water, accounting for what look like vapor phase pipes in the Tshirege Member above the paleochannel. Vapor phase refers to all the fluids that percolate through the pores of a rock bed, particularly when the fluids are very hot. Deposition of minerals from the vapor phase can indurate the rock, and if the vapor phase is mostly moving upwards along narrow channels, the result is cylinders of particularly hard rock. We'll see an even more striking example later in this chapter. Here's another shot of the base of the Tshirege Member, where the Guaje Pumice and Cerro Toledo Formation have been deeply eroded. Further up Pueblo Canyon, the Cerro Toledo Interval becomes still more prominent. Cerro Toledo interval in Pueblo Canyon. Looking north from 36 01.121N 106 13.867W The Cerro Toledo Interval fills the notch in the talus slope. The Cerro Toledo Interval is particularly thick in the area around Rendija Canyon and to its north. The area is underlain by hills rich with Cerro Toledo Interval pumice. Soil rich in Cerro Toledo interval pumice. 35 54.766N 106 17.006W There is a fine exposure of the Cerro Toledo Interval in the canyon wall at the end of an unnamed spur of Cabra Canyon. Cerro Toledo interval exposure in Cabra Canyon. 35 54.766N 106 17.006W The upper half of the mesa is Tshirege Member, Bandelier Tuff, with a base of light-colored Tsankawi Pumice. Below this is a mixture of pumice, tuff, fluvial sediment, and paleosol beds of the Cerro Toledo Interval. These appear to have accumulated in a paleovalley in the underlying Puye Formation; there is no Otowi Member of the Bandelier Tuff mapped anywhere in this area. The next photograph shows the east face of this exposure. Cerro Toledo interval exposure in Cabra Canyon. Near 35 55.157N 106 17.563W There is a great variety of deposits here: a pumice-rich bed at bottom, followed by thin alternating ash-rich and pumice-rich beds likely reworked by streams, then a thicker pumice bed that is obviously eroded along its contact with a thick paleosol, then another pumice-rich bed at the very top of the photograph above the thick paleosol. Here's a closer view of the base of the paleosol. Cerro Toledo interval exposure in Cabra Canyon. Near 35 55.157N 106 17.563W There is a mixture of pumice and clasts of Tschicoma Formation dacite at the base of the paleosol, which strongly resembles the alluvium in modern drainage channels in this area. This transitions to a thick clay-rich bed with fewer clasts. On the south side of the exposure, there is a cave eroded deeply into the paleosol layer. Cerro Toledo interval exposure in Cabra Canyon.
Near 35 55.157N 106 17.563W The paleosol layer is particularly susceptible to erosion, being very poorly cemented. It is likely that the caves seen in the cliffs in Pueblo Canyon in the photograph I showed earlier are in this paleosol layer. This area is littered with fragments of rock from the cliff face. These include large clasts of paleosol. Cerro Toledo interval clay in Cabra Canyon. Near 35 55.157N 106 17.563W Cerro Toledo interval pumice in Cabra Canyon. Near 35 55.157N 106 17.563W Note the reddish color on the freshly fractured surface. This has the appearance of hematite cement rather than clay, and it may represent mafic minerals in the pumice that have been oxidized. One can also see the contact between the Otowi Member and the Tshirege Member in the southwest caldera wall south of La Cueva. We saw exposures of the La Cueva Member in this area earlier. Contact. Looking south from 35 52.029N 106 39.512W The tan band is probably Cerro Toledo interval sediments, while the bed of pumice is the Tsankawi Pumice that marks the lowest part of the Tshirege Member. Finally, some purely sedimentary beds are assigned to the Cerro Toledo Interval. These include this gravel bed on 30 Mile Road, northwest of Espanola, that was deposited on top of Otowi Member. Cerro Toledo interval gravel bed. 36 01.121N 106 13.867W The satellite photograph near the start of this section shows a number of Cerro Toledo Formation domes northeast of the caldera that almost fill a large embayment in the much older rocks of the Tschicoma Highlands and La Grulla Plateau. The largest of these is Cerro Toledo itself. The earliest interpretation of the Toledo Embayment is that it was the caldera from which the Otowi Member of the Bandelier Formation was erupted, and this led to the event being named the Toledo Event. However, as the previous reconstruction shows, geologists now believe the main Toledo caldera was located in nearly the same location as the subsequent Valles caldera. Based on stratigraphy, the Toledo Embayment must have formed between 2.3 and 1.5 million years ago. One interpretation is that it formed slightly after the Toledo Event during eruption of Cerro Toledo Formation domes. Another interpretation is that it formed as part of the Toledo Event as a kind of offshoot of the main magma chamber along Rio Grande Rift faults. A few geologists have revived a version of the original interpretation, suggesting that the Toledo Embayment was the source area for the La Cueva Member of the Bandelier Tuff. The latter interpretation gains support from gravity modeling of the caldera. Gravity modeling begins with taking very precise measurements of the gravitational field at as many points as practical within the caldera. The structure of the caldera is then modeled on a computer, assuming one density for the basement rock, a somewhat lower density for old sedimentary beds, and a still lower density for Bandelier Tuff and Valles rhyolite, which fill the interior of the caldera. The deep structure of the caldera is inferred by adjusting the depth of the various rock layers to match the gravity measurements. In 1996, geologist D.A.G. Nowell carried out this procedure using gravity measurements published by another geologist, R.L. Segar, in 1974. The most interesting part of the model is the depth of the basement rocks relative to sea level, which presumably formed a more or less level surface before the caldera formed.
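The fitting step is easy to caricature with a toy calculation. The sketch below is mine, not Nowell's: it assumes a simple flat-slab geometry and an invented density contrast of 400 kg/m3 between caldera fill and basement, and the function names are my own. It uses the Bouguer slab formula, delta_g = 2*pi*G*delta_rho*h, to show how a measured gravity deficit translates into a thickness of low-density fill:

import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
MGAL = 1e-5            # one milligal, expressed in m/s^2

def slab_anomaly_mgal(delta_rho, thickness_m):
    """Gravity anomaly (mGal) over a broad, flat slab of anomalous density."""
    return 2.0 * math.pi * G * delta_rho * thickness_m / MGAL

def fill_thickness_m(anomaly_mgal, delta_rho):
    """Invert the slab formula: fill thickness that matches a measured anomaly."""
    return anomaly_mgal * MGAL / (2.0 * math.pi * G * delta_rho)

# If tuff and rhyolite fill is 400 kg/m^3 lighter than basement and 2 km thick:
print(slab_anomaly_mgal(-400.0, 2000.0))   # about -33.5 mGal
# A measured -50 mGal deficit would then imply roughly 3 km of fill:
print(fill_thickness_m(-50.0, -400.0))     # about 2980 m

A real model replaces the slab with a full three-dimensional structure and adjusts the layer depths everywhere at once until the computed field matches the measurements, but the principle is the same.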
Relief map of the Jemez with inferred depth of basement relative to sea level shown in color contours. There are two striking aspects of this plot. First, the basement under the Valles caldera did not sink evenly; the depth of basement is much greater on the east side. I'll have more to say about this later in this chapter. The other striking feature is the indication that the Toledo Embayment coincides with a low spot in the underlying basement rocks. The interpretation of the Toledo Embayment as a small caldera from which the La Cueva Member erupted is intriguing but has not attracted much support from geologists. The most popular interpretation is still that it is an offshoot of the Toledo caldera that formed at the same time. By 1.22 million years ago, what is now the Pajarito Plateau was a surface of relatively gentle relief. The White Rock basalt ridge still stood out, but the northern part of the valley to its west was filled with Otowi Member ignimbrites and the southern part was quite shallow. Stream valleys were typically 15 to 30 meters (50' to 100') deep, compared with over 300m (1000') for canyons today. Because the Otowi Member is unwelded on this side of the Jemez, there were no towering cliffs on the rims of canyons. Geologists have identified four such valleys in the Otowi surface. The Jemez volcanic field as we know it today was largely shaped by the Valles Event 1.25 million years ago. This massive eruption produced the Tshirege Member of the Bandelier Formation, which is the iconic geological formation of the Los Alamos area. Like the Toledo Event, the Valles Event emptied a magma chamber in which large quantities of gas-rich, high-silica magma had accumulated. This magma chamber likely nearly coincided with the Toledo magma chamber. As with the Toledo event, the Valles eruption was preceded by increased emissions of sulfur oxides, inflation of the volcanic field as magma rose beneath it, and, immediately before the eruption, seismic signals such as harmonic tremor. But the earliest precursor may have been the eruption of the Warm Springs dome, which we visited earlier. Eruption of this dome took place along the old Toledo ring fracture only a few thousand years before the Valles event. The Valles Event resembled the Toledo Event, but with some significant differences. Not least of these is that much more of the Valles Event is preserved in the geological record. Much like the Toledo Event, the Valles Event opened with the eruption of a convective column through one or more vents. However, while the Guaje Pumice is strongly concentrated directly east of the caldera center, the Tsankawi Pumice erupted early in the Valles event is spread more evenly around the caldera. It is thickest to the northwest, and ash of the Tsankawi Pumice has been found as far northwest as central Utah. This suggests a mild breeze blowing from the southeast, but tells us little about the source vent locations. These are thought to have been near the center of the caldera. A moderately thick bed of Tsankawi Pumice is exposed along State Road 126 on the way to Fenton Lake. At bottom and at left is the Ojo Caliente Member of the Tesuque Formation. The darker patch of rock at top left is Otowi Member, Bandelier Tuff, as are most of the boulders in the middle layer. Atop this is a bed of Tsankawi Pumice, and above that are surge beds of the Tshirege Member. A thin bed of Tsankawi Pumice can be found at the base of the Tshirege Member southwest of the village of White Rock.
Tshirege Member on paleosol above Cerros del Rio basalt. 35 48.649N 106 13.640W The Tsankawi Pumice forms a very thin bed on the right side of the photograph and pinches out on the left. This location is almost directly southeast of the caldera, the direction from which the wind was presumably blowing during most of the eruption. We've already seen similar thin beds in Ancho Canyon and on the north side of Los Alamos Mesa. One of the better exposures of the Tsankawi Pumice west of the caldera is in Rendija Canyon, northwest of the sportsmen's club. Tshirege Member in Rendija Canyon. 35 54.655N 106 16.990W The pink bed at bottom is Cerro Toledo Interval sediments. Above is the Tsankawi Pumice, which consists of two beds separated by a very thin ash layer (mostly eroded at this location to form a pronounced notch.) Above are surge beds, which we'll discuss presently. These beds are subtly present in the exposure along Los Alamos Mesa as well, where the upper bed is quite thin and the ash layer separating it from the lower bed is no more than a slight notch. The Tsankawi Pumice can be distinguished from other pumices of the Jemez area by its grayish color and the presence of needles of hornblende. After the initial stage of the eruption, in which the convective column deposited air-fall pumice and ash over a wide area, the eruption column collapsed to produce pyroclastic flows. Column collapse occurs when the eruptive column no longer entrains enough air to remain buoyant, but exactly what triggers this transition is uncertain. Perhaps the volume of the eruption increases to the point where enough air can no longer be entrained. When the column collapses, a dense mixture of hot gases and ash flows out across the landscape as pyroclastic flows. These are thick with ash and move steadily across the ground with little turbulence (laminar flow). They are often preceded by pyroclastic surges, which contain less ash and are much more turbulent, and these can deposit thin beds of fine ash that have such features as crossbedding. The previous photograph shows surge beds immediately above the Tsankawi Pumice, which is a frequent but not universal occurrence in the Tshirege Member. A road cut on Dome Road shortly before it descends the Pajarito Escarpment shows the contact between the Tshirege Member and the underlying volcaniclastic beds of the Paliza Canyon Formation. Contact in road cut. 35 42.800N 106 23.406W There are surge beds in the Tshirege Member along the contact, and also near ground level to the left. The Tshirege Member is here a particularly vivid red, which may be an indication of hydrothermal alteration. Tshirege Member. Just north of 35 42.800N 106 23.406W There are numerous bore holes here and in an outcrop on the other side of the road, suggesting this rock has attracted attention from geologists. I don't have any dates marked on my map for this area, though a nearby andesite outcrop is dated to 9.33 million years. And next to the bore holes are some hash marks scratched crudely in the rock. Either this is some kind of Secret Geologist Language, or some non-geologist is less than overawed by the activities of scientists. Pyroclastic surges were followed by pyroclastic flows. These produced the bulk of the Tshirege Member of the Bandelier Tuff. There are spectacular exposures of the Tshirege Member throughout the Jemez area, but perhaps none are better known than those visible from the Clinton P. Anderson Scenic Overlook on the south face of Pueblo Canyon. Clinton P. Anderson Scenic Overlook.
35 52.391N 106 14.006W The finger mesas visible here are all Tshirege Member resting on a base of Otowi Member, with the Otowi Member partially mantled with colluvium. Geologists have identified at least five separate units in the Tshirege Member, and some of these are clearly visible here. We'll discuss these units presently. Before doing so, we'll take a tour of the Bandelier Tuff around the Jemez. The finger mesas east of Los Alamos continue west to the foothills of the Sierra de los Valles and underlie the entire Los Alamos town site. Geology around Los Alamos largely consists of climbing down into canyons to see what is below the Bandelier Tuff or climbing hills and mountains to see what is sticking up above the Bandelier Tuff. The entire terrain in the next photograph, except for the distant Sangre de Cristo Mountains, is Tshirege Member. At this location, close to the Sierra de los Valles forming the caldera's east rim, the Tshirege Member tends to be densely welded and darker in color than further from the caldera. Los Alamos Canyon and Omega Bridge. 35 52.689N 106 20.615W Tshirege Member near Camp May Road. 35 52.689N 106 20.615W Looking up Los Alamos Canyon from Camp May Road, we see that the Tshirege Member extends quite high up the Sierra de los Valles. Tshirege Member in upper Los Alamos Canyon. 35 52.689N 106 20.615W The canyon rim in the distance, above and left of center and left of the small peak, is the upper surface of the Tshirege Member. East of where this photograph was taken (behind the photographer), the Tshirege Member is thrown down a substantial distance by the Pajarito Fault Zone. In the foreground, across the canyon, Tshirege Member sits directly on the Pajarito Mountain Member of the Tschicoma Formation, with no Otowi Member mapped between the two. Apparently any Otowi Member that was deposited in this area had eroded away in the 400,000 years between the two events. Here's the view from the opposite (north) side of Los Alamos Canyon at a different time of day. Los Alamos Reservoir is just visible at the canyon bottom in the distance. Tshirege Member in upper Los Alamos Canyon. Near 35 52.801N 106 20.292W The section of Los Alamos Canyon between Los Alamos Reservoir and Omega Bridge is crossed by numerous strands of the Pajarito Fault Zone. Looking across the canyon to the north, one sees here two likely strands of the fault, showing as gullies with a slight displacement in the Tshirege beds. Faults in Tshirege Member in Los Alamos Canyon. 35 52.689N 106 20.615W The fault traces are visible as two shallow gullies in the side of the canyon. The displacement here is not particularly great. The finger mesas of the Pajarito Plateau also extend southeast to the White Rock area and Bandelier National Monument. In fact, the Tshirege Member takes its name from the Tshirege ruins, an archaeological site of the Ancestral Pueblo People located just northwest of the village. Prominent cliffs of Tshirege Member form the entire skyline north of White Rock. Tshirege Member north of White Rock. Looking north from 35 49.452N 106 11.072W The cliffs here are the furthest eastern extent of the Tshirege Member in this area. Here the mesas are thinner than to the north, since they are underlain by a high plateau of the Cerros del Rio. Throughout the Pajarito Plateau, the Tshirege Member was erupted onto a surface of relatively gentle relief, underlain in most locations by low hills of the Otowi Member.
The furthest southeastern extent of the Bandelier Tuff is an outcropping filling an ancient meander of the Rio Grande that cuts into the Cerros del Rio on the east side of White Rock Canyon. This outcropping may be responsible for the name of the canyon. Tongue of Bandelier Tuff perched in paleocanyon on the east side of White Rock Canyon. 35 47.214N 106 11.356W Here's the same outcropping seen from near the mouth of Potrillo Canyon, almost directly to its west. We saw this in the last chapter. White Rock Canyon panorama showing tongue of Bandelier Tuff perched in paleocanyon. Looking east from 35.789N 106.210W There are several large banks or hills of coarse gravel a short distance south of the White Rock Canyon Rim trail head. The gravel is well-sorted rounded tan clasts, likely of dacite. Though not mapped on any geological map, they have been identified as lake bars in New Mexico Geological Society field guides. There are scattered beds of similar gravel for at least a mile further down the canyon rim, and there is also a considerable quantity of this gravel on a landslide block east of this point, halfway down the canyon rim. The gravel beds appear to overlie remnants of the Tsankawi Pumice and thus must be younger than 1.25 million years in age. They were likely deposited when the Rio Grande was dammed by the Tshirege Member, creating another Culebra Lake. The dam produced by the Tshirege Member is estimated to have been 100m (330') higher than the canyon rim where these cobbles are located, and the resulting lake extended perhaps 70 km (45 miles) to the north. The Rio Grande was forced 2 km (1.25 miles) east of its former course, and had to cut through 200m (650') of basalt to reach its former level and finally drain the lake. The Bandelier Tuff rests on basalt of the Cerros del Rio throughout the White Rock and Bandelier area. For example, there is the outcrop southwest of White Rock we saw earlier. Tshirege Member resting directly on Cerros del Rio basalt just west of White Rock. 35.811N 106.227W This shows Tshirege Member, Bandelier Tuff, sitting on top of Cerros del Rio basalt. This appears to be a high point of the Cerros del Rio surface. On the other side of the road, the basalt is exposed in the canyon wall descending into Potrillo Canyon. There is no Otowi Tuff in this area; if it was ever deposited, it was completely eroded away before the Valles event. The Cerros del Rio was already cut by paleocanyons at the time of the Valles event, including one near present-day Water Canyon. Water Canyon panorama showing Bandelier Tuff in paleocanyon. Looking south from 35.791N 106.212W Water Canyon descends from the Pajarito Plateau in the third frame and continues to its confluence with White Rock Canyon in the first frame. Part of the east rim of White Rock Canyon is visible in the first frame, with Montoso Peak on the skyline. The feature of greatest geological interest is probably the large outcropping of light pink Bandelier Tuff in the south wall of Water Canyon in the second frame. This does not appear to be an outcropping that has slumped down the canyon wall. Since older Cerros del Rio basalt and underlying Santa Fe Group sediments form the rest of the canyon wall, this shows that the Bandelier Tuff filled a deep ancestral Water Canyon when it was erupted, and erosion has since re-cut the canyon, leaving this remnant on the canyon walls.
This is probably the path the pyroclastic flow followed to leave the isolated exposures of Bandelier Tuff on the east side of White Rock Canyon. As we move southwest from White Rock, the canyons become progressively deeper and the finger mesas thicker. They reach their maximum thickness in a paleocanyon in lower Frijoles Canyon, which we saw in the last chapter. Contact between Cerros del Rio and Bandelier Formations in lower Frijoles Canyon. 35 45.888N 106 15.661W The rock to the left is benmoreite of the Cerros del Rio. The steep contact with the Bandelier Tuff to the right shows that the latter filled a deep paleocanyon in the Cerros del Rio. This particular paleocanyon has been interpreted as the previous course of the Rio Grande, prior to the Bandelier Tuff eruption. The Tshirege Member thins out again around the San Miguel Mountains, which form a kind of island in the surrounding Tshirege Member surface. East of the San Miguel Mountains, in the southeast Jemez, the Bandelier Tuff sits on volcaniclastic beds of the Paliza Canyon Formation. Paliza Canyon Formation volcaniclastics under Bandelier Tuff. 35 46.001N 106 25.159W In this photograph, the Tshirege Member is at top, with Otowi Member exposed as white patches at two locations partway down the canyon. Below this is a jumble of heavily-eroded block and ash flows and lahars of the Paliza Canyon Formation. On the skyline is Aspen Ridge, part of the Keres highlands that acted as a topographical barrier to the Bandelier pyroclastic flows. The Tshirege Member lapped up against Aspen Ridge, so that the whole area between Aspen Ridge and the San Miguel Mountains became a relatively level surface of Tshirege Member. This has since been cut by deep canyons and displaced by faults. Panorama from a knoll on the east side of Aspen Ridge. 35 47.888N 106 30.012W The panorama begins to the west, and Aspen Ridge extends across the first four frames. The forest road to this area is visible along the ridge. Redondo Peak forms the skyline in the third frame, and Cerros del Abrigo and Cerro del Medio are visible in the fourth frame, with the north caldera wall behind. Rabbit Mountain dominates the fifth frame. On the other side of the foreground trees, we see the San Miguel Mountains in the distance in the fourth to last frame, with mesas of Bandelier Tuff in the nearer distance. The third from last frame looks almost directly down Bland Canyon. The last two frames look down the southern part of Aspen Ridge. Though the Keres Highlands generally formed an effective barrier against the pyroclastic flows, there are isolated outcrops of Tshirege Member along Peralta Canyon and of Otowi Member in the lower canyon. Tshirege Member in upper Peralta Canyon. 35 48.113N 106 30.943W The presence of these beds shows that Peralta Canyon already existed when the Toledo caldera collapsed 1.65 million years ago. The canyon bottom was filled with Otowi Member, which subsequently mostly eroded away, and then the canyon bottom was filled again with Tshirege Member when the Valles caldera collapsed 1.21 million years ago, which has also mostly eroded away. Beds of Bandelier Tuff reappear west of Cerro del Pino. These extend south to Borrego Mesa and west to Canon de San Diego in the southwest Jemez. Here the Bandelier Tuff lies directly on Permian red beds that must have already been exposed at the surface 1.6 million years ago. These contacts are sometimes quite dramatic.
Here is a case where the Permian red beds of the Yeso Group and the Glorieta Sandstone are cut by a sizable fault, which has been filled in by Bandelier Tuff. The upper beds of the Bandelier Tuff are not displaced by this fault, showing that it has not been active in at least 1.6 million years. Fault in Permian red beds buried by Bandelier Tuff. Looking east from near 35 43.073N 106 43.152W A little up the canyon, one can see hills of Yeso Group that were engulfed by the Bandelier Tuff, forming a dramatic discontinuity. Permian hills engulfed by Bandelier Tuff. Looking northeast from near 35 43.073N 106 43.152W The same contact is seen along the west side of Mesa de Guadalupe. Mesa de Guadalupe. 35 41.718N 106 44.783W The Tshirege Member caps most of the mesas of the Jemez Plateau west of the caldera. State Road 126 cut into Tshirege mesa top. 35 57.505N 106 42.768W In the northwest Jemez, the Bandelier pyroclastic flows were channeled into the low ground between the Sierra Nacimiento Mountains to the west and the La Grulla Plateau to the east, forming Mesa Pinabetosa: Mesa Pinabetosa. Looking southwest from near 36 7.650N 106 33.133W The mesa is somewhat obscured by haze, but is the low ridge below the skyline. As in the southwest Jemez, the Bandelier Tuff here sits directly on Permian redbeds. The Bandelier flows were channeled through the low ground between the La Grulla Plateau and the Tschicoma highlands in the northeast Jemez. Exposures of Tshirege Member begin a short distance below the caldera rim Proximal Tshirege Member. 36 00.695N 106 29.442W and fill the area between the two high plateaus. Polvadera panorama. Looking southeast from 36 06.874N 106 30.570W The mesa across Canones Canyon is Mesa del Medio. To the left one sees the mesa underlain by Santa Fe Group with a cap of Bandelier Tuff. To the right this gives way to basalt flows of the La Grulla volcanic center. Polvadera Peak is right of center and Cerro Pelon is the smaller peak just left of center. Both are dacite domes of the Tschicoma Formation. Bandelier Tuff below Polvadera Peak. Looking east from near 36 7.883N 106 32.745W Bandelier Tuff below Polvadera Peak. Looking SSE from 36 13.931N 106 28.821W The Tschicoma Highlands blocked the Tshirege flows from reaching most of El Alto, but a flow lapped onto the edge of the mesa on its west side. Tshirege Member flow on west edge of El Alto. The Tshirege Member filled much of the lower terrain in the Toledo Embayment, but it is not found again outside the caldera until we reach the north rim of Santa Clara Canyon. We are then back in the finger mesas of the Pajarito Plateau. We finish up with the intracaldera tuffs, formed within the caldera itself. These underlie Redondo Peak and some of the hilly terrain to its north. This tuff tends to be crystal-rich and densely welded, as in this outcrop on the north side of Valle Jaramillo. Intracaldera tuff near Valle Jaramillo. 35 54.857N 106 30.554W The white specks are crystals, probably of high-temperature alkali feldspar (sanidine). The dark patches were probably pumice caught up in the flow. Notice that these have been squashed flat, so that they are systematically elongated in the horizontal direction. These are called fiamme and are characteristic of a welded tuff, which is formed from ash so hot that the individual particles are soft and weld together after settling out on the ground. As we saw in San Diego Canyon, the Otowi Member is a relatively homogeneous unit.
Though it probably erupted in several pulses, these were close enough together that there was no time for individual flows to cool significantly before being buried under the next flow. The Otowi Member is therefore described as a simple cooling unit, with the entire thickness of the member sharing a common cooling history. The Tshirege Member was erupted in at least five separate pulses, with enough time between pulses for each bed to cool slightly before the next was deposited. The interval between pulses was nonetheless brief enough that only a small amount of cooling took place. The Tshirege Member is thus described as a compound cooling unit. Each simple cooling unit within a compound cooling unit characteristically has a more densely welded core, with less welded upper and lower surfaces. This is because both surfaces cool faster and have less time to weld, the lower because it is in contact with the slightly cooled surface of the previous flow. Boundaries between cooling units in a compound cooling unit are sometimes marked by a vapor gap, but this is not always present, and flows can sometimes be difficult to distinguish with certainty. The Tshirege Member shows occasional surge beds between flows. These are a good indication of a flow boundary. Surge beds in Tshirege Member along Camp May Road. Near 35 52.689N 106 20.615W Flows often form obvious ledges in the finger mesas, such as this ledge in upper Pueblo Canyon. Flow unit ledge in Tshirege Member, Bandelier Tuff. 35.892N 106.316W Further down canyon, three ledge levels are visible in the southern canyon wall (Los Alamos Mesa): Lithostratigraphic units in Tshirege Member, Bandelier Tuff, underlying Los Alamos Mesa. Looking west from near 35.873N 106.238W A fourth unit is seen at the top of the north canyon wall (Kwage Mesa): Lithostratigraphic units in Tshirege Member, Bandelier Tuff, underlying Kwage Mesa. Looking northwest from near 35.873N 106.238W These correspond to units A, B, C, and D of one classification scheme for the units making up the Bandelier Tuff. The B-C unit boundary is somewhat indistinct in Kwage Mesa, but is clearest at the right side of the photograph. Other areas in the Jemez show additional E and F units. This scheme was developed by geologist M.A. Rogers in 1995 and emphasized lithological differences between beds that could be easily mapped throughout the Jemez Mountains. Rogers' units are not necessarily individual cooling units or flows. A competing scheme was developed at about the same time by Broxton and Reneau. Their scheme is applicable mainly on the Pajarito Plateau and does attempt to divide the Tshirege Member into cooling units. Broxton and Reneau's Unit 1 corresponds roughly to Rogers' Units A and B, 2 to C, 3 to D, and 4 to E and F, though since the Tshirege Member is quite variable across the Jemez and the two schemes were devised for different purposes using different criteria, the correspondence is not exact, particularly away from the Pajarito Plateau. In both schemes, a unit may consist of one or more flows; for example, the A unit is thought to be a single flow, while the C unit may consist of three flows. The best nomenclature for the various units of the Tshirege Member is still being worked out by geologists. The south side of Los Alamos Mesa displays some of these unit boundaries. South side of Los Alamos Mesa. 35.871N 106.234W The densely welded tuff at the top of the mesa is the C unit.
Immediately beneath this unit is the B unit, which is characterized by a high content of pumice in its upper beds. This produces a characteristic spongy texture. South side of Los Alamos Mesa. Near 35.871N 106.234W Each pit marks where a pumice clast was present when the tuff was laid down. The clasts exposed on the surface were subsequently preferentially eroded to produce the pits, which often contain remnants of the original pumice clast. Such zones rich in pumice typically mark the upper portion of a flow, while the lower portion of a flow tends to be enriched in lithic fragments and in crystals of quartz and sanidine. The pumice content drops further down, but not abruptly. However, there is a very abrupt if subtle change in the color of the tuff, which becomes more pink. This can be inspected closely from the trail. Abrupt color change, possibly marking flow boundary. Near 35.871N 106.234W This is also exposed on the upper surface of the mesa at a low saddle. Abrupt color change, possibly marking flow boundary. Near 35.871N 106.234W The photograph shows an area about six inches across. You can see that there is no change at all in the texture of the rock — just a color change. This may be a flow boundary within the B unit. Two unit boundaries, probably corresponding to those dividing the A, B, and C units, can be examined at close range at Tsankawi Mesa, the mesa on which the Tsankawi ruins sit. The more prominent is the boundary between the A and B units, which forms a very distinctive notch or shelf. Above the notch, the lower part of the B unit tends to have prominent columnar fractures. Several geologists have concluded that the A-B boundary, as distinctive as it is, was formed after the tuff was deposited, through a peculiarity in the way the inner part of the flow was welded while giving off hot vapors. There is no change in the chemistry of the minerals across this vapor notch nor any change in the size and content of lithic and pumice clasts. Tsankawi Mesa. 35 51.608N 106 13.542W Tsankawi Mesa is the location of the Tsankawi ruins, which are administered by Bandelier National Monument. The Park Service maintains a trail to the mesa top, which comes to a ladder that takes one to the upper surface of the A unit. Upper surface of A unit. 35 51.640N 106 13.320W This surface extends along much of the south face of the mesa, providing a natural footpath. The next photograph looks back along this natural terrace towards Pajarito Mountain. Ledge formed on surface of A unit. 35 51.649N 106 13.300W Here is an example of the surface becoming a vapor notch. Boundary between A and B units. 35 51.659N 106 13.247W You can see that the ledge becomes a notch in the side of the mesa. South of Tsankawi Mesa, this boundary can be seen as a ledge on the nearby mesas. Mesas south of Tsankawi showing unit boundary. Looking southwest from 35 51.663N 106 13.125W This ledge is prominent in the cliffs north of White Rock, and can be traced to Water Canyon south of White Rock. Water Canyon. 35 48.167N 106 14.041W The boundary is most pronounced along the south face of the canyon in frames 2 through 5. This is the boundary between the A and B units, corresponding with Qbt 1g and Qbt 1v in the Broxton and Reneau scheme. In the latter scheme, the boundary is not thought to be a flow boundary, but a devitrification front. There is no change in the composition, size and number of clasts, or other obvious depositional features across the boundary.
However, the A/Qbt 1g unit is composed of tuff in which the glassy shards of the ash and pumice are unchanged, while the glass shards in the B/Qbt 1v unit have been devitrified. The transition is abrupt. The lower part of the B/Qbt 1v unit is relatively resistant and often shows columnar cooling joints. This transitions higher in the unit to soft tuff that generally forms slopes. The boundary between the B and C units is also accessible at Tsankawi Mesa. Here it is marked by a surge bed between the flows. Surge beds at bottom of unit C on Tsankawi Mesa. 35 51.662N 106 13.001W It is more common for the boundary to be marked by a notch. Notch at bottom of unit C on Tsankawi Mesa. 35 51.662N 106 13.001W Here's a nice shot from just a little further down the trail. Notch at bottom of unit C on Tsankawi Mesa. 35 51.671N 106 12.838W The notch between units is clearly visible. These units are also prominent further south, in Ancho Canyon. Tshirege Member in Ancho Canyon. 35 47.143N 106 15.543W The ledge atop Unit A/Qbt 1g is very distinct. The resistant lower bed of Unit B/Qbt 1v is particularly clear at left, and above is the softer upper portion of Unit B/Qbt 1v. On top are the resistant beds of Unit C/Qbt 2, which seem to form a double layer here. Unit D (Rogers classification) or 3 (Broxton and Reneau classification) appears to consist of multiple thin flows that are particularly rich in lithic clasts. There is a remnant of this unit on the south rim of Water Canyon. Unit D on south rim of Water Canyon. 35 47.434N 106 13.696W Lithic clasts weathered out of the tuff litter its surface. Unit D on south rim of Water Canyon. 35 47.446N 106 13.632W This unit probably marks the actual collapse of the caldera. Units D, E, and F are visible in upper Los Alamos Canyon. We saw this photograph earlier. Faults in Tshirege Member in Los Alamos Canyon. 35 52.689N 106 20.615W Unit F is the upper rim of the canyon. Unit E forms the slopes beneath, and the cliffs are Unit D. Tuffs often have a distinctive chemical signature that identifies them as the product of a single batch of lava. Thus, the Peralta Tuff discussed earlier in the book is significantly poorer in iron and richer in calcium than the Bandelier Tuffs, and the Tshirege Member is significantly richer in iron than the Otowi Member. However, there are systematic differences in composition even within the Tshirege Member. Chemical analysis shows that the first flows erupted during the supereruption were less rich in mafic minerals than those erupted later. This compositional zoning is interpreted as the gradual emptying of a magma chamber that was richer in mafic minerals towards its bottom than its top. The high-silica magma in the chamber was strongly differentiated, and the heavier mafic minerals had crystallized and begun to settle to the bottom. (With a magma as viscous as rhyolite magma, the settling process can be extremely slow.) The later flows also tend to be more densely welded than earlier flows, indicating that the lower part of the magma chamber was also hotter. While the Tsankawi Pumice erupted at a temperature of about 700 C, the topmost E and F units erupted at a temperature of about 850 C, and had a silica content of about 73% versus 77% for the earlier magmas. The compositional zoning is not obvious to the casual observer in the more distant parts of the Valles outflow sheet, but it becomes more obvious near the caldera, where the differences are more extreme.
For example, some of the exposures of the Tshirege Member just west of the caldera rim are quite dark in color. Mafic tuff along Forest Service Road 376 in the Jemez Plateau. 35 52.121N 106 41.964W Highly mafic Tshirege Member from near eastern caldera rim. 35.833N 106.364W Besides the dark color, suggesting high mafic content, the rock is much harder and noticeably denser than most samples of Tshirege Member. It is also particularly rich in crystals of quartz, feldspar, and biotite. These likely had settled towards the bottom of the magma chamber before the eruption. The Tshirege Member is generally more densely welded than the Otowi Member, and it seems to be much less prone to forming tent rocks. However, there are some impressive vapor phase pipes in lower San Juan Canyon. Vapor phase pipes in the Bandelier Tuff. 35.727N 106.622W These are formed where rising vapors within the fresh pyroclastic flow followed particular paths and hardened the tuff in a cylindrical shell around each path. The pipes are hard on the surface and soft within. Or so I'm told; I didn't want to disturb them enough to confirm this for myself. The Valles Caldera at the heart of the Jemez Mountains is a splendid example of a giant caldera. Though not the largest caldera known, it is relatively young, exceptionally well preserved, and shows most of the characteristic features of a giant caldera. This satellite photograph shows the Valles Caldera as it is today. The caldera measures about 12 miles (19 km) across and was formed by the Tshirege event. This emptied a magma chamber with a volume of 75 cubic miles (300 km3). The caldera has a number of mountains within it, which many writers have likened to a bear claw. The pad is the large area in the center, the toes are the smaller mountains north and northeast of the pad, and the thumb is South Mountain. We'll learn about the significance of these mountains in the next chapter. The topographic rim of the caldera is highlighted below. Topographic rim of Valles Caldera. The topographic rim is poorly defined to the northeast and southwest. In the southwest, the rim has been heavily eroded by the Jemez River in San Diego Canyon. In the northeast, erosion in upper Santa Clara Canyon has combined with subsidence to obscure the exact caldera rim. The area of subsidence here is known as the Toledo Embayment, and, as we've seen, it is something of a puzzle. The Toledo Embayment may be part of a fault zone connecting the Jemez Fault Zone in San Diego Canyon with the Embudo Fault Zone north of Espanola. Here is a map of precaldera rocks around the caldera. Relief map of the Jemez with pre-Toledo exposures highlighted in red and Toledo Interval exposures highlighted in yellow. Red areas are rocks older than the Toledo event. The exposures around the rim of the Valles caldera show the maximum possible extent of the Toledo caldera. Yellow areas are exposures older than the Valles event but no older than the Toledo event. These mark the maximum possible extent of the Valles caldera. The two are nearly identical except in the Toledo Embayment. Let's take a tour now of the caldera rim as it exists today. We'll begin with the eastern caldera rim. The light area making up the southeast portion of the caldera in the satellite photos is the Valle Grande. It is easily accessible from State Road 4, from which the visitor sees this panorama. Valle Grande. 35 51.096N 106 27.305W Valle Grande was probably the deepest part of the caldera when it first formed.
Earlier I showed a reconstruction of the basement beneath the caldera, which shows that the floor of the caldera dropped considerably further on its east side than on the west, almost like a trap door hinged to the west. On the west side of the caldera, the floor dropped about 1500 meters (5000'), while the floor dropped about 4600m (15000') on the east side. The entire caldera floor was deeply blanketed by the Bandelier Tuff, which tended to level out the visible surface. The right side of the panorama shows the eastern rim of the caldera, from the Toledo Embayment to Cerro Grande. This is the Sierra de los Valles, which also forms the western skyline of the city of Los Alamos. From left to right, the three peaks are Caballo Peak, Pajarito Mountain, and Cerro Grande. Switching our viewpoint now to the north side of the Valle Grande, we show the southeastern rim of the caldera. Caldera southeast rim from Cerro Pinon. 35 53.485N 106 29.538W At left in the middle distance is Cerro del Medio, a ring fracture dome. Beyond and to the right is Cerro Grande in the southeast caldera rim. Just right of center is Sawyer Dome, the southernmost major dome of the Tschicoma Formation. Between Cerro Grande and Sawyer Dome is a low point in the rim through which State Road 4 enters the caldera, and through which pyroclastic flows of the Valles event flooded the Bandelier area and surroundings. Rabbit Mountain is the irregular mass at right, while relatively low hills form the rim between Rabbit Mountain and Conchas Peak at far right. South Mountain obscures the south rim west of Conchas Peak. Here's this stretch of the rim viewed from the south. Panorama from east side of Aspen Ridge. 35 47.888N 106 30.012W The panorama begins to the west, and Aspen Ridge extends across the left side of the panorama, turning east to form the south rim of the caldera. Conchas Peak is on the skyline at left. Redondo Peak forms the skyline behind the eastward leg of Aspen Ridge, and Cerros del Abrigo and Cerro del Medio are visible to the right of Redondo Peak, with the north caldera wall behind. Rabbit Mountain dominates the left center of the panorama, just left of the clump of foreground trees. On the other side of the foreground trees, we see the San Miguel Mountains in the distance, with mesas of Bandelier Tuff in the nearer distance. Further to the right, we look almost directly down Bland Canyon. The far right of the panorama looks down the southern part of Aspen Ridge. We move now to the tent rocks in Vallecitos de los Indios on the south rim. Vallecitos de los Indios from south rim. 35 48.670N 106 37.247W This panorama is a bit more awkward than most, because there was simply not a fully clear spot on the rim. I finally moved a few feet to replace a frame which was blocked by a clump of large trees. The panorama begins with the caldera rim immediately to the west, with Otowi Member forming tent rocks in the foreground. The distant mesa with prominent cliffs of Tshirege Member, Bandelier Tuff, is Virgin Mesa on the west side of Cañon de San Diego. The red beds beneath, at the confluence of Vallecitos de los Indios and Cañon de San Diego, are Permian red beds of the Abo Formation. The grey outcrop just visible at the bottom of the canyon at the confluence is Battleship Rock. The plateau extending across much of the panorama is the Banco Bonito obsidian flow, which is the youngest volcanic flow in the Jemez at about 55,000 years. At center is Redondo Border and to its right is Redondo Peak, the second highest point in the Jemez.
Redondo Border and Redondo Peak are a resurgent dome, formed after the last supervolcano eruption 1.2 million years ago, when fresh magma injected into the old magma chamber pushed the floor of the caldera back up. Way up. The valley between Redondo Border and Redondo Peak is the Redondo Graben, a medial graben formed when the dome began to split apart as it was stretched upwards. The knob right of Redondo Peak is South Mountain, the second youngest eruption center at about 520,000 years age. Like the Banco Bonito, it erupted through the ring fracture that defines the area that collapsed to form the Valles Caldera. The caldera rim here is well outside the ring fracture. The walls of the caldera were unstable and collapsed into the caldera in massive avalanches. The knob to the right of South Mountain, which is actually beyond Redondo Peak, is Cerro del Abrigo, another ring dome on the northeast side of the caldera. It has been dated to almost exactly a million years old. The two peaks dominating the right side of the panorama are Los Griegos and Cerro Pelado, on the south rim of the caldera. Both are underlain by andesites of the Paliza Canyon Formation, dating back from 8.8 to 9.4 million years in age. Los Griegos has a slightly younger cap of dacite, which is the highest point on the south rim of the caldera. The rim becomes much lower but also much more even to their west (hidden by foreground vegetation.) The rim continues west towards Battleship Rock. We've seen this Here we are looking south at the rim. The white knob capped with dark beds at the top of the rim is the approximate location from which the previous panorama was taken. Cañon de San Diego cuts across the caldera rim here, which we pick up again near La Cueva. West caldera rim near La Cueva. 35 52.396N 106 38.644W Notice that the west rim is as low as any part of the rim we have seen so far. This reflects the trapdoor geometry of the caldera. Further north, the rim forms the west side of the narrow San Antonio Canyon. Caldera rim across from San Antonio 56.387N 106 38.741W Here Paleozoic rocks of the Abo Formation are present at the base of the rim, which is capped mostly with Otowi Member with a few isolated patches of Tsherige Member. Looking further up canyon. Upper San Antonio Canyon. 35 56.524N 106 38.782W We pick up the rim again in westernmost Valle San Antonio. Here the rim is lower than any other point outside Cañon de San Diego. Northwest caldera rim. 35 58.369N 106 35.744W The hill in the foreground (steeper to the left) and the two rugged outcrops behind them are giant slabs of Paliza Canyon Formation andesite that originally sat on the caldera rim behind them, but which broke off and slid onto the caldera floor immediately after caldera collapse. The remainder of the north rim is covered by this panorama from Warm Springs dome. Warm Springs. 35 58.298N 106 33.728W From this point, one can see almost the entire length of the northern moat of the caldera. The panorama begins looking down the north moat to the west, and at center is the point on the caldera wall directly north of Warm Springs. The lower cliffs exposed in the rim are Paliza Canyon Formation andesites while the top of the rim is La Grulla formation andesite. Cerro de la Garita, the high point of the north rim, is to the right. The right end of the panorama looks down Pipeline Road and the north moat towards the Toledo Embayment. We get a better view of the northwest rim and Toledo Embayment from a point further west. Valle Toledo. 
35 57.461N 106 28.890W In the middle distance at left is the northernmost part of the Cerro Santa Rosa complex and at right is Cerro Trasquilar, as known to geologists. (The Forest Service topographic map reverses these, confusingly.) Between in the distance is Cerro de la Garita. To the right of Cerro Trasquilar is more of the northeast rim, an area underlain by Santa Fe Group sediments intruded by both Paliza Canyon Formation and Bearhead Rhyolite. The right side of the panorama is dominated by Turkey Ridge, which lies across the mouth of the Toledo Embayment, while the dome at far right is Cerros de los Posos West, interpreted as a ring dome of the Toledo caldera. Cerros de los Posos is visible in the first photograph of this section, northeast of Cerro del Medio, and so we have completed our tour of the caldera rim. Relief map of the Jemez with caldera fill outcroppings highlighted in red. The topographic rim does not actually mark the ring fracture where the roof of the magma chamber collapsed. The steep walls left by collapse along the ring fracture were unstable, and subsequent landslides moved the rim outwards while partially burying the ring fracture. These early landslides are mostly buried by later sediments, but in a few places, particularly around Redondo Peak, the original caldera fill is exposed. Some of the best exposures are in the Redondo Graben, where megabreccias show where entire beds of older rock slid back into the caldera. Relief map of the Jemez with megabreccia outcroppings highlighted in red. The most accessible of the megabreccia blocks is located north of State Road 4 on the east side of Sulfur Creek. Megabreccia slab at Sulfur Creek. 35 52.516N 106 37.807 This low ridge is essentially a single large slab of Paliza Canyon Formation lava that broke off the caldera rim as the caldera collapsed, sliding down onto the caldera floor where we see it today. A somewhat more spectacular megabreccia is exposed below the northwest caldera wall, at the west end of Valle San Antonio. Caldera collapse landslide. Looking west from 35 58.357N 106 35.644W We saw this photograph earlier. Some of the largest megabreccia blocks are located near the center of the caldera, on what is now the northern slopes of Redondo Peak and Redondo Border. These form part of the ridges bounding Valle Jaramillo. Megabreccia along Valle Jaramillo. 35 54.628N 106 33.035W The ridge on the skyline is another chunk of andesite from the caldera wall that slid down, more or less intact, into the center of the caldera almost immediately after it formed. A spur from the ridge is underlain by a relatively small block of megabreccia, only a couple of hundred feet across. Megabreccia spur. 35 54.664N 106 32.069W The geologic map shows the entire ridge as megabreccia, but it looks like this broke into two blocks, with a smaller block at left and a much larger block at right. The road cut exposes part of the megabreccia Megabreccia in road cut. 35 54.582N 106 32.056W This rock is definitely andesite, though with some signs of hydrothermal alteration. It’s also a more or less solid block. It’s just the toe end of a very large block of andesite. Here is the main ridge, seen from the south: Megabreccia ridge north of Valle 54.285N 106 33.057W Notwithstanding the foreground trees, this shows just how large the megabreccia block is. Though now faulted in several places, the original block was over 1.6 km (1 mile) in length. 
There is no doubt the block is composed of the kind of andesites typical of the Paliza Canyon Formation. Paliza Canyon Formation andesite from megabreccia block. 35 54.447N 106 33.432W These boulders were weathered off the andesite megabreccia block. This block is one of the two largest megabreccia blocks in the caldera. The other is located just east of Redondito. Redondito with megabreccia block to its east (left). 35 54.093N 106 33.048W The knob east of Redondito is composed of Tschicoma Formation rock. Another large megabreccia block is found on the northern portion of Redondo Border. Northern part of Redondo Border. 35 54.447N 106 33.432W This ridge is the northern end of Redondo Border. The knob at top is a megabreccia block of Paliza Canyon Formation tuff. The presence of large and seemingly intact exposures of precaldera rock in the center of the caldera contributed to the early perception that there was once a very large central volcano where the caldera is now located. The precaldera rock was interpreted as the summit of this foundered volcano, poking through the surrounding beds of Bandelier Tuff and postcaldera sediments. However, drilling, careful mapping, and other geophysical data have conclusively shown that these are rootless beds of rock originally from the caldera rim. Along with massive intact slabs of caldera rim, considerable amounts of shattered rock slid back onto the caldera floor immediately after it formed. The still-jagged caldera rim then experienced rapid erosion which often took the form of debris flows, like those we've already seen in the Puye Formation. One such debris flow is found in lower Water Canyon west of San Antonio Campground, north of La Cueva. Debris flow at mouth of Water Canyon. 35 53.206N 106 38.695W The debris flow here takes the form of very large boulders of every composition from Abo sandstone to Tschicoma dacite. These form a jumble at the canyon mouth. Lag deposit on debris flow at mouth of Water Canyon. 35 53.200N 106 38.608W This is likely an example of a lag deposit, where the larger clasts eroded out of an unsorted bed are left behind when the finer clasts are washed away. Debris flows underlie much of the terrain north of Redondo Peak. Since these formed later, as Redondo Peak was pushed up, we'll take a look at them later on. The eruption of the Tshirege Member and the collapse of the Valles Caldera were not the end of volcanism in the Jemez. In the next chapter, we'll see how volcanism resumed almost immediately after the Valles event and has continued from time to time ever since.
Solving Equations And Inequalities Worksheet. Refer to the Linear Inequations and Linear Inequalities Worksheet to get solutions for all inequation problems. Geared towards eighth-grade math learners, this algebra worksheet gives students practice finding the number of solutions in a linear equation. These worksheets will produce twelve problems per page. These Algebra 1 Equations Worksheets will produce two-step problems containing integers. These math worksheets should be practiced often and are free to download in PDF format. Algebra is usually taught abstractly with little or no emphasis on what algebra is or how it can be used to solve real problems. Just as English can be translated into other languages, word problems can be "translated" into the math language of algebra and easily solved. Real World Algebra explains this process in an easy-to-understand format using cartoons and drawings. - Teachers, feel free to print the included PDF files for use in the classroom. - Designed for children in grades 4-9 with higher math ability and interest, but usable by older students and adults as well. - We also provide a separate answer book to make checking your answers easier! - Equations and inequalities are both mathematical sentences formed by relating two expressions to each other. - Algebra 1 or elementary algebra deals with solving algebraic expressions for a viable answer. An inequality shows, in graph form, values that are not equal. This is a fantastic bundle which includes everything you need to know about Understanding and Solving One-Variable Inequalities across 15+ in-depth pages. These are ready-to-use Common Core aligned Grade 6 Math worksheets. The rules for equations and inequalities are nearly alike; there are just a few extra rules to keep in mind when dealing with inequalities. Click the button below to get instant access to these premium worksheets for use in the classroom or at home. Graph the two equations and identify the points of intersection. These points will have x-values that produce the same y-values for both expressions. Get the Worksheet on Linear Inequations from this web page. The free Linear Inequalities Worksheet has different questions and solutions together with detailed explanations. Revise for your GCSE maths examination using the most complete maths revision cards available. These GCSE Maths revision cards are relevant for all major exam boards including AQA, OCR, Edexcel and WJEC. The revenue from every pack is reinvested into making free content on MME, which benefits hundreds of thousands of learners throughout the country. You can select the range of numbers to work with, as well as whole-number or decimal numbers. You may specify how many decimal places to round the answers to. This worksheet will produce ten problems per page. These Algebra 1 Equations Worksheets will produce absolute value problems with monomial and polynomial expressions.
To solve the inequality, we have to determine the values of x that make the value of the expression -3x + 20 greater than 5. As we have done in the problem above, write two new equations by setting y equal to each side of the original equation. Notice the subtle differences between the four number lines, and work out the solution represented by each. Solve the multi-step inequality, then pick out the graph that best expresses the solution to the inequality. Inequalities Practice Questions: Clear parentheses and fractions, rewrite the multi-step inequality with the variable on one side, find the solution, and visualize it by expressing it on a number line. This is a fantastic bundle which includes everything you need to know about Linear Equations & Inequalities across 21 in-depth pages. These are ready-to-use Common Core aligned Grade 7 Math worksheets. To solve an equation by graphing, write two new equations by setting y equal to each side of the original equation. This has the benefit that you can save the worksheet directly from your browser (choose File → Save) and then edit it in Word or another word processing program. Helping with Math is among the largest providers of math worksheets and generators on the web. We provide high-quality math worksheets for more than 10 million teachers and homeschoolers every year. While we continue to grow our extensive math worksheet library, you can get all editable worksheets available now and in the future. We add 100+ K-8, Common Core aligned worksheets each month. Download Inequalities Word Problems Worksheet PDFs. Our word problems cover one-step, two-step, distance, rate and time, mixture, and work problems. Linear equations and inequalities worksheets give children an idea of how to solve linear equations and find the solutions to inequalities. The questions include easy questions to find the value of a variable and move on to harder graphical or word problems. The possible values of x are 10, 11, 12, and so on. Below you will find many Maze Solving Equations Worksheets to use with your Algebra 1 class. Be sure to scroll down and check them out after studying the lesson. Click the image to be taken to that Equations Worksheet. These Algebra 1 Equations Worksheets will produce distance, rate, and time word problems with ten problems per worksheet. To solve an inequality by graphing, write two new equations by setting y equal to each side of the original inequality. Equations and inequalities are both mathematical sentences formed by relating two expressions to one another. An equation is a statement that asserts the equality of two expressions, and a linear inequality is an inequality which involves a linear function.
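The one extra rule this passage hints at (flip the inequality sign whenever both sides are multiplied or divided by a negative number) can be made concrete with the expression mentioned above, -3x + 20 > 5. The sketch below is purely illustrative; the helper function and the test values are not taken from any worksheet. It walks through the algebra in comments and then spot-checks the resulting solution set numerically.

```python
def satisfies(x):
    """True when x satisfies the inequality -3x + 20 > 5 discussed above."""
    return -3 * x + 20 > 5

# Algebraic steps:
#   -3x + 20 > 5
#   -3x > -15     (subtract 20 from both sides)
#   x < 5         (divide by -3 and flip the inequality sign)

# Numeric spot check of the solution set x < 5:
for x in [4, 4.9, 5, 6]:
    print(x, satisfies(x))   # True, True, False, False
```

On a number line, this solution set would be drawn as an open circle at 5 with shading extending to the left.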
This article was co-authored by wikiHow Staff. Our trained team of editors and researchers validate articles for accuracy and comprehensiveness. wikiHow's Content Management Team carefully monitors the work from our editorial staff to ensure that each article is backed by trusted research and meets our high quality standards. This article has been viewed 269,265 times. Dividing by a decimal number can look difficult at first. After all, no one taught you the "0.7 times tables." The secret is to change the division problem into a format that only uses whole numbers. Once you've rewritten the problem in this way, it becomes a regular long division problem. X Research source Part 1 of 2:Writing the Problem as an Ordinary Division Problem 1Write out your division problem. Use pencil in case you want to revise your work. - Example: What is 3 ÷ 1.2? 2Write the whole number as a decimal. Write a decimal point after the whole number, than write zeroes after the decimal point. Do this until both numbers have the same number of places to the right of the decimal point. This does not change the value of the whole number. X Research source Write the whole number as a decimal - Example: In the problem 3 ÷ 1.2, our whole number is 3. Since 1.2 has one place to the right of the decimal point, rewrite 3 as 3.0, so it also has one place after the decimal. Now our problem is 3.0 ÷ 1.2. - Warning: do not add zeros to the left of the decimal point! 3 is the same as 3.0 or 3.00, but it is not the same as 30 or 300. 3Move the decimal points to the right until you have whole numbers. In division problems, you're allowed to move the decimal points, but only if you move them by the same amount for each number. This lets you turn the problem into whole numbers. X Research source - Example: To turn 3.0 ÷ 1.2 into whole numbers, move the decimal points one space to the right. 3.0 becomes 30, and 1.2 becomes 12. Now our problem is 30 ÷ 12. 4Write the problem using long division. Put the dividend (usually the larger number) under the long division symbol. Write the divisor outside it. Now you have an ordinary long division problem using whole numbers. If you want a reminder of how to do long division, read the next section. Part 2 of 2:Solving the Long Division Problem 1Find the first digit of the answer. Start solving this just as you would normally, by comparing the divisor to the first digit of the dividend. Calculate the number of times the divisor goes into this digit, then write this number above that digit. X Research source - Example: We're trying to fit 12 into 30. Compare 12 to the first digit of the divisor, 3. Since 12 is larger than 3, it goes into it 0 times. Write 0 above the 3, on the answer line. 2Multiply that digit by the divisor. Write the product (the answer to the multiplication problem) down below the dividend. Write it directly below the first digit of the dividend, since this is the digit you just looked at. - Example: Since 0 x 12 = 0, write 0 underneath the 3. 3Subtract to find what's left over. Subtract the product you just found from the digit directly above it. Write the answer on a new line below. - Example: 3 - 0 = 3, so write 3 directly below the 0. 4Bring down the next digit. Bring the next digit of the dividend down next to the number you just wrote. - Example: Our dividend is 30. We've already looked at the 3, so the next digit to bring down is 0. Bring this down next to your 3 to make 30. 5Try to fit the divisor into the new number. 
Now repeat the first step of this section to find the second digit of your answer. This time, compare the divisor to the number you just wrote down on the lowest line. - Example:' How many times does 12 fit into 30? The closest we can get is 2, since 12 x 2 = 24. Write 2 in the second spot of the answer line. - If you're not sure what the answer is, try some multiplication problems until you find the largest answer that fits. For example, if it seems like 3 is about write, multiply out 12 x 3 and you'll get 36. This is too big, since we're trying to fit within 30. Try the next one down, 12 x 2 = 24. This does fit, so 2 is the correct answer. 6Repeat the steps above to find the next number. This is the same long division process used above, and for any long division problem: - Multiply the new digit on your answer line by the divisor: 2 x 12 = 24. - Write the product on a new line below your dividend: Write 24 directly underneath 30. - Subtract the lowest line from the one above it: 30 - 24 = 6, so write 6 on a new line underneath. 7Continue until you reach the end of the answer line. If there's still another digit left in your dividend, bring it down and continue solving the problem the same way. If you've reached the end of the answer line, go to the next step. X Research source - Example: We just wrote 2 at the end of the answer line. Go to the next step. 8Add a decimal to extend the dividend if necessary. If the numbers divided evenly, your last subtraction problem has "0" as the answer. That means you're done, and you have a whole number as the answer to your problem. But if you've reached the end of the answer line and there's still something left to divide, you'll need to extend the dividend by adding a decimal point followed by a 0. Remember, this does not change the value of the number. - Example: We're at the end of the answer line but the answer to our last subtraction problem is "6." Extend the "30" under the long division symbol by adding ".0" to the end. Write a decimal point at the same spot on the answer line as well, but don't write anything after it yet. 9Repeat the same steps to find the next digit. The only difference here is that you must bring the decimal point up to the same spot on the answer line. Once you've done that, finding the remaining digits of the answer is exactly the same. - Example: Bring down the new 0 down to the last line to make "60." Since 12 goes into 60 exactly 5 times, write 5 as the last digit on our answer line. Don't forget that we put a decimal on our answer line, so 2.5 is the final answer to our problem. QuestionHow do I divide a whole number and a decimal if the whole number is larger than the decimal?Community AnswerYou have to keep adding zeros to the end of the decimal until the whole number can fit inside it. QuestionWhy do we need to move the zero so it will disappear?Top AnswererIf you're asking about the zero in the quotient above, it's because 02.5 is the same as 2.5, and it's just easier to write it without the zero. QuestionWhen did the 6 on the bottom become 5 on the top?Top AnswererIt didn't. Instead, a zero was brought down to the bottom line, turning the 6 into a 60. Then 12 was divided into 60, giving 5, which was entered on the top. QuestionWhere does the decimal point go after dividing them?Community AnswerYou move the decimal point straight up from the dividend into the quotient. QuestionWhat if the dividend is smaller than the divisor?Top AnswererDivide as described above. You'll wind up with a quotient less than 1. 
QuestionDo I do the same thing if it is only one decimal and one whole number?Top AnswererYes. QuestionWhat if the numbers keep repeating?Top AnswererWhen you encounter a repeating decimal, you can choose to ignore the later numbers after they start repeating if you'd like. Your accuracy increases when you include later, repeating numbers, but the increase in accuracy is minor. QuestionHow do I solve 63/1.8 in long division?Top AnswererTransform the problem to 630 ÷ 18. The answer will be correct. QuestionHow do I rewrite a problem so that the divisor is a whole number?Top AnswererMove the decimal point enough places to the right to make the divisor a whole number. You must move the decimal point in the dividend the same number of places to the right. QuestionIf a dividend has a decimal point, is it the same operation as above?Top AnswererYes. Just put a decimal point in the quotient right above the one in the dividend. - If you follow the long division method correctly, you'll always end up with the decimal point in the right position, or no decimal point at all if the numbers divide evenly. Don't try to guess where the decimal goes; it's often different than where the decimal is in the numbers you started with. - You can write this as a remainder instead (so the answer to 3 ÷ 1.2 would be "2 remainder 6"). But now that you're working with decimals, your teacher probably expects you to solve the decimal part of the answer as well. - If the long division problem goes on for a long time, you can stop at some point and round to a nearby number. For example, to solve 17 ÷ 4.20, just calculate to 4.047... and round your answer to "about 4.05." - Remember, 30 ÷ 12 will give exactly the same answer as 3 ÷ 1.2. Don't try to "correct" your answer afterward by moving the decimals back. X Research source - ↑ https://www.khanacademy.org/math/arithmetic/arith-decimals/arith-review-dividing-decimals/v/dividing-decimals - ↑ Write the whole number as a decimal - ↑ https://www.montereyinstitute.org/courses/DevelopmentalMath/TEXTGROUP-1-8_RESOURCE/U03_L2_T2_text_final.html - ↑ https://www.mathsisfun.com/long_division.html - ↑ https://www.mathsisfun.com/long_division.html - ↑ http://www.mathsisfun.com/dividing-decimals.html About This Article To divide a whole number by a decimal, write out the division problem with both numbers represented as decimals with the same number of places to the right of the decimal point. For example, you would write 3.0 with one decimal place divided by 1.2. Then, move the decimal places to the right until you have 2 whole numbers. In this case, you would have 30 divided by 12. Solve this problem as a normal long division equation to get your final answer, which would be 2.5 in this case. Remember to include any decimal places in your answer. If you want to learn how to work through the long division problem, keep reading the article!
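To tie the steps above together, here is a rough sketch (an illustration, not a general-purpose routine) that mirrors the method exactly: shift both decimal points by the same amount until the divisor is a whole number, then long-divide, appending a decimal point and zeros whenever something is left over. The two calls at the end reproduce the worked example (3 ÷ 1.2 = 2.5) and the 63 ÷ 1.8 question answered above.

```python
def divide_decimals(dividend, divisor, places=6):
    """Rough illustration of the method above; not a robust decimal library."""
    # Scale both numbers by the same power of ten until the divisor is whole.
    while divisor != int(divisor):
        dividend *= 10
        divisor *= 10
    dividend, divisor = round(dividend), int(divisor)   # e.g. 3 and 1.2 become 30 and 12

    # Ordinary long division, "bringing down" zeros after the decimal point as needed.
    quotient, remainder = divmod(dividend, divisor)
    digits = [str(quotient), "."]
    for _ in range(places):
        if remainder == 0:
            break
        remainder *= 10                      # bring down a zero
        digit, remainder = divmod(remainder, divisor)
        digits.append(str(digit))
    return "".join(digits).rstrip(".")

print(divide_decimals(3, 1.2))    # 2.5
print(divide_decimals(63, 1.8))   # 35
```

Note that the quotient of 30 ÷ 12 really is the same as the quotient of 3 ÷ 1.2, which is why no correction is needed afterward.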
Meteorite crater, depression that results from the impact of a natural object from interplanetary space with Earth or with other comparatively large solid bodies such as the Moon, other planets and their satellites, or larger asteroids and comets. For this discussion, the term meteorite crater is considered to be synonymous with impact crater. As such, the colliding objects are not restricted by size to meteorites as they are found on Earth, where the largest known meteorite is a nickel-iron object less than 3 metres (10 feet) across. Rather, they include chunks of solid material of the same nature as comets or asteroids and in a wide range of sizes—from small meteoroids (see meteor and meteoroid) up to comets and asteroids themselves. Meteorite crater formation is arguably the most important geologic process in the solar system, as meteorite craters cover most solid-surface bodies, Earth being a notable exception. Meteorite craters can be found not only on rocky surfaces like that of the Moon but also on the surfaces of comets and ice-covered moons of the outer planets. Formation of the solar system left countless pieces of debris in the form of asteroids and comets and their fragments. Gravitational interactions with other objects routinely send this debris on a collision course with planets and their moons. The resulting impact from a piece of debris produces a surface depression many times larger than the original object. Although all meteorite craters are grossly similar, their appearance varies substantially with both size and the body on which they occur. If no other geologic processes have occurred on a planet or moon, its entire surface is covered with craters as a result of the impacts sustained over the past 4.6 billion years since the major bodies of the solar system formed. On the other hand, the absence or sparseness of craters on a body’s surface, as is the case for Earth’s surface, is an indicator of some other geologic process (e.g., erosion or surface melting) occurring during the body’s history that is eliminating the craters.
About This Chapter Correlation & Regression in Statistics - Chapter Summary and Learning Objectives Use the fun and flexible videos in this chapter to learn about simple linear regression, the correlation coefficient and scatterplots. These videos break down large concepts into easy-to-understand chunks that can be viewed in about 5-6 minutes. Each lesson is taught by a subject expert who utilizes animations, illustrations and examples to bring correlation and regression topics to life. By watching these videos, you'll be able to: - Interpret scatterplots and linear relationships - Find the coefficient of determination - Analyze residuals - Interpret slope of a linear model - Transform nonlinear data |Creating & Interpreting Scatterplots: Process & Examples||Outline the steps used to create scatterplots and learn to interpret them.| |Simple Linear Regression: Definition, Formula & Examples||Understand the formula used in simple linear regression.| |Problem Solving Using Linear Regression: Steps, Examples & Quiz||Practice solving problems by using linear regression.| |Analyzing Residuals: Process, Examples & Quiz||Study methods for finding violations of regression assumptions using residual analysis.| |Interpreting the Slope & Intercept of a Linear Model: Lesson & Quiz||Discover how to predict statistical info by interpreting slope and intercept.| |The Correlation Coefficient: Definition, Formula & Example||Learn to use a formula to find the correlation coefficient.| |How to Interpret Correlations in Research Results||Evaluate the purpose of correlations and learn how to interpret correlations that are a part of research results.| |Correlation vs. Causation: Differences, Lesson & Quiz||Determine ways to identify correlations and causations and find out how they differ.| |Interpreting Linear Relationships Using Data: Practice Problems, Lesson & Quiz||Practice interpreting linear relationships with examples and sample problems.| |Transforming Nonlinear Data: Steps, Examples & Quiz||Recognize the steps used to transform nonlinear data to allow for the use of linear models.| |Coefficient of Determination: Definition, Formula & Example||Review ways to find the coefficient of determination and describe its relationship with variation.| 1. Creating & Interpreting Scatterplots: Process & Examples Scatterplots are a great visual representation of two sets of data. In this lesson, you will learn how to interpret bivariate data to create scatterplots and understand the relationship between the two variables. 2. Simple Linear Regression: Definition, Formula & Examples Simple linear regression is a great way to make observations and interpret data. In this lesson, you will learn to find the regression line of a set of data using a ruler and a graphing calculator. 3. Problem Solving Using Linear Regression: Steps & Examples Linear regression can be a powerful tool for predicting and interpreting information. Learn to use two common formulas for linear regression in this lesson. 4. Analyzing Residuals: Process & Examples Can you tell what's normal or independent, and what's not? Sometimes we need to figure this out in the world of statistics. This lesson shows you how as it explains residuals and regression assumptions in the context of linear regression analysis. 5. Interpreting the Slope & Intercept of a Linear Model You've probably seen slope and intercept in algebra. These concepts can also be used to predict and understand information in statistics. Take a look at this lesson! 6. 
The Correlation Coefficient: Definition, Formula & Example The correlation coefficient is an equation that is used to determine the strength of the relationship between two variables. This lesson helps you understand it by breaking the equation down. 7. How to Interpret Correlations in Research Results Perhaps the most common statistic you'll see from psychology is a correlation. Do you know how to correctly interpret correlations when you see them? This lesson covers everything you need to know. 8. Correlation vs. Causation: Differences & Definition When conducting experiments and analyzing data, many people often confuse the concepts of correlation and causation. In this lesson, you will learn the differences between the two and how to identify one over the other. 9. Interpreting Linear Relationships Using Data: Practice Problems Understanding linear relationships is an important part of understanding statistics. This lesson will help you review linear relationships and will go through three practice problems to help you retain your knowledge. When you are finished, test out your knowledge with a short quiz! 10. Transforming Nonlinear Data: Steps & Examples Sometimes we have data sets that we need to analyze and interpret, but it's difficult because the data is nonlinear. This lesson will teach you how to transform nonlinear data sets into more linear graphs. 11. Coefficient of Determination: Definition, Formula & Example The coefficient of determination is an important quantity obtained from regression analysis. In this lesson, we will show how this quantity is derived from linear regression analysis, and subsequently demonstrate how to compute it in an example.
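Several of the quantities this chapter covers (the correlation coefficient, the slope and intercept of the least-squares line, and the coefficient of determination) can be computed directly from their textbook definitions. The sketch below uses a small made-up data set, not one taken from the lessons, purely to show how the pieces fit together.

```python
from math import sqrt

# Made-up bivariate data, for illustration only.
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]

n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n

sxx = sum((xi - mean_x) ** 2 for xi in x)
syy = sum((yi - mean_y) ** 2 for yi in y)
sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))

r = sxy / sqrt(sxx * syy)   # correlation coefficient
b = sxy / sxx               # slope of the regression line y = a + b*x
a = mean_y - b * mean_x     # intercept
r2 = r ** 2                 # coefficient of determination

print(f"r = {r:.4f}, slope = {b:.4f}, intercept = {a:.4f}, r^2 = {r2:.4f}")
```

Plotting the (x, y) pairs as a scatterplot and overlaying the line a + b*x is a quick visual check that the fit and the strong positive correlation make sense.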
Logarithms - Napier's Wonderful Invention! Lesson 9 of 14 Objective: SWBAT use logarithms to solve a simple exponential equation. I won't mention this to the students, of course, until they're finished. Since they've seen "trick" questions on these sprints before (such as 0^-1 = x), some of them may write something like "undefined" or "impossible" for x. That's fine for now. The Need for Logarithms When the sprint is over, it's time to introduce logarithms. I use the problem from the sprint as a concrete example to show that we need some way to "find x if b^x = a". In other words, we need to define the inverse of the exponentiation operation, just like division is the inverse of multiplication. By experimenting with a calculator, we find that if 95 is regarded as a power of 4, then "the logarithm of 95 to the base 4 is approximately 3.2849278". I show the students how to write and say this. I also emphasize that as strange as the word "logarithm" seems, it can simply be thought of as a synonym for "exponent". The difference is that we tend to use the word logarithm when we're trying to solve for an unknown exponent. I explain it this way: For good measure, I'll ask the students to try another one on their own: 4^x = 572. (This number will come up later in the lesson.) They should approximate it with their calculators, and I'll confirm the true value out to seven decimal places: 4.5799357. (For these initial problems, I've intentionally used a base other than 10 or e because I want my students to first "find the unknown exponent" via approximation. By experimenting on their own, some may figure out what the log and ln keys are for on their calculators, and I applaud them for it. I also do not introduce the logarithm notation yet, because I want my students to get comfortable with the word first. Later they'll see the notation as an abbreviation of the verbal expression.) So, where did logarithms come from? It's time to launch into a little history lecture about Sir John Napier, Baron of Merchiston (1550 - 1617). (See my notes. For much more detail check out this essay. Also, I highly recommend Eli Maor's e: The Story of a Number, Chapters 1 & 2. This is probably the best place to begin learning more about logarithms.) Napier was a bit of a renaissance man, and mathematics was one of his passions. He was particularly interested in developing more efficient methods of calculating. One of these inventions - the logarithm - was introduced and explained in his 1614 work Mirifici Logarithmorum Canonis Constructio, just three years before his death... As I tell the story, I emphasize these points: 1. Napier postulated that every number may be regarded as a power of any other number. (Recall that a postulate is an unproved assumption. Is Napier's postulate reasonable?) 2. Napier's logarithms are exactly the same as exponents. Napier picked a single number to use consistently as the base (1 - 10^-7) and then found the exponent/logarithm for other numbers with reference to this base. Knowing the exponents made computations easier. Which is Easier? Now I say, "We've established what a logarithm is (a new name for something familiar) and who invented them. But why? Why were mathematicians and scientists so excited about them? What motivated them to immediately begin computing logarithms for all of the numbers from 1 - 100,000 ... to 10 decimal places ... by hand?! Why did mathematicians keep using these tables of logarithms for the next 350 years?" 
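The lesson above deliberately holds off on the log notation and the calculator's log keys, but for checking the two classroom problems it may help to note the change-of-base identity log_b(a) = ln(a) / ln(b). A quick sketch using Python's standard library (any scientific calculator works the same way):

```python
from math import log

# "Find x so that 4**x equals the given number" via change of base: log(a, 4) = ln(a)/ln(4).
print(log(95, 4))    # ~3.2849278, matching the value quoted in the lesson
print(log(572, 4))   # ~4.5799357

# Sanity check: raising 4 to these powers recovers the original numbers.
print(4 ** log(95, 4), 4 ** log(572, 4))   # ~95.0, ~572.0
```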
With this question hanging in the air, I pass out these problems (each student gets just one), face down. It's going to be a race, so no peeking! The slips of paper look the same, and the students don't know that half the class gets a subtraction problem, while the other half gets a division problem. Remember, Napier didn't have a calculator, so you don't get one either. Ready, set, go! (Check out this video to see how the race ends.) The point is that addition/subtraction is much easier (and typically more error-free) than multiplication/division. After it's all over, I'll write the two problems on the board and ask the students to use a calculator to solve them. Then, I'll ask the students to all take the difference of the subtraction problem and raise 4 to that power. Lo and behold, it's the quotient of the division problem! How Napier Used Logarithms To summarize, I'll say: "The takeaway from the race is two-fold. First, subtraction is easier than division, so it would be really nice if we could find a way to turn division problems into subtraction problems. Second, the numbers used in the subtraction problem were the logarithms of the numbers in the division problem. And, as we saw, the difference was the logarithm of the quotient." At this point, I'm going to pause expectantly. As they mull over what they've just seen, I expect the light will begin to dawn for a number of students. "Of course! If you divide two powers, then you find the difference of the exponents! 572 and 95 are both powers of 4, so if we find the difference of the exponents - I mean, logarithms - then that should give us the exponent (or logarithm) for the quotient. If we raise 4 to that power, we'll get the actual quotient." I'll do my best to stay quiet now and let the students do the explaining. I might ask if they could illustrate with a simpler division problem, such as 32 / 8. Once it seems like everyone understands what we're seeing, I'll use this focus on division to introduce the meaning of the word logarithm - "ratio number" - as an aside. Finally, I'll summarize by saying that if Napier wanted to divide two numbers, he'd look up their logarithms in his handy-dandy table and then find the difference of the logs. (Students are always shocked to hear about these tables: hundreds of thousands of logs calculated by hand to many decimal places!) Next, he'd find the number whose logarithm is equal to this difference; that number would be the quotient. Along the way, I'd be sure to illustrate how this is identical to the familiar property of exponents. This is a great point in the lesson to reiterate how important logarithms were to scientists and mathematicians in Napier's day. His invention made their computations much easier and much less prone to error! The students have seen and done a lot today, but I want to leave them with one last thing to think about. I'll write the following homework problems on the board: 1. Find an approximate value for x, correct to three decimal places: 3^x = 17 2. Find an approximate value for x, correct to three decimal places: 3^x = 23 3. How might Napier use your x-values to multiply 17 by 23? A task like this is also a great one for a 3-2-1 Exit Ticket. It'll be useful to see the students' responses before the next lesson!
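The race and the homework can be summarized in the same style. This is only a sketch of the idea, not part of the lesson materials: the base-4 numbers are the ones used earlier (572 and 95), and the homework multiplication uses base 3, replacing multiplication by addition of logarithms just as Napier's tables replaced division by subtraction.

```python
from math import log

# Division via subtraction of logarithms, base 4 as in the lesson.
diff = log(572, 4) - log(95, 4)
print(4 ** diff, 572 / 95)         # both ~6.0210526

# Homework: 3**x = 17 and 3**x = 23, then multiply 17 by 23 via addition of logs.
x1, x2 = log(17, 3), log(23, 3)
print(round(x1, 3), round(x2, 3))  # ~2.579 and ~2.854
print(3 ** (x1 + x2), 17 * 23)     # both ~391.0
```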
Turning points in Physics in a Box Five key turning points in Physics are illustrated in our Lab in a Box scheme. Schools and Colleges can borrow five experiments to illustrate key discoveries in the study of Physics. Supporting the A level Option “Turning Points in Physics” 1. Electron Diffraction The wave-particle duality concept is central to understanding quantum physics. The A level Specification introduces the de Broglie equation and this experiment uses it along with the diffraction equation to find the spacing between carbon atoms in graphite. 2. Measuring the specific charge of an electron (or the e/m ratio) Measuring the e/m ratio of the electron was very important so that physicists at the beginning of the 1900s could gain a better understanding of this newly discovered particle. The experiment uses equations from circular motion and forces on electrons in magnetic fields, which is part of the A2 physics syllabus as well as being in the Turning Points in Physics Module. The experiment needs a fully darkened space. 3. Microwave Interferometer One of the most important turning points in Physics was the Michelson-Morley experiment, which implied that light did not need a medium to propagate. The microwave interferometer instructs students on the basics and the box has support material which discusses the famous experiment. Also in the box is a double slit apparatus which can be used to find the wavelength of the microwaves using Young’s Slits equation. 4. Millikan’s Oil Drop Experiment It was always important to measure the charge of an electron. Millikan’s ingenious experiment is available here for students to do themselves. They must find a drop, then find a voltage which will cause it to hover. The students must then measure the terminal velocity when it falls freely. 5. Planck's Constant Planck’s constant must be one of the most used constants in modern physics. This experiment uses the photoelectric effect and Einstein's equation to measure the constant h. The box also contains a class set of LED boxes where the constant can be measured using a voltmeter and an ammeter using the equation E = hf. To book Turning Points in Physics in a Box please contact firstname.lastname@example.org
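As a hedged illustration of the LED method mentioned in item 5: if an LED with emission wavelength λ just begins to light at threshold voltage V, then to a good approximation eV = hf = hc/λ, so h ≈ eVλ/c. The wavelength and voltage below are assumed round numbers chosen for illustration, not measurements from the loan kit.

```python
e = 1.602e-19        # elementary charge, C
c = 3.00e8           # speed of light, m/s

wavelength = 620e-9  # assumed red LED wavelength, m
V_threshold = 2.0    # assumed threshold voltage, V

f = c / wavelength          # photon frequency, Hz
h = e * V_threshold / f     # estimate of Planck's constant
print(f"h ≈ {h:.2e} J·s (accepted value ≈ 6.63e-34 J·s)")
```

In practice a class would repeat this for several LED colours and take the slope of eV against f, which averages out the systematic offsets of any single diode.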
The activation energy in chemistry and biology is the threshold energy, or the energy that must be overcome in order for a chemical reaction to occur. Activation energy may otherwise be denoted as the minimum energy necessary for a specific chemical reaction to occur. The activation energy of a reaction is usually denoted by Ea. Basically, activation energy is the height of the potential barrier separating two minima of potential energy (of the reactants and of the products of reaction). For a chemical reaction to have a noticeable rate, there should be a noticeable number of molecules with energy equal to or greater than the activation energy. - Main article: Collision theory In what is known as the "collisional model", there are three necessary requirements in order for a reaction to take place: - 1. the molecules must collide to react. If two molecules simply collide, however, they will not always react; therefore, the occurrence of a collision is not enough. The second requirement is that: - 2. there must be enough energy (energy of activation) for the two molecules to react. This is the idea of a transition state; if two slow molecules collide, they might bounce off one another because they do not contain enough energy to reach the energy of activation and overcome the transition state (the highest energy point). Lastly, the third requirement is: - 3. the molecules must be oriented with respect to each other correctly. For the reaction to occur between two colliding molecules, they must collide in the correct orientation, and possess a certain, minimum, amount of energy. As the molecules approach each other, their electron clouds repel each other. Overcoming this repulsion requires energy (activation energy), which is typically provided by the heat of the system; i.e., the translational, vibrational, and rotational energy of each molecule, although sometimes by light (photochemistry) or electrical fields (electrochemistry). If there is enough energy available, the repulsion is overcome and the molecules get close enough for attractions between the molecules to cause a rearrangement of bonds. At low temperatures for a particular reaction, most (but not all) molecules will not have enough energy to react. However there will nearly always be a certain number with enough energy at any temperature because temperature is a measure of the average energy of the system — individual molecules can have more or less energy than the average. Increasing the temperature increases the proportion of molecules with more energy than the activation energy, and consequently the rate of reaction increases. Typically the activation energy is given as the energy in kilojoules needed for one mole of reactants to react. Mathematical formulation The Arrhenius equation gives the quantitative basis of the relationship between the activation energy and the rate at which a reaction proceeds. From the Arrhenius equation, the activation energy can be expressed as Ea = -RT ln(k/A), where k is the rate constant, A is the frequency factor for the reaction, R is the universal gas constant, and T is the temperature (in kelvins). The higher the temperature, the more likely the reaction will be able to overcome the energy of activation. A is a steric factor, which expresses the probability that the molecules contain a favorable orientation and will be able to proceed in a collision.
In order for the reaction to proceed and overcome the activation energy, the temperature, orientation, and energy of the molecules must be substantial; this equation manages to sum up all of these things. Because Ea for most chemical reactions is in the few-electronvolt range (as chemical reactions only involve exchange of outermost electrons between atoms), raising the temperature by 10 kelvins (at room temperature kT ≈ 0.026 eV) approximately doubles the rate of a reaction (in the absence of any other temperature dependent effects) due to an increase in the number of molecules that have the activation energy (as given by the Boltzmann distribution). Transition states The transition state along a reaction coordinate is the point of maximum free energy, where bond-making and bond-breaking are balanced. Transition states are only in existence for extremely brief (10⁻¹⁵ s) periods of time. The energy required to reach the transition state is equal to the activation energy for that reaction. Multi-stage reactions involve a number of transition points; here the activation energy is equal to the one requiring the most energy. After this time either the molecules move apart again with original bonds reforming, or the bonds break and new products form. This is possible because both possibilities result in the release of energy (shown on the enthalpy profile diagram, Fig-1, as both positions lie below the transition state). A substance that modifies the transition state to lower the activation energy is termed a catalyst; a biological catalyst is termed an enzyme. It is important to note that a catalyst increases the rate of reaction without being consumed by it. In addition, while the catalyst lowers the activation energy, it does not change the energies of the original reactants or products. Rather, the reactant energy and the product energy remain the same and only the activation energy is altered (lowered).
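The temperature claim in the paragraph above (a 10 kelvin rise roughly doubling the rate) can be checked with a short sketch of the Arrhenius equation k = A·exp(-Ea/(RT)). The activation energy used below, 50 kJ/mol (roughly half an electronvolt per molecule), is an assumed typical value chosen only to illustrate the point; the frequency factor A cancels in the ratio.

```python
from math import exp

R = 8.314      # gas constant, J/(mol·K)
Ea = 50_000    # assumed activation energy, J/mol

def rate_ratio(T1, T2):
    """Ratio k(T2)/k(T1) from the Arrhenius equation; the prefactor A cancels."""
    return exp(-Ea / (R * T2)) / exp(-Ea / (R * T1))

print(rate_ratio(298, 308))   # ~1.9, i.e. a 10 K rise roughly doubles the rate
```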
Examining “Necessary and Proper,” the Jefferson-Hamilton duel, and Federalism The story of the post-Declaration of Independence (1776) Articles of Confederation (1784) is well known. Bluntly put, the Articles did not work. Not only was there no provision for one chief executive for the entire nation, trade barriers erected by some states against others and other political problems threatened to kill our new nation in its cradle. The Constitution of the United States of America was designed to correct those problems, among others, by creating a federal union. The preamble of the Constitution of the United States of America explicitly states its goal: to “form a more perfect Union. . . .” Article I, Section 2, provides for state-based elections for members of the federal Congress, state residency for election to federal office, tax apportionment “among the several states,” at least one Representative from each state, and vacancies in state representation. Article I, Section 3, for two senators from each state, and state residency. Article I, Section 8, provides that “The Congress shall have Power To . . . regulate Commerce . . . among the several States,” Article II provides for state-appointed electors to choose the president and vice president of the United States. Article IV provides that “Full Faith and Credit shall be given in each State to the public Acts, Records, and judicial Proceedings of every other State,” that “The Citizens of each State shall be entitled to all Privileges and Immunities of Citizens in the several States,” that alleged criminals can be extradited from one state to another, that new states may be admitted “into this Union,” and that “The United States [the federal government] shall guarantee to every State in this Union a Republican Form of Government, and shall protect each of them against Invasion; and . . . domestic violence.” Article V provides for state participation in amendment of the federal Constitution. Article VI provides that “This Constitution, and the Laws of the United States [the federal government] which shall be made in Pursuance thereof; and all Treaties . . . shall be the supreme Law of the Land; and the judges in every State shall be bound thereby, any Thing in the Constitution or Laws of any State to the Contrary notwithstanding.” Article VII provides for state ratification of the federal Constitution. Although one can argue that there are some ambiguities in the Constitution, the Preamble’s expressed intent to create a federal union, and the various examples of it just referenced, could not be clearer. And after the Constitution’s ratification in 1787, if there was any lingering doubt the new nation was intended to be, and had become, a federal republic — consisting of a national government made up of constituent states, each of which possessed its own residual powers — the Tenth Amendment provided “[t]he powers not delegated to the United States [the federal government] by the Constitution, nor prohibited by it to the States, are reserved to the States, respectively, or to the people.” Clearly, there was to be in our constitutional system, a division of power. Whether it was to be equally divided, is another matter. Especially in light of Article I, Section 8: “The Congress shall have Power . . . 
[t]o make all Laws which shall be necessary and proper for carrying into Execution the foregoing Powers, and all other Powers vested by this Constitution in the [federal] Government of the United States, or in any Department or Officer thereof.” (My italics.) What are we to make of all these provisions? As I wrote in The Supreme Court Opinions of Clarence Thomas (1991 – 2011) (2d ed.) . . . the Constitution of the United (i.e., combined into one federal Union) States expressly affirms the existence of reserved powers in the states and in the people, respectively. Just as the first nine amendments are an assurance that individual rights were to be protected from the newly formed federal government, the Tenth Amendment is a guarantee that states and their citizens would retain their powers as against the national government—except as to powers expressly granted in the Constitution to the federal government, or expressly denied to the states. Former Attorney General of the United States Edwin Meese III has written that “[t]he institutional design [of the Constitution] was to divide sovereignty between two different levels of political entities, the nation and the states. This would prevent an unhealthy concentration of power in a single government. It would provide, as Madison said in The Federalist No. 51, a ‘double security . . . to the rights of the people.’ Federalism, along with separation of powers, the Framers thought, would be the basic principled matrix of American constitutional liberty. ‘The different governments,’ Madison concluded, ‘will control each other; at the same time that each will be controlled by itself’.” It is believed by some constitutional law scholars that the most important opinion of the scores written by John Marshall during his more than thirty years as Chief Justice was M'Culloch v. Maryland, the first case to rule on the meaning and scope of the “Necessary and Proper” Clause. At the Constitutional Convention of 1787, the delegates were faced with the task of providing the government-to-be with specifically enumerated, delegated powers. As to those of Congress, Article I, Section 8, lists dozens. For example, Clause 8 delegates to Congress the power “To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.” All well and good. But how was Congress supposed to accomplish that? Indeed, how was Congress supposed to organize the new government and implement the many powers and tasks delegated to it? The question was of crucial importance because under the earlier, no longer acceptable, Articles of Confederation, it had been provided that “Each state retains its sovereignty, freedom and independence, and every Power, Jurisdiction and right, which is not by this confederation expressly delegated to the United States, in Congress assembled." The Constitutional Convention’s Committee on Detail considered the question. One idea was simply to vest Congress with the power to “organize the government.” Another was what became the Necessary and Proper Clause. Congress was empowered: To make all Laws which shall be necessary and proper for carrying into Execution the foregoing Powers, and all other Powers vested by this Constitution in the [federal] Government of the United States, or in any Department or Officer thereof. These 39 words made many Americans extremely nervous and unhappy. They had good reason. 
In the heated controversy over ratification of the Constitution, there was vociferous opposition to the provision, mainly because it was seen as negating the principle of enumerated, expressly delegated powers which conceptually underlay the Constitution generally and Article I, Section 8 in particular. There were pro-ratification Federalists, however, who read the Necessary and Proper Clause broadly, among them co-author with James Madison (and John Jay) of the pro-ratification essays called The Federalist, Alexander Hamilton. Accordingly, as Secretary of the Treasury, in 1790 Hamilton lobbied Congress to charter a national bank, concededly not an enumerated power of Congress under Article I, Section 8, of the Constitution, nor indisputably a “let’s get organized” power such as building post offices. Hamilton wanted the bank for the purpose of dealing with the nation’s monetary and economic systems. Hamilton’s bill passed Congress in February 1791, but President Washington had reservations about its constitutionality. He asked Hamilton and Jefferson (and Attorney General Edmund Randolph) to provide written opinions. Hamilton was for the bank. Jefferson was opposed (as was Randolph). Predictably, Hamilton in his Opinion supporting the bank argued that “every power vested in a government is in its nature sovereign, and includes, by force of the term a right to employ all the means requisite and fairly applicable to the attainment of the ends of such power, and which are not precluded by restrictions and exceptions specified in the Constitution, or not immoral or not contrary to the essential ends of political society. . . .” (Italics in original; my bold.) As we shall see, the bolded words are a “switcheroo,” reversing the fundamental promise of the pre-Bill of Rights —limited government, possessing only enumerated, expressly delegated powers—much like bait-and-switch in advertising. Jefferson’s opposition is essentialized in this passage from his Opinion: “I consider the foundation of the Constitution as laid on this ground—that all powers not delegated to the United States, by the Constitution, nor prohibited by it to the states, are reserved to the states or to the people . . . . To take a single step beyond the boundaries thus specially drawn around the powers of Congress, is to take possession of a boundless field of power, no longer susceptible of any definition.” (Italics in original.) Note the constitutional difference between Jefferson and Hamilton. Adverting to “the foundation of the Constitution,” our President-to-be asked “where is the enumerated, delegated power for the federal Congress to charter a bank?” Hamilton, was asking “where in the Constitution’s Article I, Section 8 (or anywhere else) is a “restriction” or prohibition for the federal Congress to charter a bank? The issue was squarely framed, and although it would take two decades more for the ultimate constitutional battle to resolve it and undermine Americans’ individual rights, the principle of limited government and free market capitalism, in the short term Hamilton’s Opinion prevailed. The bank was chartered, eventually its charter lapsed, and was not renewed. However, in 1816 Congress chartered a second Bank of the United States. It established branches in several states, and thus the stars were aligned for one of the worst Supreme Court decisions/opinions, the case of M'Culloch v. Maryland. 
In 1818, the State of Maryland enacted a law that taxed the notes of all banks that were not chartered by the state—i.e., the second Bank of the United States. The Maryland branch refused to pay the tax, the state sued, and eventually the case ended up in the Supreme Court of the United States. While ostensibly the case was about the tax, the threshold issue for the Court was whether the Congressional legislation creating the bank was constitutional. The answer to that in turn depended on whether, under Article I, Section 8, of the Constitution Congress possessed the power to charter the bank. Indeed, the second paragraph of Chief Justice Marshall’s opinion in M’Culloch v. Maryland begins: “The first question made in the cause [case] is—has congress power to incorporate a bank?” Marshall began his opinion by noting there was a legislative precedent for the bank — the first Bank of the United States — though of course that said nothing about its constitutionality, let alone the constitutionality of the second bank. Next, after some irrelevant musings about the Constitution’s origins, Marshall had to admit that everyone agreed the federal government is “one of enumerated powers.” If one was unaware that staunch federalist Marshall believed in a strong central government, it might have seemed that Congress’s bank legislation was on its way to being held unconstitutional. But that was not to be. Far from it. After considerable discursiveness, Marshall finally got to the Necessary and Proper Clause which, after all, was what the case was all about. Focusing on the word “necessary,” Marshall opined that: If reference be had to its use, in the common affairs of the world, or in approved authors, we find that it frequently imports no more than that one thing is convenient, or useful, or essential to another. * * * The word ‘necessary’ . . . has not a fixed character, peculiar to itself. It admits of all degrees of comparison; and is often connected with other words, which increase or diminish the impression the mind receives of the urgency it imports. A thing may be necessary, very necessary, absolutely or indispensably necessary. To no mind would the same idea be conveyed by these several phrases. (Italics in original.) Here, Marshall’s subjective, “there-are-no-absolutes,” mind-reading linguistic analysis was attributing to the Framers an intent to provide in the Necessary and Proper Clause a roaming Congressional commission in to legislate on virtually any subject it chose. Although paying lip service to the principle that “the powers of the government are limited, and that its limits are not to be transcended,” Marshall issued the further pronouncement that more than any other tersely synthesized his views of the nature and scope of the Necessary and Proper Clause: Let the end be legitimate, let it be within the scope of the constitution, and all means which are appropriate, which are plainly adapted to that end, which are not prohibited, but consist with the letter and spirit of the constitution, are constitutional. (My Italics.) Hamilton won again. The reach of the Constitution’s Necessary and Proper Clause was henceforth to authorize Congress to enact laws so long as they were “not prohibited,” and consistent “with the letter (which the bank legislation surely was not) and spirit (the enumerated, expressly delegated spirit?) of the Constitution. There is much to criticize in John Marshall’s opinion for the Supreme Court in M'Culloch v. 
Maryland: His unabashed allegiance to federalist principles, his rambling detours into constitutional history, his use of non sequiturs, his begging of questions, his tortured linguistic parsing of “necessary,” his failure satisfactorily to come to grips with the Necessary and Proper Clause’s other requirement, “Proper” (which, also, would doubtless have been susceptible of many meanings). But the worst aspect of M'Culloch is Marshall’s too-slick reversal (“not prohibited”) of the Necessary and Proper Clause’s meaning. Article I, Section 8, contains the bulk of Congress’s delegated, limited powers. The Necessary and Proper Clause allows Congress to implement those powers. Yet — in construing what he might have more honestly called the “Convenient, or Useful” Clause — Marshall turned the tables. No longer was the scope of Congress’s power that which was enumerated and expressly delegated to Congress in Article I, Section 8. Now, the virtually, if not actually, unlimited scope of that power was to be whatever was “not prohibited” to Congress by the Constitution. And what does the Constitution expressly prohibit to Congress? Importation of slaves, and a tax on them of more than $10 each. Enactment of bills of attainder and ex post facto laws. Certain kinds of capitation, direct, and export taxes. Port preferences and withdrawal of money from the treasury without appropriate legislative approval. And, lest we forget, the granting of titles of nobility. Thanks to Chief Justice John Marshall’s M'Culloch opinion in 1819, virtually every conceivable subject has since been grist for Congress’s Article I mill—with severe consequences for both republican institutions, limited government, and individual rights. “Union” is defined as “a combining, joining, or grouping together of nations, states, political groups, etc. for some specific purpose.” Webster’s New World Dictionary of the American Language. For example, Article I, Section 10, Par. 1, provides that “[n]o state shall . . . pass any Bill of Attainder, ex post facto law, of Law impairing the Obligation of Contracts.” The Supreme Court Opinions of Clarence Thomas, 1991-2011, p. 36. * * * This blog may be freely reproduced and forwarded, if in its entirety and its source identified. Comments, though not solicited, are welcome, though usually they will not be answered.
NASA’s Curiosity Rover: The Curiosity rover is a car-sized rover that was launched by NASA from Cape Canaveral on November 26, 2011, and landed inside Gale Crater on August 6, 2012. The rover carries a range of instruments to conduct research on Mars and is part of the Mars Science Laboratory mission. When NASA's Curiosity rover arrived on Mars, it was a big deal. After nine years of engineering, the rover finally landed, a mighty feat for a machine weighing roughly 2,000 pounds (about 900 kilograms). The rover itself has a lot to offer scientists, including a laser that can measure the elemental composition of rocks at a distance and a rotary drill that can collect rock powder for analysis. But one thing's for sure: Curiosity's landing is only the beginning of a long trek across the Red Planet. In the nine months since Curiosity's arrival, the rover has made some big discoveries. It has uncovered evidence of an ancient streambed and the presence of chemical compounds necessary for life. What's more, the rover has found evidence of an ancient habitable environment on Mars. Its landing site, Gale Crater, lies near the Martian equator; if the crater was habitable in the past, it could offer insight into how the planet's climate has changed. The rover has already sent back many images from the surface. These show mineral signatures of past water and offer clues about whether the planet could once have supported microbial life. For now, the rover is simply exploring the environment around its landing site. One of the most challenging parts of the landing process was lowering the rover. Using a tether and a JPL-patented sky crane, the rover was lowered to the surface. During the descent, a pyrotechnically released bolt was used to jettison the heat shield. The power system on the Curiosity rover allows it to operate for years on Mars. The system includes two lithium-ion batteries and a radioisotope thermoelectric generator. It also has a floating-bus design, which ensures that it can tolerate voltage differences between its chassis and power lines. The power system supplies on the order of 140 watts of electrical power, roughly enough to run a desktop computer, monitor, or camera, with the batteries storing energy for peak loads. During the cruise to Mars, the batteries could also be topped up from the cruise-stage solar array. On the surface, the batteries are expected to be discharged and recharged several times during a Martian day. The rover's energy management system allows it to predict future power needs. In addition to predicting when the rover will need to recharge, it can also diagnose problems with its power system. During cruise, the rover maintained its Li-ion rechargeable batteries at about 70 percent charge. When the rover first landed on Mars, it needed a reliable power source. Previous landers used photovoltaic (PV) panels and batteries, but the distance from the Sun reduced the effectiveness of the panels, and the PV panels became covered with dust over time. Several previous space missions, including Voyager, Cassini, Ulysses, and Pioneer, used similar power systems. These systems are based on the heat generated by the decay of plutonium-238, an isotope of plutonium with a half-life of roughly 87 years. The NASA Curiosity rover has 10 science instruments on board. These instruments are used to analyze the materials that make up the Martian surface and will help researchers determine whether there is potential for habitability. Ultimately, this mission is a way for scientists to learn more about the history of Mars.
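Before turning to the instruments, it is worth a quick look at how slowly a plutonium-238 heat source fades. The sketch below is only illustrative: the 110-watt starting value is an assumed figure for the example, not a specification quoted here, and real generators also lose output to thermocouple degradation, which this ignores.

# Rough estimate of how a Pu-238 power source declines through radioactive decay.
# Assumptions: ~110 W of electrical output at landing (illustrative only) and
# no thermocouple degradation, so this is an upper bound on remaining power.
HALF_LIFE_YEARS = 87.7          # half-life of plutonium-238

def remaining_fraction(years: float) -> float:
    """Fraction of the original decay heat left after the given number of years."""
    return 0.5 ** (years / HALF_LIFE_YEARS)

initial_power_w = 110.0         # assumed electrical output at year zero
for years in (0, 5, 10, 20):
    print(f"after {years:2d} years: about {initial_power_w * remaining_fraction(years):.0f} W")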
The Sample Analysis at Mars (SAM) is the most important instrument on the rover. It is composed of several scientific instruments that can analyze soil and determine whether it contains organic compounds. In addition, it can cross-check the presence of organic material by looking at a sample that has been processed. Another science instrument on the rover is the Alpha Particle X-ray Spectrometer (APXS). This spectrometer bombards samples with alpha particles and X-rays to measure the elemental composition of rocks. Aside from detecting chemical elements in Martian rocks, the APXS can also provide information on the radiation environment on the planet. Another important scientific instrument on the rover is the Dynamic Albedo of Neutrons (DAN) instrument. It can detect water beneath the Martian surface, because hydrogen in water absorbs and slows neutrons differently from other materials. Using this instrument, scientists can identify water-bearing layers as deep as six feet below the surface. One of the more complex science instruments on the rover is Mastcam. These cameras are capable of taking panoramic pictures and stereoscopic images, and they can also be used to identify targets on the Martian surface. Mars rover cameras are used by engineers to navigate the rover. Originally, these cameras were designed with a stereo pair of 15x zoom lenses. Then, as dust covers were removed, the cameras were able to take better pictures. Now they are capable of taking color photographs and panoramas. Each camera is specially designed for its job. For example, the MAHLI camera is mounted on the robotic arm, which allows scientists to take close-up photos of rocks and regolith. It also has a 39.4-degree field of view at the macro position, which is similar to the human eye. The engineering cameras are smaller than modern smartphones and their resolution is lower, but this makes their images easier to transmit back to Earth. There are two main uses for these cameras: to support driving and sample collection, and to monitor the hardware on the rover. The cameras on Curiosity are capable of taking panoramas, which are stitched together from a series of frames. Early versions of a panorama or video can look a bit blurry; full-resolution versions can be provided within a couple of days. Mars rover cameras can be divided into two types: monochromatic and multicolor. Monochromatic images are easier to transmit to Earth, while color photographs are used to analyze materials and help scientists decide which features to zoom in on with the spectrometers. The Curiosity rover's cameras use a 2-megapixel sensor, an interline CCD. Interline CCD sensors are similar to CMOS sensors, but have smaller pixels, and they are fast enough to record images at 720p. Remote sensing mast: The Remote Sensing Mast on the Curiosity rover provides fast, high-resolution geological surveys of the surrounding landscape and helps to quickly prioritize science targets. With the help of Laser-Induced Breakdown Spectroscopy (LIBS), scientists can remotely determine the elemental composition of materials in the surroundings. The Mastcam cameras on the Curiosity rover take 2-megapixel color images, 1600 pixels wide by 1200 pixels tall, which are then compressed with the lossy JPEG format. Depending on the resolution and zoom level of the camera, each can store thousands of full-color images. Each Mastcam has two camera heads: one has a 34-mm lens and the other a 100-mm lens, and full-color video can be taken with each. There is a mechanical focus on each camera, which allows it to focus between 2.1 meters and infinity.
The mast unit includes an IR spectrometer, an RMI imager, a laser and a telescope. Light collected by the telescope is relayed to the body unit through a 6-m optical fiber. The imager's resolution translates to about 10 cm per pixel at a distance of one km. To provide a wide field of view, the science objectives required that the optical axis be able to point down, and the mast has a 180-degree elevation field of regard. The Mastcam cameras have a filter wheel for studying geological targets. They also have a complementary metal-oxide-semiconductor (CMOS) detector to give color images. The communication subsystem of the Mars Curiosity rover includes a high-gain antenna, which is used to transmit to Earth. The system also includes an X-band radio, which is used to communicate directly with Earth, while a separate UHF radio is used to communicate with relay orbiters. X-band is a higher frequency than FM radio waves and is reserved for deep-space research. Several antennas on various parts of the spacecraft are used for this purpose; the X-band system also features an antenna on the lander, which is used for receiving signals. In addition, a relay orbiter passes over the rover in the afternoon and carries priority data from the rover back to Earth. In addition to the high-gain antenna, the X-band radio system has several other antennas on different parts of the spacecraft, which are used to communicate with orbiters and other deep-space spacecraft. The communications system is divided into three main components: the receiver, the transmitter, and the transceiver. These components work together to generate signals for transmission. Once a signal has been generated, it is sent through a feed horn and then a diplexer to the antenna. Antennas are selected for the specific application; they must have a small surface area while minimizing power requirements, and the mass budget of the rover also affects the design. The direct-to-Earth transmission rate for the Mars Curiosity rover is about 0.004 MB/s (roughly 32 kilobits per second), which works out to about 1% of the capacity of a CD-ROM every half hour.
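To get a feel for these numbers, the sketch below estimates how long it would take to downlink one full-resolution Mastcam frame at the direct-to-Earth rate quoted above. The 8-bit raw samples and the 5:1 JPEG compression ratio are assumptions made for the example, not mission specifications.

# Back-of-the-envelope downlink estimate for a single Mastcam frame.
# Assumed values: 8 bits per raw pixel and a 5:1 lossy-JPEG compression
# ratio (illustrative only); the data rate is the figure quoted above.
width, height = 1600, 1200            # Mastcam frame size in pixels
bytes_per_pixel = 1                   # assumption: 8-bit samples
compression_ratio = 5.0               # assumption: JPEG compression
rate_mb_per_s = 0.004                 # direct-to-Earth X-band rate

raw_mb = width * height * bytes_per_pixel / 1e6
compressed_mb = raw_mb / compression_ratio
seconds = compressed_mb / rate_mb_per_s
print(f"raw {raw_mb:.2f} MB, compressed {compressed_mb:.2f} MB, "
      f"downlink about {seconds / 60:.0f} minutes")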
Relative density, or specific gravity, is the ratio of the density (mass of a unit volume) of a substance to the density of a given reference material. Specific gravity for liquids is nearly always measured with respect to water at its densest (at 4 °C or 39.2 °F); for gases, the reference is air at room temperature (20 °C or 68 °F). The term "relative density" is often preferred in scientific usage. If a substance's relative density is less than 1 then it is less dense than the reference; if greater than 1 then it is denser than the reference. If the relative density is exactly 1 then the densities are equal; that is, equal volumes of the two substances have the same mass. If the reference material is water, then a substance with a relative density (or specific gravity) less than 1 will float in water. For example, an ice cube, with a relative density of about 0.91, will float. A substance with a relative density greater than 1 will sink. Temperature and pressure must be specified for both the sample and the reference. Pressure is nearly always 1 atm (101.325 kPa). Where it is not, it is more usual to specify the density directly. Temperatures for both sample and reference vary from industry to industry. In British brewing practice, the specific gravity, as specified above, is multiplied by 1000. Specific gravity is commonly used in industry as a simple means of obtaining information about the concentration of solutions of various materials such as brines, sugar solutions (syrups, juices, honeys, brewers wort, must, etc.) and acids. Relative density (RD) or specific gravity (SG) is a dimensionless quantity, as it is the ratio of either densities or weights: RD = ρ_substance / ρ_reference. The reference material can be indicated using subscripts, as in RD_substance/reference, which means "the relative density of the substance with respect to the reference". If the reference is not explicitly stated then it is normally assumed to be water at 4 °C (or, more precisely, 3.98 °C, which is the temperature at which water reaches its maximum density). In SI units, the density of water is (approximately) 1000 kg/m³ or 1 g/cm³, which makes relative density calculations particularly convenient: the density of the object only needs to be divided by 1000 or 1, depending on the units. The relative density of gases is often measured with respect to dry air at a temperature of 20 °C and a pressure of 101.325 kPa absolute, which has a density of 1.205 kg/m³. Relative density with respect to air can be obtained by dividing the density of the gas by the density of air: RD = ρ_gas / ρ_air (for ideal gases at the same temperature and pressure this is approximately the ratio of their molar masses). Substances with an SG greater than 1 are denser than water and will, disregarding surface tension effects, sink in it. Those with an SG less than 1 are less dense than water and will float on it. In scientific work, the relationship of mass to volume is usually expressed directly in terms of the density (mass per unit volume) of the substance under study. It is in industry where specific gravity finds wide application, often for historical reasons. The true specific gravity of a liquid can be expressed mathematically as SG_true = ρ_sample / ρ_H2O, where ρ_sample is the density of the sample and ρ_H2O is the density of water. The apparent specific gravity is simply the ratio of the weights of equal volumes of sample and water measured in air: SG_apparent = W_A,sample / W_A,H2O. It can be shown that the true specific gravity can be computed from different properties: SG_true = ρ_sample / ρ_H2O = (m_sample / V) / (m_H2O / V) = m_sample / m_H2O = W_V,sample / W_V,H2O, where g is the local acceleration due to gravity, V is the volume of the sample and of the water (the same for both), W_V represents a weight obtained in vacuum (W_V = m·g), m_sample is the mass of the sample and m_H2O is the mass of an equal volume of water.
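As a minimal illustration of the definition, the sketch below computes a relative density with respect to water and applies the float-or-sink rule; the ice density is the familiar approximate value rather than a figure taken from this text.

# Minimal sketch: relative density with respect to water and a float/sink check.
WATER_DENSITY_KG_M3 = 1000.0     # convenient SI value mentioned above

def relative_density(sample_density_kg_m3: float,
                     reference_density_kg_m3: float = WATER_DENSITY_KG_M3) -> float:
    """RD = density of the substance / density of the reference."""
    return sample_density_kg_m3 / reference_density_kg_m3

ice_rd = relative_density(917.0)            # ice is roughly 917 kg/m^3
print(f"ice: RD = {ice_rd:.2f}, floats in water: {ice_rd < 1}")   # ~0.92 -> True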
The density of water varies with temperature and pressure as does the density of the sample. So it is necessary to specify the temperatures and pressures at which the densities or weights were determined. It is nearly always the case that measurements are made at 1 nominal atmosphere (101.325 kPa ± variations from changing weather patterns). But as specific gravity usually refers to highly incompressible aqueous solutions or other incompressible substances (such as petroleum products), variations in density caused by pressure are usually neglected at least where apparent specific gravity is being measured. For true (in vacuo) specific gravity calculations, air pressure must be considered (see below). Temperatures are specified by the notation (Ts/Tr), with Ts representing the temperature at which the sample's density was determined and Tr the temperature at which the reference (water) density is specified. For example, SG (20 °C/4 °C) would be understood to mean that the density of the sample was determined at 20 °C and of the water at 4 °C. Taking into account different sample and reference temperatures, we note that, while SGH2O = 1.000000 (20 °C/20 °C), it is also the case that SGH2O = 0.998203⁄0.999840 = 0.998363 (20 °C/4 °C). Here, temperature is being specified using the current ITS-90 scale and the densities used here and in the rest of this article are based on that scale. On the previous IPTS-68 scale, the densities at 20 °C and 4 °C are 0.9982071 and 0.9999720 respectively, resulting in an SG (20 °C/4 °C) value for water of 0.9982343. As the principal use of specific gravity measurements in industry is determination of the concentrations of substances in aqueous solutions and as these are found in tables of SG versus concentration, it is extremely important that the analyst enter the table with the correct form of specific gravity. For example, in the brewing industry, the Plato table lists sucrose concentration by weight against true SG, and was originally (20 °C/4 °C) i.e. based on measurements of the density of sucrose solutions made at laboratory temperature (20 °C) but referenced to the density of water at 4 °C which is very close to the temperature at which water has its maximum density, ρH2O equal to 999.972 kg/m3 in SI units (0.999972 g/cm3 in cgs units or 62.43 lb/cu ft in United States customary units). The ASBC table in use today in North America, while it is derived from the original Plato table is for apparent specific gravity measurements at (20 °C/20 °C) on the IPTS-68 scale where the density of water is 0.9982071 g/cm3. In the sugar, soft drink, honey, fruit juice and related industries, sucrose concentration by weight is taken from a table prepared by A. Brix, which uses SG (17.5 °C/17.5 °C). As a final example, the British SG units are based on reference and sample temperatures of 60 °F and are thus (15.56 °C/15.56 °C). Given the specific gravity of a substance, its actual density can be calculated by rearranging the above formula: Occasionally a reference substance other than water is specified (for example, air), in which case specific gravity means density relative to that reference. The density of substances varies with temperature and pressure so that it is necessary to specify the temperatures and pressures at which the densities or masses were determined. 
It is nearly always the case that measurements are made at nominally 1 atmosphere (101.325 kPa, ignoring the variations caused by changing weather patterns), but as relative density usually refers to highly incompressible aqueous solutions or other incompressible substances (such as petroleum products), variations in density caused by pressure are usually neglected, at least where apparent relative density is being measured. For true (in vacuo) relative density calculations, air pressure must be considered (see below). Temperatures are specified by the notation (Ts/Tr), with Ts representing the temperature at which the sample's density was determined and Tr the temperature at which the reference (water) density is specified. For example, SG (20 °C/4 °C) would be understood to mean that the density of the sample was determined at 20 °C and that of the water at 4 °C. Taking into account different sample and reference temperatures, we note that while SG_H2O = 1.000000 (20 °C/20 °C), it is also the case that RD_H2O = 0.998203/0.999840 = 0.998363 (20 °C/4 °C). Here temperature is being specified using the current ITS-90 scale, and the densities used here and in the rest of this article are based on that scale. On the previous IPTS-68 scale, the densities at 20 °C and 4 °C are, respectively, 0.9982071 and 0.9999720, resulting in an RD (20 °C/4 °C) value for water of 0.9982343. The temperatures of the two materials may be explicitly stated in the density symbols, with a superscript indicating the temperature at which the density of the material is measured and a subscript indicating the temperature of the reference substance to which it is compared. Relative density can also help to quantify the buoyancy of a substance in a fluid or gas, or determine the density of an unknown substance from the known density of another. Relative density is often used by geologists and mineralogists to help determine the mineral content of a rock or other sample. Gemologists use it as an aid in the identification of gemstones. Water is preferred as the reference because measurements are then easy to carry out in the field (see below for examples of measurement methods). As the principal use of relative density measurements in industry is the determination of the concentrations of substances in aqueous solutions, and these are found in tables of RD versus concentration, it is extremely important that the analyst enter the table with the correct form of relative density. For example, in the brewing industry, the Plato table, which lists sucrose concentration by mass against true RD, was originally (20 °C/4 °C), that is, based on measurements of the density of sucrose solutions made at laboratory temperature (20 °C) but referenced to the density of water at 4 °C, which is very close to the temperature at which water has its maximum density, ρ_H2O equal to 0.999972 g/cm³ (or 62.43 lb/cu ft). The ASBC table in use today in North America, while derived from the original Plato table, is for apparent relative density measurements at (20 °C/20 °C) on the IPTS-68 scale, where the density of water is 0.9982071 g/cm³. In the sugar, soft drink, honey, fruit juice and related industries, sucrose concentration by mass is taken from the Brix table, which uses SG (17.5 °C/17.5 °C). As a final example, the British RD units are based on reference and sample temperatures of 60 °F and are thus (15.56 °C/15.56 °C).
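Changing the water reference temperature is just a rescaling by the ratio of the water densities, which the short sketch below illustrates using the ITS-90 values quoted above.

# Sketch: convert a relative density from one water reference temperature to
# another. RD = rho_sample / rho_water(T_ref), so only the water density changes.
WATER_DENSITY_G_CM3 = {20.0: 0.998203, 4.0: 0.999840}   # ITS-90 values quoted above

def convert_reference(rd: float, from_ref_c: float, to_ref_c: float) -> float:
    return rd * WATER_DENSITY_G_CM3[from_ref_c] / WATER_DENSITY_G_CM3[to_ref_c]

# Water itself: SG(20 C/20 C) = 1.000000 becomes RD(20 C/4 C) = 0.998363
print(f"{convert_reference(1.000000, from_ref_c=20.0, to_ref_c=4.0):.6f}")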
Relative density can be calculated directly by measuring the density of a sample and dividing it by the (known) density of the reference substance. The density of the sample is simply its mass divided by its volume. Although mass is easy to measure, the volume of an irregularly shaped sample can be more difficult to ascertain. One method is to put the sample in a water-filled graduated cylinder and read off how much water it displaces. Alternatively, the container can be filled to the brim, the sample immersed, and the volume of overflow measured. The surface tension of the water may keep a significant amount of water from overflowing, which is especially problematic for small samples. For this reason it is desirable to use a water container with as small a mouth as possible. For each substance, the density ρ can be written down from the geometry of the measuring apparatus; when the sample and reference densities are divided, references to the spring constant, gravity and cross-sectional area simply cancel, leaving the relative density. Relative density is more easily, and perhaps more accurately, measured without measuring volume. Using a spring scale, the sample is weighed first in air and then in water. Relative density (with respect to water) can then be calculated using the following formula: RD = W_air / (W_air − W_water), where W_air is the weight of the sample in air and W_water is its weight when fully immersed in water. This technique cannot easily be used to measure relative densities less than one, because the sample will then float: W_water becomes a negative quantity, representing the force needed to keep the sample underwater. Another practical method uses three measurements. The sample is weighed dry. Then a container filled to the brim with water is weighed, and weighed again with the sample immersed, after the displaced water has overflowed and been removed. Subtracting the last reading from the sum of the first two readings gives the weight of the displaced water. The relative density result is the dry sample weight divided by that of the displaced water. This method allows the use of scales which cannot handle a suspended sample. A sample less dense than water can also be handled, but it has to be held down, and the error introduced by the fixing material must be considered. The relative density of a liquid can be measured using a hydrometer. This consists of a bulb attached to a stalk of constant cross-sectional area, as shown in the adjacent diagram. First the hydrometer is floated in the reference liquid (shown in light blue), and the displacement (the level of the liquid on the stalk) is marked (blue line). The reference could be any liquid, but in practice it is usually water. The hydrometer is then floated in a liquid of unknown density (shown in green). The change in displacement, Δx, is noted. In the example depicted, the hydrometer has dropped slightly in the green liquid; hence its density is lower than that of the reference liquid. It is necessary that the hydrometer floats in both liquids. The application of simple physical principles allows the relative density of the unknown liquid to be calculated from the change in displacement. (In practice the stalk of the hydrometer is pre-marked with graduations to facilitate this measurement.) In the explanation that follows, ρ_ref is the density of the reference liquid, ρ_new is the density of the unknown liquid, m is the mass of the hydrometer, V is the volume it displaces in the reference liquid, A is the cross-sectional area of the stalk, and g is the acceleration due to gravity. Since the floating hydrometer is in static equilibrium, the downward gravitational force acting upon it must exactly balance the upward buoyancy force. The gravitational force acting on the hydrometer is simply its weight, mg. From the Archimedes buoyancy principle, the buoyancy force acting on the hydrometer is equal to the weight of liquid displaced.
This weight is equal to the mass of liquid displaced multiplied by g, which in the case of the reference liquid is ρ_ref·V·g. Setting these equal, we have m·g = ρ_ref·V·g, or simply m = ρ_ref·V. Exactly the same equation applies when the hydrometer is floating in the liquid being measured, except that the new volume is V − A·Δx (see note above about the sign of Δx). Thus, m·g = ρ_new·(V − A·Δx)·g, and combining the two expressions gives the relative density of the unknown liquid, RD_new = ρ_new/ρ_ref = V / (V − A·Δx). This equation allows the relative density to be calculated from the change in displacement, the known density of the reference liquid, and the known properties of the hydrometer. If Δx is small then, as a first-order approximation of the geometric series, the equation above can be written as RD_new ≈ 1 + (A·Δx)/V. This shows that, for small Δx, changes in displacement are approximately proportional to changes in relative density. A pycnometer (from Greek: πυκνός (puknos) meaning "dense"), also called pyknometer or specific gravity bottle, is a device used to determine the density of a liquid. A pycnometer is usually made of glass, with a close-fitting ground glass stopper with a capillary tube through it, so that air bubbles may escape from the apparatus. This device enables a liquid's density to be measured accurately by reference to an appropriate working fluid, such as water or mercury, using an analytical balance. If the flask is weighed empty, full of water, and full of a liquid whose relative density is desired, the relative density of the liquid can easily be calculated. The particle density of a powder, to which the usual method of weighing cannot be applied, can also be determined with a pycnometer. The powder is added to the pycnometer, which is then weighed, giving the weight of the powder sample. The pycnometer is then filled with a liquid of known density, in which the powder is completely insoluble. The weight of the displaced liquid can then be determined, and hence the relative density of the powder. A gas pycnometer, the gas-based manifestation of a pycnometer, compares the change in pressure caused by a measured change in a closed volume containing a reference (usually a steel sphere of known volume) with the change in pressure caused by the sample under the same conditions. The difference in change of pressure represents the volume of the sample as compared to the reference sphere, and is usually used for solid particulates that may dissolve in the liquid medium of the pycnometer design described above, or for porous materials into which the liquid would not fully penetrate. When a pycnometer is filled to a specific, but not necessarily accurately known, volume V and is placed upon a balance, it exerts a force equal to its weight less the buoyancy of the air it displaces. If we subtract the force measured on the empty bottle from this (or tare the balance before making the water measurement), we obtain the net force due to the contents alone. Taking the ratio of the net weighings obtained with the sample liquid and with water gives what is called the apparent relative density, denoted by subscript A, because it is what we would obtain if we took the ratio of net weighings in air from an analytical balance or used a hydrometer (the stem displaces air). Note that the result does not depend on the calibration of the balance. The only requirement on it is that it read linearly with force. Nor does RD_A depend on the actual volume of the pycnometer. Further manipulation, and finally substitution of RD_V, the true relative density (the subscript V is used because this is often referred to as the relative density in vacuo), for ρ_s/ρ_w, gives the relationship between apparent and true relative density. In the usual case we will have measured weights and want the true relative density.
This is found from RD_V = RD_A + (ρ_air/ρ_H2O)·(1 − RD_A), where ρ_air is the density of air and ρ_H2O the density of water. Since the density of dry air at 101.325 kPa and 20 °C is 0.001205 g/cm³ and that of water is 0.998203 g/cm³, we see that the difference between true and apparent relative densities for a substance with a relative density (20 °C/20 °C) of about 1.100 would be 0.000120. Where the relative density of the sample is close to that of water (for example, dilute ethanol solutions) the correction is even smaller. The pycnometer is used in the ISO standards ISO 1183-1:2004 and ISO 1014–1985 and in the ASTM standard ASTM D854. Hydrostatic Pressure-based Instruments: This technology relies upon Pascal's Principle, which states that the pressure difference between two points within a vertical column of fluid depends upon the vertical distance between the two points, the density of the fluid and the gravitational force. This technology is often used for tank gauging applications as a convenient means of liquid level and density measurement. Vibrating Element Transducers: This type of instrument requires a vibrating element to be placed in contact with the fluid of interest. The resonant frequency of the element is measured and is related to the density of the fluid by a characterization that is dependent upon the design of the element. In modern laboratories, precise measurements of relative density are made using oscillating U-tube meters. These are capable of measurement to 5 to 6 places beyond the decimal point and are used in the brewing, distilling, pharmaceutical, petroleum and other industries. The instruments measure the actual mass of fluid contained in a fixed volume at temperatures between 0 and 80 °C; because they are microprocessor-based, they can calculate apparent or true relative density and contain tables relating these to the strengths of common acids, sugar solutions, etc. Ultrasonic Transducer: Ultrasonic waves are passed from a source, through the fluid of interest, and into a detector which measures the acoustic spectrum of the waves. Fluid properties such as density and viscosity can be inferred from the spectrum. Radiation-based Gauge: Radiation is passed from a source, through the fluid of interest, and into a scintillation detector, or counter. As the fluid density increases, the detected radiation "counts" decrease. The source is typically the radioactive isotope caesium-137, with a half-life of about 30 years. A key advantage of this technology is that the instrument is not required to be in contact with the fluid; typically the source and detector are mounted on the outside of tanks or piping. Buoyant Force Transducer: The buoyancy force produced by a float in a homogeneous liquid is equal to the weight of the liquid that is displaced by the float. Since the buoyancy force is linear with respect to the density of the liquid within which the float is submerged, the measure of the buoyancy force yields a measure of the density of the liquid. One commercially available unit claims the instrument is capable of measuring relative density with an accuracy of ±0.005 RD units. The submersible probe head contains a mathematically characterized spring-float system. When the head is immersed vertically in the liquid, the float moves vertically, and the position of the float controls the position of a permanent magnet whose displacement is sensed by a concentric array of Hall-effect linear displacement sensors.
The output signals of the sensors are mixed in a dedicated electronics module that provides a single output voltage whose magnitude is a direct linear measure of the quantity to be measured. In soil mechanics, the relative density (a measure of the current void ratio in relation to the maximum and minimum void ratios) and the applied effective stress control the mechanical behavior of cohesionless soil. Relative density is defined by D_r = (e_max − e) / (e_max − e_min), in which e_max, e_min and e are the maximum, minimum and actual void ratios. Substances with a relative density of 1 are neutrally buoyant, those with an RD greater than one are denser than water and so (ignoring surface tension effects) will sink in it, and those with an RD of less than one are less dense than water and so will float.
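Returning to the soil-mechanics definition just given, a minimal sketch; the sand values are made-up illustrative numbers.

# Sketch of the soil-mechanics relative density D_r = (e_max - e) / (e_max - e_min).
def soil_relative_density(e: float, e_max: float, e_min: float) -> float:
    """Returns D_r as a fraction: 0 for the loosest state, 1 for the densest."""
    return (e_max - e) / (e_max - e_min)

# Hypothetical sand: e_max = 0.85, e_min = 0.50, in-place void ratio e = 0.62
print(f"D_r = {soil_relative_density(e=0.62, e_max=0.85, e_min=0.50):.2f}")   # -> 0.66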
IF WE look deep into the universe, we see stars and galaxies of all shapes and sizes. What we do not see, however, is that the universe is filled with particles called neutrinos. These particles have no charge and little or no mass. They were created less than one second after the Big Bang, and large numbers of these primordial low-energy neutrinos remain in the universe today because they interact very weakly with matter. Indeed, every cubic centimetre of space contains about 300 of these uncharged relics. (Figure caption: Ground-based telescopes, like the Anglo-Australian Observatory, saw the light from supernova 1987A several hours after the Kamiokande and IMB experiments had already detected the neutrinos that were emitted.) Trillions of neutrinos pass through our bodies every second; almost all of these are produced in fusion reactions in the Sun's core. However, neutrino production is not just confined to our galaxy. When massive stars die, most of their energy is released as neutrinos in violent supernova explosions. Even though supernovas can appear as bright as galaxies when viewed with optical telescopes, this light represents only a small fraction of the energy released (see figure). Physicists detected the first neutrinos from a supernova in 1987, when a star collapsed some 150 000 light-years away in the Large Magellanic Cloud, the galaxy nearest to the Milky Way. Two huge underground experiments, the Kamiokande detector in Japan and the IMB experiment near Cleveland, Ohio, USA, detected neutrinos from supernova 1987A a full three hours before light from the explosion reached Earth. The event marked the birth of neutrino astronomy. New neutrino telescopes were built soon after, including the AMANDA experiment in Antarctica, and plans are under way to build an even larger experiment called ICECUBE to detect neutrinos from gamma-ray bursters billions of light-years away. However, neutrinos are still the least understood of the fundamental particles. For half a century physicists thought that neutrinos, like photons, had no mass. But recent data from the SuperKamiokande experiment in Japan overturned this view and confirmed that the Standard Model of particle physics is incomplete. Extending the Standard Model so that it incorporates massive neutrinos in a natural way will require far-reaching changes. For example, some theorists argue that extra spatial dimensions are needed to explain neutrino mass, while others argue that the hitherto sacred distinction between matter and antimatter will have to be abandoned. The mass of the neutrino may even explain our existence. Read the rest of the story to find out what we know about neutrinos and what we are learning about them right now. This homepage is based on the feature article "Origin of Neutrino Mass" in Physics World, May 2002, by Hitoshi Murayama. The whole article can be downloaded as a PDF file.
The word "rounding" for a numerical value means replacing it by another value that is approximately equal but has a shorter, simpler, or more explicit form. For example, US$23.74 could be rounded to US$24, the fraction 312/937 could be rounded to 1/3, and the expression √2 could be rounded to 1.41. Rounding is often done on purpose to obtain a value that is easier to write and handle than the original. It may also be done to indicate the accuracy of a computed number; for example, a quantity that was computed as 123,456 but is known to be accurate only to within a few hundred units is better stated as "about 123,500". On the other hand, rounding can introduce some round-off error as a result. Rounding is almost unavoidable in many computations, especially when dividing two numbers in integer or fixed-point arithmetic, when computing mathematical functions such as square roots, logarithms, and sines, or when using a floating-point representation with a fixed number of significant digits. In a sequence of calculations, these rounding errors generally accumulate, and in certain "ill-conditioned" cases they may make the result meaningless. Accurate rounding of transcendental mathematical functions is difficult, because the number of extra digits that need to be calculated to resolve whether to round up or down cannot be known in advance. This problem is known as "the table-maker's dilemma" (see below). Types of rounding Typical rounding problems can include: - Approximating an irrational number by a fraction. For example, π by 22/7. - Approximating a fraction with a periodic decimal expansion by a finite decimal fraction. For example, 5/3 by 1.6667. - Replacing a rational number by a fraction with a smaller numerator and denominator. For example, 3122/9417 by 1/3. - Replacing a fractional decimal number by one with fewer digits. For example, 2.1784 dollars by 2.18 dollars. - Replacing a decimal integer by an integer with more trailing zeros. For example, 23,217 people by 23,200 people. - Replacing a value by a multiple of a specified amount. For example, 27.2 seconds by 30 seconds (a multiple of 15). Rounding to a specified increment The most common type of rounding is to round to an integer; or, more generally, to an integer multiple of some increment—such as rounding to whole tenths of seconds, hundredths of a dollar, whole multiples of 1/2 or 1/8 inch, whole dozens or thousands, etc. In general, rounding a number x to a multiple of some specified increment m entails the following steps: - Divide x by m, and let the result be y; - Round y to an integer value, call it q; - Multiply q by m to obtain the rounded value z. For example, rounding x = 2.1784 dollars to whole cents (that is, to a multiple of 0.01) entails computing y = x/m = 2.1784/0.01 = 217.84, then rounding y to the integer q = 218, and finally computing z = q×m = 218×0.01 = 2.18. The increment m is normally a finite fraction in whatever numeral system is used to represent the numbers. For display to humans, that usually means the decimal numeral system (that is, m is an integer times a power of 10, like 1/1000 or 25/100). For intermediate values stored in digital computers, it often means the binary numeral system (m is an integer times a power of 2). The abstract single-argument "round()" function that returns an integer from an arbitrary real value has at least a dozen distinct concrete definitions, presented in the rounding to integer section.
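The three steps above map directly onto code. A minimal sketch, using round-half-away-from-zero as the integer-rounding step (the alternative rules are discussed below):

# Round x to the nearest multiple of an increment m, following the
# divide / round-to-integer / multiply steps described above.
import math

def round_to_increment(x: float, m: float) -> float:
    y = x / m                                        # step 1: divide by the increment
    q = math.copysign(math.floor(abs(y) + 0.5), y)   # step 2: round half away from zero
    return q * m                                     # step 3: scale back up

print(round_to_increment(2.1784, 0.01))   # ~2.18 (whole cents)
print(round_to_increment(27.2, 15))       # 30.0 (whole multiples of 15 seconds)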
The abstract two-argument "round()" function is formally defined here, but in many cases it is used with the implicit value m = 1 for the increment, and then reduces to the equivalent abstract single-argument function, with the same dozen distinct concrete definitions. Rounding to integer The most basic form of rounding is to replace an arbitrary number by an integer. All the following rounding modes are concrete implementations of the abstract single-argument "round()" function presented and used in the previous sections. There are many ways of rounding a number y to an integer q. The most common ones are: - Round down (or take the floor, or round towards minus infinity): q is the largest integer that does not exceed y. - Round up (or take the ceiling, or round towards plus infinity): q is the smallest integer that is not less than y. - Round towards zero (or truncate, or round away from infinity): q is the integer part of y, without its fraction digits. - Round away from zero (or round towards infinity): if y is an integer, q is y; else q is the integer closest to 0 such that y is between 0 and q. - Round to nearest: q is the integer that is closest to y (see below for the tie-breaking rules). The first four methods are called directed rounding, as the displacements from the original number y to the rounded value q are all directed towards or away from the same limiting value (0, +∞, or −∞). If y is positive, round-down is the same as round-towards-zero, and round-up is the same as round-away-from-zero. If y is negative, round-down is the same as round-away-from-zero, and round-up is the same as round-towards-zero. In any case, if y is an integer, q is just y. The following table illustrates these rounding methods:
y | round down | round up | towards zero | away from zero | to nearest
+23.50 | +23 | +24 | +23 | +24 | +23 or +24
−23.50 | −24 | −23 | −23 | −24 | −23 or −24
Where many calculations are done in sequence, the choice of rounding method can have a very significant effect on the result. A famous instance involved a new index set up by the Vancouver Stock Exchange in 1982. It was initially set at 1000.000, and after 22 months had fallen to about 520 — whereas stock prices had generally increased in the period. The problem was caused by the index being recalculated thousands of times daily, and always being rounded down to 3 decimal places, in such a way that the rounding errors accumulated. Recalculating with better rounding gave an index value of 1098.892 at the end of the same period. Rounding a number y to the nearest integer requires some tie-breaking rule for those cases when y is exactly half-way between two integers — that is, when the fraction part of y is exactly 0.5. Round half up The following tie-breaking rule, called round half up (or round half towards plus infinity), is widely used in many disciplines. That is, half-way values y are always rounded up. - If the fraction of y is exactly 0.5, then q = y + 0.5. For example, by this rule the value 23.5 gets rounded to 24, but −23.5 gets rounded to −23. This is one of two rules generally taught in US elementary mathematics classes. If it were not for the 0.5 fractions, the roundoff errors introduced by the round-to-nearest method would be quite symmetric: for every fraction that gets rounded up (such as 0.268), there is a complementary fraction (namely, 0.732) that gets rounded down by the same amount.
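For reference, the directed modes and a naive round half up are one-liners with Python's math module (note that Python's own round() uses a different tie-break, covered further below); the printed rows reproduce the table above.

# The four directed rounding modes, plus round half up for comparison.
import math

def round_down(y):           return math.floor(y)    # towards minus infinity
def round_up(y):             return math.ceil(y)     # towards plus infinity
def round_toward_zero(y):    return math.trunc(y)    # drop the fractional part
def round_away_from_zero(y): return math.ceil(y) if y > 0 else math.floor(y)
def round_half_up(y):        return math.floor(y + 0.5)

for y in (23.5, -23.5):
    print(y, round_down(y), round_up(y), round_toward_zero(y),
          round_away_from_zero(y), round_half_up(y))
# 23.5  ->  23,  24,  23,  24,  24
# -23.5 -> -24, -23, -23, -24, -23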
When rounding a large set of numbers with random fractional parts, such rounding errors would statistically compensate each other, and the expected (average) value of the rounded numbers would be equal to the expected value of the original numbers. However, the round half up tie-breaking rule is not symmetric, as the fractions that are exactly 0.5 always get rounded up. This asymmetry introduces a positive bias in the roundoff errors. For example, if the fraction of y consists of three random decimal digits, then the expected value of q will be 0.0005 higher than the expected value of y. For this reason, round-to-nearest with the round half up rule is also (ambiguously) known as asymmetric rounding. One reason for rounding up at 0.5 is that only one digit needs to be examined. When seeing 17.50000..., for example, the first three figures, 17.5, determine that the figure will be rounded up to 18. If the opposite rule were used (round half down), then all the zero decimal places would need to be examined to determine whether the value is exactly 17.5. Round half down One may also use round half down (or round half towards minus infinity) as opposed to the more common round half up (the round half up method is a common convention, but is nothing more than a convention). - If the fraction of y is exactly 0.5, then q = y − 0.5. For example, 23.5 gets rounded to 23, and −23.5 gets rounded to −24. The round half down tie-breaking rule is not symmetric either, as the fractions that are exactly 0.5 always get rounded down. This asymmetry introduces a negative bias in the roundoff errors. For example, if the fraction of y consists of three random decimal digits, then the expected value of q will be 0.0005 lower than the expected value of y. For this reason, round-to-nearest with the round half down rule is also (ambiguously) known as asymmetric rounding. Round half away from zero The other tie-breaking method commonly taught and used is round half away from zero (or round half towards infinity), namely: - If the fraction of y is exactly 0.5, then q = y + 0.5 if y is positive, and q = y − 0.5 if y is negative. For example, 23.5 gets rounded to 24, and −23.5 gets rounded to −24. This method treats positive and negative values symmetrically, and therefore is free of overall bias if the original numbers are positive or negative with equal probability. However, this rule will still introduce a positive bias for positive numbers, and a negative bias for the negative ones. It is often used for currency conversions and price roundings (when the amount is first converted into the smallest significant subdivision of the currency, such as cents of a euro), as it is easy to explain by just considering the first fractional digit, independently of supplementary precision digits or the sign of the amount (for strict equivalence between the payer and the recipient of the amount). Round half towards zero One may also round half towards zero (or round half away from infinity), as opposed to the more common round half away from zero (the round half away from zero method is a common convention, but is nothing more than a convention). - If the fraction of y is exactly 0.5, then q = y − 0.5 if y is positive, and q = y + 0.5 if y is negative. For example, 23.5 gets rounded to 23, and −23.5 gets rounded to −23. This method also treats positive and negative values symmetrically, and therefore is free of overall bias if the original numbers are positive or negative with equal probability.
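The four "round half ..." rules above can all be written with floor and ceiling so that the behaviour at exact .5 ties is explicit. A sketch (floating-point edge cases are ignored):

# The four "round half ..." tie-breaking rules described above.
import math

def round_half_up(y):             return math.floor(y + 0.5)
def round_half_down(y):           return math.ceil(y - 0.5)
def round_half_away_from_zero(y): return math.floor(y + 0.5) if y >= 0 else math.ceil(y - 0.5)
def round_half_toward_zero(y):    return math.ceil(y - 0.5) if y >= 0 else math.floor(y + 0.5)

for y in (23.5, -23.5):
    print(y, round_half_up(y), round_half_down(y),
          round_half_away_from_zero(y), round_half_toward_zero(y))
# 23.5  ->  24,  23,  24,  23
# -23.5 -> -23, -24, -24, -23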
However, the round half towards zero rule will still introduce a negative bias for positive numbers, and a positive bias for the negative ones. Round half to even A tie-breaking rule that is even less biased is round half to even, namely: - If the fraction of y is 0.5, then q is the even integer nearest to y. Thus, for example, +23.5 becomes +24, +22.5 becomes +22, −22.5 becomes −22, and −23.5 becomes −24. This method also treats positive and negative values symmetrically, and therefore is free of overall bias if the original numbers are positive or negative with equal probability. In addition, for most reasonable distributions of y values, the expected (average) value of the rounded numbers is essentially the same as that of the original numbers, even if the latter are all positive (or all negative). However, this rule will still introduce a positive bias for even numbers (including zero), and a negative bias for the odd ones. This variant of the round-to-nearest method is also called unbiased rounding (ambiguously, and a bit abusively), convergent rounding, statistician's rounding, Dutch rounding, Gaussian rounding, or bankers' rounding. It is widely used in bookkeeping, and it is the default rounding mode in IEEE 754 computing functions and operators. Round half to odd A very similar tie-breaking rule is round half to odd, namely: - If the fraction of y is 0.5, then q is the odd integer nearest to y. Thus, for example, +22.5 becomes +23, +21.5 becomes +21, −21.5 becomes −21, and −22.5 becomes −23. This method also treats positive and negative values symmetrically, and therefore is free of overall bias if the original numbers are positive or negative with equal probability. In addition, for most reasonable distributions of y values, the expected (average) value of the rounded numbers is essentially the same as that of the original numbers, even if the latter are all positive (or all negative). However, this rule will still introduce a negative bias for even numbers (including zero), and a positive bias for the odd ones. This variant is almost never used in most computations, except in situations where one wants to avoid rounding 0.5 or −0.5 to zero, or to avoid increasing the scale of numbers represented in floating point (with a limited range for the scaling exponent), where a finite number could otherwise round to infinity or a small denormal value could round to a normal non-zero value (both of which can occur with the round half to even mode). Effectively, this mode prefers preserving the existing scale of tie numbers, avoiding out-of-range results when possible. Another unbiased tie-breaking method is stochastic rounding: - If the fractional part of y is 0.5, choose q randomly between y + 0.5 and y − 0.5, with equal probability. Like round half to even, this rule is essentially free of overall bias; but it is also fair between even and odd q values. On the other hand, it introduces a random component into the result; performing the same computation twice on the same data may yield two different results. Also, it is open to unconscious bias if humans (rather than computers or devices of chance) are "randomly" deciding in which direction to round. One method, more obscure than most, is round half alternatingly: - If the fractional part is 0.5, alternate rounding up and rounding down: for the first occurrence of a 0.5 fractional part, round up; for the second occurrence, round down; and so forth.
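Round half to even is directly available in Python: the built-in round() applies it to floats, and the decimal module exposes the rule (and its alternatives) by name, as the short sketch shows.

# Round half to even ("bankers' rounding") in Python, contrasted with
# decimal's ROUND_HALF_UP (which rounds exact halves away from zero).
from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP

print([round(x) for x in (22.5, 23.5, -22.5, -23.5)])   # [22, 24, -22, -24]
print(Decimal("2.5").quantize(Decimal("1"), rounding=ROUND_HALF_EVEN))   # 2
print(Decimal("2.5").quantize(Decimal("1"), rounding=ROUND_HALF_UP))     # 3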
Alternating the direction of the tie-break suppresses the random component of the result, provided the occurrences of 0.5 fractional parts can be effectively numbered. But it can still introduce a positive or negative bias according to the direction of rounding assigned to the first occurrence, if the total number of occurrences is odd. In some contexts, all the rounding methods above may be unsatisfactory. For example, suppose that y is an accurate measurement of an audio signal, which is being rounded to an integer q in order to reduce storage or transmission costs. If y changes slowly with time, any of the rounding methods above will result in q being completely constant for long intervals, separated by sudden jumps of ±1. When the q signal is played back, these steps will be heard as a very disagreeable noise, and any variations of the original signal between two integer values will be completely lost. One way to avoid this problem is to round each value y upwards with probability equal to its fraction, and round it downwards with the complement of that probability. For example, the number 23.17 would be rounded up to 24 with probability 0.17, and down to 23 with probability 1 − 0.17 = 0.83. (This is equivalent to rounding down y + s, where s is a random number uniformly distributed between 0 and 1.) With this special rounding, known as dithering, the sudden steps get replaced by a less objectionable noise, and even small variations in the original signal will be preserved to some extent. Like the stochastic approach to tie-breaking, dithering has no bias: if all fraction values are equally likely, rounding up by a certain amount is as likely as rounding down by that same amount; and the same is true for the sum of several rounded numbers. On the other hand, dithering introduces a random component into the result, much greater than that of stochastic tie-breaking. More precisely, the roundoff error for each dithered number will be a random variable with a mean value of zero but a nonzero standard deviation, which is better (smaller) than the 1/2 standard deviation of the simple predictive methods, but slightly higher than with the simpler stochastic method. However, the sum of n rounded numbers will be a random variable with expected error zero but with a standard deviation (the total remaining noise) that grows roughly in proportion to √n and may become easily perceptible, even though the average roundoff error per sample shrinks roughly in proportion to 1/√n. So this residual randomness may still be too high for some applications that round a lot of data. This variant of the simple dithering method still rounds values with probability equal to the fractional part. However, instead of using a random distribution for rounding isolated samples, the roundoff error occurring at each rounded sample is totalled and carried to the next elements to be sampled or computed; this accumulated value is then added to those next sampled or computed values before they are rounded, so that the modified values take the accumulated difference into account using a predictive model (such as Floyd–Steinberg dithering).
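A sketch of the basic dithering rule just described, before turning to the error-accumulation refinement: rounding up with probability equal to the fractional part is the same as flooring after adding a uniform random offset.

# Probabilistic (dithered) rounding: round y up with probability equal to
# its fractional part, i.e. floor(y + u) with u drawn uniformly from [0, 1).
import math, random

def dither_round(y: float) -> int:
    return math.floor(y + random.random())

samples = [dither_round(23.17) for _ in range(100_000)]
print(sum(samples) / len(samples))   # close to 23.17 on average: no bias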
The error-adjusted values are then rounded with any one of the above rounding methods, the best ones being the stochastic or dithering methods: in this last case, the sum of n rounded numbers will still be a random variable with expected error zero, but with a small, constant standard deviation instead of one that grows like √n as it does when dithering isolated samples; and the overall average roundoff error per rounded sample will shrink like 1/n, converging to zero faster than the 1/√n convergence obtained when dithering isolated samples. In practice, when rounding large sets of sampled data (such as audio, image and video rendering), the accumulation of roundoff errors is most frequently used with a simple predictive rounding of the modified values (such as rounding towards zero), because it still preserves the 1/n convergence towards zero of the overall mean roundoff error bias and of its standard deviation. This enhancement is frequently used in image and audio processing (notably for accurate rescaling and antialiasing operations, where the simple probabilistic dithering of isolated values may still produce perceptible noise, sometimes even worse than the moiré effects occurring with simple non-probabilistic rounding methods applied to isolated samples). The effective propagation of accumulated roundoff errors may depend on the discrete dimension of the sampled data to round: when sampling two-dimensional images, including colored images (which add the discrete dimension of color planes), or three-dimensional video (which adds a discrete time dimension), or polyphonic audio data (using time and channel as discrete dimensions), it may still be preferable to propagate this error in a preferred direction, or equally into several orthogonal dimensions, such as vertically versus horizontally for two-dimensional images, or into parallel color channels at the same position and/or timestamp, depending on other properties of these orthogonal discrete dimensions (according to a perception model). In those cases, several roundoff error accumulators may be used (at least one for each discrete dimension), or an (n−1)-dimensional vector (or matrix) of accumulators. In some of these cases, the discrete dimensions of the data to sample and round may be treated non-orthogonally: for example, when working with colored images, the trichromatic color-plane data in each physical dimension (height, width and optionally time) could be remapped using a perceptual color model, so that the roundoff error accumulators are designed to preserve lightness with a higher probability than hue or saturation, instead of propagating errors into each orthogonal color plane independently; and in stereophonic audio data the two rounded channels (left and right) may be rounded together to preserve their mean value in priority over their difference, which will absorb most of the remaining roundoff errors in a balanced way around zero. Rounding to simple fractions In some contexts it is desirable to round a given number x to a "neat" fraction, that is, the nearest fraction z = m/n whose numerator m and denominator n do not exceed a given maximum. This problem is fairly distinct from that of rounding a value to a fixed number of decimal or binary digits, or to a multiple of a given unit m. This problem is related to Farey sequences, the Stern–Brocot tree, and continued fractions.
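Returning to the error-accumulation idea described above, here is a minimal one-dimensional sketch: the residual left after rounding one sample is carried into the next sample before it is rounded (Floyd–Steinberg dithering distributes the residual over several neighbouring pixels rather than just the next sample).

# One-dimensional error diffusion: carry each sample's roundoff residual
# into the next sample before rounding it, so the running total stays bounded.
def error_diffuse_round(values):
    rounded, carry = [], 0.0
    for v in values:
        adjusted = v + carry        # fold in the residual from earlier samples
        q = round(adjusted)         # any integer-rounding rule can be used here
        carry = adjusted - q        # residual to propagate forward
        rounded.append(q)
    return rounded

data = [0.4] * 10
print(error_diffuse_round(data))        # [0, 1, 0, 1, 0, 0, 1, 0, 1, 0]
print(sum(error_diffuse_round(data)))   # 4, close to sum(data) = 4.0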
Rounding to simple fractions

In some contexts it is desirable to round a given number x to a "neat" fraction — that is, the nearest fraction z = m/n whose numerator m and denominator n do not exceed a given maximum. This problem is fairly distinct from that of rounding a value to a fixed number of decimal or binary digits, or to a multiple of a given unit. It is related to Farey sequences, the Stern–Brocot tree, and continued fractions.

Scaled rounding

This type of rounding, which is also named rounding to a logarithmic scale, is a variant of rounding to a specified increment, but with an increment that is modified depending on the scale and magnitude of the result. Concretely, the intent is to limit the number of significant digits, rounding the value so that the non-significant digits are dropped. This type of rounding occurs implicitly with numbers computed in floating-point formats of limited precision (such as the IEEE 754 float and double types), but it may be used more generally to round any real value to any positive number of significant digits in any base b > 1. For example, it can be used in engineering graphics for representing data on a logarithmic scale with variable steps (for example wavelengths, whose base is not necessarily an integer measure), or in statistics to define classes of real values within intervals of exponentially growing widths (although the most common use is with integer bases such as 10 or 2).

This type of rounding is based on a logarithmic scale defined by a fixed non-zero real scaling factor s (in the most frequent cases this factor is s = 1), a fixed base b > 1 (not necessarily an integer and most often different from the scaling factor), and a fixed integer number n > 0 of significant digits in that base (which will determine the value of the increment to use for rounding, along with the computed effective scale of the rounded number). The primary argument (as well as the resulting rounded number) is first represented in scaled exponential notation x = s·σ·a·b^c, where the sign σ is either +1 or −1, the absolute mantissa a is restricted to the half-open positive interval [1/b, 1), and the exponent c is any (positive or negative) integer. In that representation, all significant digits are in the fractional part of the absolute mantissa, whose integer part is always zero. If the source number (or the rounded number) is 0, the absolute mantissa a is defined as 0, the exponent c is fixed to an arbitrary value (0 in most conventions, although some floating-point representations cannot use a null absolute mantissa and instead reserve a specific maximum negative value of the exponent c to represent the number 0 itself), and the sign σ may be chosen arbitrarily between −1 and +1 (it is generally set to +1 for a simple zero, or to the sign of the argument if the representation distinguishes positive and negative zeroes, even though both finally represent the same numeric value 0). A scaled exponential representation x = s·a·b^c, with a signed mantissa a either equal to zero or lying in one of the two half-open intervals (−1, −1/b] and [+1/b, +1), may also be used equivalently, and this will be the case in the algorithm below.
The steps to compute this scaled rounding are generally similar to the following:
- if x equals zero, simply return x; otherwise:
- convert x into the scaled exponential representation, with a signed mantissa:
- let x′ be the unscaled value of x, obtained by dividing it by the scaling factor s: x′ = x/s;
- let the scaling exponent c be one plus the base-b logarithm of the absolute value of x′, rounded down to an integer (towards minus infinity): c = 1 + floor(log_b |x′|);
- let the signed mantissa a be x′ divided by b to the power c: a = x′/b^c;
- compute the rounded value in this representation:
- let c′ be the initial scaling exponent c of x′;
- let m be the increment for rounding the mantissa a according to the number n of significant digits to keep: m = b^(−n);
- let a′ be the signed mantissa a rounded according to this increment m and the selected rounding mode: a′ = round(a/m)·m;
- if the absolute value of a′ reaches 1 (that is, if the rounded mantissa falls outside the allowed interval), then decrement n (multiply the increment m by b), increment the scaling exponent c′, divide the signed mantissa a by b, and restart the rounding of the new signed mantissa into a′ with the same formula; this step may be avoided only if the abstract "round()" function always rounds a towards 0 (i.e. when it is a simple truncation), but it is necessary if it may round a away from zero, because the rounded mantissa may then have a higher scaling exponent, leaving an extra digit of precision;
- return the rounded value x″ = s·a′·b^(c′).

For the abstract "round()" function, this type of rounding can use any one of the rounding-to-integer modes described earlier, but it is most frequently the round-to-nearest mode (with the tie-breaking rules also described earlier). Some examples:
- the scaled rounding of 1.234 with scaling factor 1 in base 10 and 3 significant digits (maximum relative precision = 1/1000), when using any round-to-nearest mode, will return 1.23;
- similar scaled rounding of 1.236 will return 1.24;
- similar scaled rounding of 21.236 will return 21.2;
- similar scaled rounding of 321.236 will return 321;
- the scaled rounding of 1.234 with scaling factor 1 in base 10 and 3 significant digits (maximum relative precision = 1/1000), when using the round-down mode, will return 1.23;
- similar scaled rounding of 1.236 will also return 1.23;
- the scaled rounding of with scaling factor in base 2 and 3 significant digits (maximum relative precision = 1/8), when using the round-down mode, will return ;
- similar scaled rounding of will return ;
- similar scaled rounding of will return .
- similar scaled rounding of will also return .
- similar scaled rounding of will return .
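A minimal sketch of these steps, assuming a scaling factor of 1 by default and using Python's built-in round() (round half to even) as the abstract rounding function; the function name and defaults are illustrative, not part of any standard library:

```python
import math

def round_significant(x, n=3, base=10.0, scale=1.0):
    """Round x to n significant base-'base' digits on a logarithmic scale
    with scaling factor 'scale', following the steps listed above."""
    if x == 0:
        return x
    xp = x / scale                                 # unscaled value x'
    c = 1 + math.floor(math.log(abs(xp), base))    # scaling exponent
    a = xp / base ** c                             # signed mantissa, |a| in [1/b, 1)
    m = base ** -n                                 # rounding increment for the mantissa
    ap = round(a / m) * m                          # rounded mantissa a'
    if abs(ap) >= 1:                               # mantissa overflowed after rounding
        c += 1                                     # increment the scaling exponent
        m *= base                                  # decrement n (the increment grows by b)
        ap = round(a / base / m) * m               # re-round the down-shifted mantissa
    return scale * ap * base ** c

print(round_significant(1.234))      # ~1.23
print(round_significant(1.236))      # ~1.24
print(round_significant(321.236))    # ~321.0
```

With a directed rounding function substituted for round(), the same skeleton reproduces the round-down examples above.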
Round to available value

Many design procedures describe how to calculate an approximate value, and then "round" to some standard size using phrases such as "round down to nearest standard value", "round up to nearest standard value", or "round to nearest standard value". When a set of preferred values is equally spaced on a logarithmic scale, choosing the closest preferred value to any given value can be seen as a kind of scaled rounding. Such "rounded" values can be directly calculated.

In floating-point arithmetic, rounding aims to turn a given value x into a value z with a specified number of significant digits. In other words, z should be a multiple of a number m that depends on the magnitude of z. The number m is a power of the base (usually 2 or 10) of the floating-point form. Apart from this detail, all the variants of rounding discussed above apply to the rounding of floating-point numbers as well. The algorithm for such rounding is presented in the Scaled rounding section above, but with a constant scaling factor s = 1 and an integer base b > 1.

Where the rounded result would overflow, the result for a directed rounding is either the appropriate signed infinity or the highest representable positive finite number (or the lowest representable negative finite number if x is negative), depending on the direction of rounding. The result of an overflow for the usual case of round to even is always the appropriate infinity. In addition, if the rounded result would underflow, i.e. if the exponent would fall below the lowest representable integer value, the effective result may be either zero (possibly signed, if the representation can maintain a distinction of signs for zeroes) or the smallest representable positive finite number (or the largest representable negative finite number if x is negative), possibly a denormal positive or negative number (in which the most significant digit is stored in a lower position of the mantissa, the highest stored digits being set to zero, so that the most significant digit is not dropped, something that is possible when the base is b = 2 because the most significant digit of a normalized mantissa is always 1 in that base), depending on the direction of rounding. The result of an underflow for the usual case of round to even is always the appropriate zero.

Rounding a number twice in succession to different precisions, with the latter precision being coarser, is not guaranteed to give the same result as rounding once to the final precision, except in the case of directed rounding. For instance, rounding 9.46 to one decimal gives 9.5, which then gives 10 when rounded to an integer using rounding half to even, whereas 9.46 rounded to an integer directly gives 9. Some computer languages and the IEEE 754-2008 standard dictate that in straightforward calculations the result should not be rounded twice. This has been a particular problem for Java, which is designed to run identically on different machines; special programming tricks have had to be used to achieve this with x87 floating point. The Java language was changed to allow different results where the difference does not matter, and to require a "strictfp" qualifier when the results have to conform accurately.

Exact computation with rounded arithmetic

It is possible to use rounded arithmetic to evaluate the exact value of a function with a discrete domain and range. For example, if we know that an integer n is a perfect square, we can compute its square root by converting n to a floating-point value x, computing the approximate square root y of x with floating point, and then rounding y to the nearest integer q. If n is not too big, the floating-point roundoff error in y will be less than 0.5, so the rounded value q will be the exact square root of n. In most modern computers, this method may be much faster than computing the square root of n by an all-integer algorithm.
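Both effects just described can be reproduced with ordinary double-precision floats; the snippet below uses Python, whose built-in round() applies round half to even:

```python
import math

# Double rounding: 9.46 -> 9.5 -> 10, but 9.46 -> 9 when rounded directly.
print(round(round(9.46, 1)))   # 10
print(round(9.46))             # 9

# Exact square root of a perfect square via rounded floating-point arithmetic:
n = 1234567 ** 2
q = round(math.sqrt(n))        # the floating-point error here is far below 0.5
assert q * q == n
print(q)                       # 1234567
```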
The table-maker's dilemma

William Kahan coined the term "the Table-Maker's Dilemma" for the unknown cost of correctly rounding transcendental functions:

"Nobody knows how much it would cost to compute y^w correctly rounded for every two floating-point arguments at which it does not over/underflow. Instead, reputable math libraries compute elementary transcendental functions mostly within slightly more than half an ulp and almost always well within one ulp. Why can't y^w be rounded within half an ulp like SQRT? Because nobody knows how much computation it would cost... No general way exists to predict how many extra digits will have to be carried to compute a transcendental expression and round it correctly to some preassigned number of digits. Even the fact (if true) that a finite number of extra digits will ultimately suffice may be a deep theorem."

The IEEE floating-point standard guarantees that addition, subtraction, multiplication, division, square root, and floating-point remainder will give the correctly rounded result of the infinite-precision operation. However, no such guarantee is given for more complex functions, and they are typically only accurate to within the last bit at best. Using the Gelfond–Schneider theorem and the Lindemann–Weierstrass theorem, many of the standard elementary functions can be proved to return transcendental results when given rational non-zero arguments; therefore it is always possible in principle to round such functions correctly. However, determining a limit, for a given precision, on how accurately results need to be computed before a correctly rounded result can be guaranteed may demand a great deal of computation time.

There are now some packages that offer full accuracy. The MPFR package gives correctly rounded arbitrary-precision results. IBM has written a package of fast and accurate IEEE elementary functions, and in the future the standard libraries may offer such precision.

It is possible to devise well-defined computable numbers which it may never be possible to round correctly, no matter how many digits are calculated. For instance, if Goldbach's conjecture is true but unprovable, then it is impossible to correctly round (with ties rounded down) the number 0.5 + 10^(−n), where n is the first even number greater than 4 which is not the sum of two primes, or 0.5 if there is no such number. The number can nevertheless be approximated to any given precision even if the conjecture is unprovable.

The concept of rounding is very old, perhaps older even than the concept of division. Some ancient clay tablets found in Mesopotamia contain tables with rounded values of reciprocals and square roots in base 60. Rounded approximations to π, the length of the year, and the length of the month are also ancient. The round-to-even method has served as the ASTM (E-29) standard since 1940. The origin of the terms unbiased rounding and statistician's rounding is fairly self-explanatory. In the 1906 fourth edition of Probability and Theory of Errors, Robert Simpson Woodward called this "the computer's rule", indicating that it was then in common use by human computers who calculated mathematical tables. Churchill Eisenhart's 1947 paper "Effects of Rounding or Grouping Data" (in Selected Techniques of Statistical Analysis, McGraw-Hill, 1947; Eisenhart, Hastay, and Wallis, editors) indicated that the practice was already "well established" in data analysis.

The origin of the term "bankers' rounding" remains more obscure. If this rounding method was ever a standard in banking, the evidence has proved extremely difficult to find. To the contrary, section 2 of the European Commission report The Introduction of the Euro and the Rounding of Currency Amounts suggests that there had previously been no standard approach to rounding in banking, and it specifies that "half-way" amounts should be rounded up.
Until the 1980s, the rounding method used in floating-point computer arithmetic was usually fixed by the hardware, poorly documented, inconsistent, and different for each brand and model of computer. This situation changed after the IEEE 754 floating-point standard was adopted by most computer manufacturers. The standard allows the user to choose among several rounding modes and, in each case, specifies precisely how the results should be rounded. These features made numerical computations more predictable and machine-independent, and made possible the efficient and consistent implementation of interval arithmetic.

Rounding functions in programming languages

Most programming languages provide functions or special syntax to round fractional numbers in various ways. The earliest numeric languages, such as FORTRAN and C, would provide only one method, usually truncation (towards zero). This default method could be implied in certain contexts, such as when assigning a fractional number to an integer variable, or when using a fractional number as an index of an array. Other kinds of rounding had to be programmed explicitly; for example, rounding a positive number to the nearest integer could be implemented by adding 0.5 and truncating.

In recent decades, however, the syntax and/or the standard libraries of most languages have commonly provided at least the four basic rounding functions (up/ceiling, down/floor, to nearest, and towards zero). The tie-breaking method may vary depending on the language and version, and/or may be selectable by the programmer. Several languages follow the lead of the IEEE 754 floating-point standard and define these functions as taking a double-precision float argument and returning a result of the same type, which may then be converted to an integer if necessary. Since the IEEE double-precision format has 52 fraction bits, this approach may avoid spurious overflows in languages that have 32-bit integers. Some languages, such as PHP, provide functions that round a value to a specified number of decimal digits, e.g. from 4321.5678 to 4321.57 or 4300. In addition, many languages provide a "printf" or similar string-formatting function, which allows one to convert a fractional number to a string rounded to a user-specified number of decimal places (the precision). On the other hand, truncation (round towards zero) is still the default rounding method used by many languages, especially for the division of two integer values.
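For example, Python (one language among many) exposes the four basic rounding functions, decimal-digit rounding, and printf-style formatting roughly as follows:

```python
import math

x = 4321.5678
print(math.floor(x), math.ceil(x), math.trunc(x), round(x))  # 4321 4322 4321 4322
print(round(x, 2))              # 4321.57  (two decimal digits)
print(round(x, -2))             # 4300.0   (nearest hundred)
print(f"{x:.2f}")               # '4321.57' (string formatting, like printf "%.2f")
print(round(2.5), round(3.5))   # 2 4 -- ties are broken with round half to even
print(int(-7 / 2), -7 // 2)     # -3 -4 -- truncation versus floor for integer division
```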
Other rounding standards

Some disciplines or institutions have issued standards or directives for rounding.

U.S. Weather Observations

In a guideline issued in mid-1966, the U.S. Office of the Federal Coordinator for Meteorology determined that weather data should be rounded to the nearest round number, with the "round half up" tie-breaking rule. For example, 1.5 rounded to integer should become 2, and −1.5 should become −1. Prior to that date, the tie-breaking rule was "round half away from zero".

Negative zero in meteorology

Some meteorologists may write "−0" to indicate a temperature between 0.0 and −0.5 degrees (exclusive) that was rounded to an integer. This notation is used when the negative sign is considered important, no matter how small the magnitude is; for example, when rounding temperatures in the Celsius scale, where below zero indicates freezing.

References
- How To Implement Custom Rounding Procedures. Microsoft.
- "Comprehensive List of Algebra Symbols". Math Vault. 2020-03-25. Retrieved 2020-10-12.
- Nicholas J. Higham (2002). Accuracy and Stability of Numerical Algorithms. p. 54. ISBN 978-0898715217.
- "Zener Diode Voltage Regulators" (PDF). Archived from the original on 2011-07-13. Retrieved 2011-06-11.
- "Electronics 2000 | Frequently Asked Questions". www.electronics2000.co.uk. Retrieved 2020-10-12.
- "Stellafane ATM: Build a Foucault & Ronchi Tester, Page 3". stellafane.org. Retrieved 2020-10-12.
- Bruce Trump, Christine Schneider. "Excel Formula Calculates Standard 1%-Resistor Values". Electronic Design, January 21, 2002.
- Samuel A. Figueroa (July 1995). "When is double rounding innocuous?". ACM SIGNUM Newsletter 30 (3): 21–25.
- Roger Golliver (October 1998). "Efficiently producing default orthogonal IEEE double results using extended IEEE hardware" (PDF). Intel.
- Kahan, William. "A Logarithm Too Clever by Half". Retrieved 2008-11-14.
- J.-M. Muller et al. (2011). Handbook of Floating-Point Arithmetic, Chapter 12: Solving the Table Maker's Dilemma.
- "An accurate elementary mathematical library for the IEEE floating point standard".
- Duncan J. Melville (2006). "YBC 7289 clay tablet". Archived from the original on 2012-08-13. Retrieved 2011-06-11.
- "ECMA-262 ECMAScript Language Specification" (PDF).
- OFCM (2005). Federal Meteorological Handbook No. 1. Washington, DC, 104 pp. Archived 1999-04-20 at the Wayback Machine.
Water: Monitoring & Assessment

What are nitrates and why are they important?

Nitrates are a form of nitrogen, which is found in several different forms in terrestrial and aquatic ecosystems. These forms of nitrogen include ammonia (NH3), nitrates (NO3), and nitrites (NO2). Nitrates are essential plant nutrients, but in excess amounts they can cause significant water quality problems. Together with phosphorus, nitrates in excess amounts can accelerate eutrophication, causing dramatic increases in aquatic plant growth and changes in the types of plants and animals that live in the stream. This, in turn, affects dissolved oxygen, temperature, and other indicators. Excess nitrates can cause hypoxia (low levels of dissolved oxygen) and can become toxic to warm-blooded animals at higher concentrations (10 mg/L or more) under certain conditions. The natural level of ammonia or nitrate in surface water is typically low (less than 1 mg/L); in the effluent of wastewater treatment plants, it can range up to 30 mg/L. Sources of nitrates include wastewater treatment plants, runoff from fertilized lawns and cropland, failing on-site septic systems, runoff from animal manure storage areas, and industrial discharges that contain corrosion inhibitors.

Sampling and equipment considerations

Nitrates from land sources end up in rivers and streams more quickly than other nutrients like phosphorus. This is because they dissolve in water more readily than phosphates, which have an attraction for soil particles. As a result, nitrates serve as a better indicator of the possibility of a source of sewage or manure pollution during dry weather. Water that is polluted with nitrogen-rich organic matter might show low nitrates: decomposition of the organic matter lowers the dissolved oxygen level, which in turn slows the rate at which ammonia is oxidized to nitrite (NO2) and then to nitrate (NO3). Under such circumstances, it might be necessary to also monitor for nitrites or ammonia, which are considerably more toxic to aquatic life than nitrate. (See Standard Methods sections 4500-NH3 and 4500-NO2 for appropriate nitrite methods; APHA, 1992.)

Water samples to be tested for nitrate should be collected in glass or polyethylene containers that have been prepared by using Method B in the introduction. Volunteer monitoring programs usually use two methods for nitrate testing: the cadmium reduction method and the nitrate electrode. The more commonly used cadmium reduction method produces a color reaction that is then measured either by comparison to a color wheel or by use of a spectrophotometer. A few programs also use a nitrate electrode, which can measure in the range of 0 to 100 mg/L nitrate. A newer colorimetric immunoassay technique for nitrate screening is also now available and might be applicable for volunteers.

Cadmium Reduction Method

The cadmium reduction method is a colorimetric method that involves contact of the nitrate in the sample with cadmium particles, which cause nitrates to be converted to nitrites. The nitrites then react with another reagent to form a red color whose intensity is proportional to the original amount of nitrate. The red color is then measured either by comparison to a color wheel with a scale in milligrams per liter that increases with the depth of the color, or by use of an electronic spectrophotometer that measures the amount of light absorbed by the treated sample at a 543-nanometer wavelength.
The absorbance value is then converted to the equivalent concentration of nitrate by using a standard curve. Methods for making standard solutions and standard curves are presented at the end of this section. This curve should be created by the program advisor before each sampling run. The curve is developed by making a set of standard concentrations of nitrate, reacting them and developing the corresponding color, and then plotting the absorbance value for each concentration against concentration. A standard curve could also be generated for the color wheel.

Use of the color wheel is appropriate only if nitrate concentrations are greater than 1 mg/L. For concentrations below 1 mg/L, a spectrophotometer should be used. Matching the color of a treated sample at low concentrations to a color wheel (or cubes) can be very subjective and can lead to variable results. Color comparators can, however, be effectively used to identify sites with high nitrates.

This method requires that the samples being treated are clear. If a sample is turbid, it should be filtered through a 0.45-micron filter. Be sure to test whether the filter is nitrate-free. If copper, iron, or other metals are present in concentrations above several mg/L, the reaction with the cadmium will be slowed down and the reaction time will have to be increased. The reagents used for this method are often prepackaged for different ranges, depending on the expected concentration of nitrate in the stream. For example, the Hach Company provides reagents for the following ranges: low (0 to 0.40 mg/L), medium (0 to 4.5 mg/L), and high (0 to 30 mg/L). You should determine the appropriate range for the stream being monitored.

Nitrate Electrode Method

A nitrate electrode (used with a meter) is similar in function to a dissolved oxygen meter. It consists of a probe with a sensor that measures nitrate activity in the water; this activity affects the electric potential of a solution in the probe. This change is then transmitted to the meter, which converts the electric signal to a scale that is read in millivolts. The millivolts are then converted to mg/L of nitrate by plotting them from a standard curve (see above). The accuracy of the electrode can be affected by high concentrations of chloride or bicarbonate ions in the sample water. Fluctuating pH levels can also affect the reading by the meter.

Nitrate electrodes and meters are expensive compared to field kits that employ the cadmium reduction method. (The expense is comparable, however, if a spectrophotometer is used rather than a color wheel.) Meter/probe combinations run between $700 and $1,200, including a long cable to connect the probe to the meter. If the program has a pH meter that displays readings in millivolts, it can be used with a nitrate probe and no separate nitrate meter is needed. Results are read directly as milligrams per liter.

Although nitrate electrodes and spectrophotometers can be used in the field, they have certain disadvantages. These devices are more fragile than the color comparators and are therefore more at risk of breaking in the field. They must be carefully maintained and must be calibrated before each sample run and, if you are doing many tests, between samplings. This means that samples are best tested in the lab. Note that samples to be tested with a nitrate electrode should be at room temperature, whereas color comparators can be used in the field with samples at any temperature.
How to collect and analyze samples

The procedures for collecting and analyzing samples for nitrate consist of the following tasks:

TASK 1 Prepare the sample containers

If factory-sealed, disposable Whirl-pak® bags are used for sampling, no preparation is needed. Reused sample containers (and all glassware used in this procedure) must be cleaned before the first run and after each sampling by following the method described on page 128 under Method B. Remember to wear latex gloves.

TASK 2 Prepare before leaving for the sampling site

Refer to section 2.3 - Safety Considerations for details on confirming sampling date and time, safety considerations, checking supplies, and checking weather and directions. In addition to the standard sampling equipment and apparel, the following equipment is needed when analyzing nitrate nitrogen in the field:
- Color comparator or field spectrophotometer with sample tubes (for reading absorbance of the sample)
- Reagent powder pillows (reagents to turn the water red)
- Deionized or distilled water to rinse the sample tubes between uses
- Wash bottle to hold rinse water
- Waste bottle with secure lid to hold used cadmium particles, which should be clearly labeled and returned to the lab, where the cadmium will be properly disposed of
- Mixing container with a mark at the sample volume (usually 25 mL) to hold and mix the sample
- Clean, lint-free wipes to clean and dry the sample tubes

TASK 3 Collect the sample

Refer to Task 2 in Chapter 5 - Water Quality Conditions for details on collecting a sample using screw-cap bottles or Whirl-pak® bags.

TASK 4 Analyze the sample in the field

Cadmium Reduction Method With a Spectrophotometer

The following is the general procedure to analyze a sample using the cadmium reduction method with a spectrophotometer. However, this should not replace the manufacturer's directions if they differ from the steps provided below:
- Pour the first field sample into the sample cell test tube and insert it into the sample cell of the spectrophotometer.
- Record the bottle number on the lab sheet.
- Place the cover over the sample cell. Read the absorbance or concentration of this sample and record it on the field data sheet.
- Pour the sample back into the waste bottle for disposal at the lab.

Cadmium Reduction Method With a Color Comparator

To analyze a sample using the cadmium reduction method with a color comparator, follow the manufacturer's directions and record the concentration on the field data sheet.

TASK 5 Return the samples and the field data sheets to the lab/drop-off point for analysis

Samples being sent to a lab for analysis must be tested for nitrates within 48 hours of collection. Keep samples in the dark and on ice or refrigerated.

TASK 6 Determine results (for spectrophotometer absorbance or nitrate electrode) in lab

Preparation of Standard Concentrations

Cadmium Reduction Method With a Spectrophotometer

First determine the range you will be testing (low, medium, or high). For each range you will need to determine the lower end, which will be determined by the detection limit of your spectrophotometer. The high end of the range will be the endpoint of the range you are using. Use a nitrate nitrogen standard solution of appropriate strength for the range in which you are working. A 1-mg/L nitrate nitrogen (NO3-N) solution would be suitable for low-range (0 to 1.0 mg/L) tests. A 100-mg/L standard solution would be appropriate for medium- and high-range tests.
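Each working standard in the procedures below is made by diluting a stock solution to a 25-mL volume; the pipet volume needed follows from the dilution relation C1·V1 = C2·V2. The helper below is only an illustrative sketch for checking those volumes (the function name is invented):

```python
def volume_of_stock(stock_mg_L, target_mg_L, final_volume_mL=25.0):
    """Volume of stock standard (mL) to pipet into a volumetric flask so that,
    when brought to final_volume_mL with deionized water, the concentration
    equals target_mg_L (from C1*V1 = C2*V2)."""
    return target_mg_L * final_volume_mL / stock_mg_L

print(volume_of_stock(10.0, 1.0))    # 2.5 mL of the 10-mg/L stock for the 1.0-mg/L standard
print(volume_of_stock(10.0, 0.8))    # 2.0 mL for the 0.8-mg/L standard
print(volume_of_stock(1.0, 0.32))    # 8.0 mL of the 1.0-mg/L solution for the 0.32-mg/L standard
```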
In the following example, it is assumed that a set of standards for a 0 to 5.0 mg/L range is being prepared.
- Set out six 25-mL volumetric flasks (one for each standard). Label the flasks 0.0, 1.0, 2.0, 3.0, 4.0, and 5.0.
- Pour 30 mL of a 25-mg/L nitrate nitrogen standard solution into a 50-mL beaker.
- Use 1-, 2-, 3-, 4-, and 5-mL Class A volumetric pipets to transfer corresponding volumes of nitrate nitrogen standard solution to each 25-mL volumetric flask as follows:

Flask label (mg/L as NO3-N)    mL of 25-mg/L nitrate nitrogen standard
0.0                            0
1.0                            1
2.0                            2
3.0                            3
4.0                            4
5.0                            5

Analysis of the Cadmium Reduction Method Standard Concentrations

Use the following procedure to analyze the standard concentrations.
- Add reagent powder pillows to the nitrate nitrogen standard concentrations.
- Shake each tube vigorously for at least 3 minutes.
- For each tube, wait at least 10 minutes but not more than 20 minutes to proceed.
- "Zero" the spectrophotometer using the 0.0 standard concentration and following the manufacturer's directions. Record the absorbance as "0" in the absorbance column on the lab sheet. Rinse the sample cell three times with distilled water.
- Read and record the absorbance of the 1.0-mg/L standard concentration.
- Rinse the sample cell test tube three times with distilled or deionized water. Avoid touching the lower part of the sample cell test tube. Wipe with a clean, lint-free wipe. Be sure that the lower part of the sample cell test tube is clean and free of smudges or water droplets.
- Repeat steps 3 and 4 for each standard.
- Prepare a calibration curve and convert absorbance to mg/L as follows:
- Make an absorbance versus concentration graph on graph paper: (a) Make the vertical (y) axis and label it "absorbance." Mark this axis in 1.0 increments from 0 as high as the graph paper will allow. (b) Make the horizontal (x) axis and label it "concentration: mg/L as nitrate nitrogen." Mark this axis with the concentrations of the standards: 0.0, 1.0, 2.0, 3.0, 4.0, and 5.0.
- Plot the absorbance of the standard concentrations on the graph.
- Draw a "best fit" straight line through these points. The line should touch (or almost touch) each of the points. If it doesn't, the results of this procedure are not valid.
- For each sample, locate the absorbance on the "y" axis, read over horizontally to the line, and then move down to read the concentration in mg/L as nitrate nitrogen.
- Record the concentration on the lab sheet in the appropriate column.
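Where a computer is available, the "best fit" line and the conversion from absorbance back to concentration can also be done numerically. The absorbance values below are made up purely for illustration; a real curve must be built from your own standards:

```python
import numpy as np

concentrations = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])              # mg/L as NO3-N
absorbances    = np.array([0.000, 0.085, 0.171, 0.258, 0.340, 0.428])  # hypothetical readings

# Least-squares "best fit" straight line: absorbance = slope*concentration + intercept
slope, intercept = np.polyfit(concentrations, absorbances, 1)

def to_mg_per_L(sample_absorbance):
    """Read a sample absorbance back off the calibration line."""
    return (sample_absorbance - intercept) / slope

print(round(to_mg_per_L(0.215), 2))   # about 2.5 mg/L for this made-up curve
```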
For Nitrate Electrode

Standards are prepared using nitrate standard solutions of 100 and 10 mg/L as nitrate nitrogen (NO3-N). All references to concentrations and results in this procedure will be expressed as mg/L as NO3-N. Eight standard concentrations will be prepared: 100.0, 10.0, 1.0, 0.8, 0.40, 0.32, 0.20, and 0.12 mg/L.

Use the following procedure:
- Set out eight 25-mL volumetric flasks (one for each standard). Label the flasks 100.0, 10.0, 1.0, 0.8, 0.4, 0.32, 0.2, and 0.12.
- To make the 100.0-mg/L standard, pour 25 mL of the 100-mg/L nitrate standard solution into the flask labeled 100.0.
- To make the 10.0-mg/L standard, pour 25 mL of the 10-mg/L nitrate standard solution into the flask labeled 10.0.
- To make the 1.0-mg/L standard, use a 10- or 5-mL pipet to measure 2.5 mL of the 10-mg/L nitrate standard solution into the flask labeled 1.0. Fill the flask with 22.5 mL distilled, deionized water to the fill line. Rinse the pipet with deionized water.
- To make the 0.8-mg/L standard, use a 10- or 5-mL pipet or a 2-mL volumetric pipet to measure 2 mL of the 10-mg/L nitrate standard solution into the flask labeled 0.8. Fill the flask with about 23 mL distilled, deionized water to the fill line. Rinse the pipet with deionized water.
- To make the 0.4-mg/L standard, use a 10- or 5-mL pipet or a 1-mL volumetric pipet to measure 1 mL of the 10-mg/L nitrate standard solution into the flask labeled 0.4. Fill the flask with about 24 mL distilled, deionized water to the fill line. Rinse the pipet with deionized water.
- To make the 0.32-, 0.2-, and 0.12-mg/L standards, follow step 4 (the 1.0-mg/L standard) to make a 25-mL volume of 1.0-mg/L standard solution. Transfer this to a beaker. Pipet the following volumes into the appropriately labeled volumetric flasks:

Flask label (mg/L as NO3-N)    mL of 1.0-mg/L solution
0.32                           8
0.20                           5
0.12                           3

Analysis of the Nitrate Electrode Standard Concentrations

Use the following procedure to analyze the standard concentrations.
- List the standard concentrations (100.0, 10.0, 1.0, 0.8, 0.4, 0.32, 0.2, and 0.12) under "bottle #" on the lab sheet.
- Prepare a calibration curve and convert to mg/L as follows:
- Plot the mV readings for the 100-, 10-, and 1-mg/L standards on semi-logarithmic graph paper, with concentration on the logarithmic (x) axis and the millivolts (mV) on the linear (y) axis. For the nitrate electrode, a straight line with a slope of 58 ± 3 mV/decade at 25 °C should result; that is, the readings for the 10- and 100-mg/L standard solutions should differ by 58 ± 3 mV.
- Plot the mV readings for the 1.0-, 0.8-, 0.4-, 0.32-, 0.2-, and 0.12-mg/L standards on semi-logarithmic graph paper, with concentration on the logarithmic (x) axis and the millivolts (mV) on the linear (y) axis. For the nitrate electrode, the result here should be a curved line, since the response of the electrode at these low concentrations is not linear.
- For the nitrate electrode, recalibrate the electrode several times daily by checking the mV readings of the 10-mg/L and 0.4-mg/L standards and adjusting the calibration control on the meter until the reading plotted on the calibration curve is displayed again.

APHA. 1992. Standard methods for the examination of water and wastewater. 18th ed. American Public Health Association, Washington, DC.
Common Core Math Standards

Common Core State Standards © Copyright 2010. National Governors Association Center for Best Practices and Council of Chief State School Officers. All rights reserved.

Operations & Algebraic Thinking - 3rd Grade Common Core Math

Represent and solve problems involving multiplication and division.
3.OA.1 Interpret products of whole numbers, e.g., interpret 5 × 7 as the total number of objects in 5 groups of 7 objects each.
3.OA.2 Interpret whole-number quotients of whole numbers, e.g., interpret 56 ÷ 8 as the number of objects in each share when 56 objects are partitioned equally into 8 shares, or as a number of shares when 56 objects are partitioned into equal shares of 8 objects each.
3.OA.3 Use multiplication and division within 100 to solve word problems in situations involving equal groups, arrays, and measurement quantities, e.g., by using drawings and equations with a symbol for the unknown number to represent the problem.
3.OA.4 Determine the unknown whole number in a multiplication or division equation relating three whole numbers.

Understand properties of multiplication and the relationship between multiplication and division.
3.OA.5 Apply properties of operations as strategies to multiply and divide.
3.OA.6 Understand division as an unknown-factor problem.

Multiply and divide within 100.
3.OA.7 Fluently multiply and divide within 100, using strategies such as the relationship between multiplication and division (e.g., knowing that 8 × 5 = 40, one knows 40 ÷ 5 = 8) or properties of operations. By the end of Grade 3, know from memory all products of two one-digit numbers.

Solve problems involving the four operations, and identify and explain patterns in arithmetic.
3.OA.8 Solve two-step word problems using the four operations. Represent these problems using equations with a letter standing for the unknown quantity. Assess the reasonableness of answers using mental computation and estimation strategies including rounding.
3.OA.9 Identify arithmetic patterns (including patterns in the addition table or multiplication table), and explain them using properties of operations.
What is Congress

Congress is the legislative branch of the U.S. government. It is responsible for making laws and helps to balance out the power of the executive and judicial branches of government. Congress has enumerated powers established by the U.S. Constitution, including laying and collecting taxes, borrowing money, regulating commerce and declaring war.

BREAKING DOWN Congress

Congress consists of the Senate and the House of Representatives. Each state elects a number of representatives in proportion to that state's population. The representatives serve two-year terms. Each state also elects two senators who serve six-year terms. The political power in Congress impacts the financial world directly. For this reason, almost every large industry has many lobbyists in Washington pushing their agendas.

Both the House and Senate utilize committees to get a majority of work done in Congress. Committees consist of members of both parties, with a majority of members coming from the majority party. The committee chairperson sets the number of committee members. This panel decides what legislation goes to the full House or Senate for consideration. Committees decide how to word legislation thanks to written recommendations from executive cabinet departments and testimony from expert witnesses. The committee then decides on the language of a bill, a process called perfection, before sending the bill to the full chamber. The Senate has 20 committees and 68 subcommittees. Specific panels oversee financial issues important to Americans.

Committees That Impact Finance

The House Financial Services Committee oversees banks and banking legislation. It also proposes the monetary policy for the federal government. In addition, this committee decides on financial aid packages to industries other than transportation, while providing policies on economic stabilization. The House Financial Services Committee deals with issues regarding securities, credit and insurance.

The Senate Finance Committee considers legislation for the federal debt, tariffs, Social Security, Medicaid and foreign trade agreements, among other duties. This committee oversees the trade of durable goods and assistance for needy families.

Another powerful panel in Congress is the House Appropriations Committee. This body decides how to fund the federal government every fiscal year. This committee sets the budget for the federal government, funds various programs and decides how to spend tax revenue.

How Congress Changes the Financial Industry

Congress passes laws that affect the financial industry in big and small ways. One law, the Sarbanes-Oxley Act, passed Congress in 2002 after scandals at Enron and WorldCom. The law said that ultimately the CEO, other executives and management staff of a company are responsible for accounting practices and financial statements. The law also helps prevent abuse and fraud so investors can make more confident selections when purchasing stocks or shares of a company.
A look into Reinforcement Learning (RL) and learning new skills — how to help the brain function more effectively

Learning is defined as an act, process, or experience of gaining knowledge or a skill set. Reinforcement Learning (RL) is a theory that describes how an organism or person can learn such a skill set through action–outcome associations. Learning from mistakes as one goes along a learning path that offers reinforcement and expectation can enhance the learning; a high expectation produces better learning than a low one. The outcomes encountered along the path are thought to be coded in the brain by midbrain dopamine neurons, which increase their activity when outcomes are better than expected. This reinforcement signal is relayed to the anterior cingulate cortex and produces a measurable signal on the scalp, the Feedback-Related Negativity (FRN). When these signals occur, they become indicators of the RL process, and a high-amplitude FRN should indicate an updating of learning from the action–outcome association. The evidence before this study was limited in showing that acquisition of new learning was contingent on the amplitude of this negative event-related potential (ERP). The RL theory suggests that an increase in FRN amplitude during feedback would be associated with good performance on future attempts.

This study consisted of 19 individuals between the ages of 17 and 23. Eight of the subjects were men, and twelve were right-handed. The test subjects were asked to choose between 4 response buttons as soon as possible after they were shown an item on a screen that indicated one of the numbers. If they took too long to make their selection (1500 ms), they received feedback that indicated "too late". If they correctly chose the button that matched what they saw, the screen would turn blue, then after 1000 ms would turn green, indicating they could go on to the next step. If the subject chose incorrectly, the screen would turn red for 1000 ms and the sequence would start again. The button selection was also sequence-based: subjects learned the sequence by trial and error. If they chose correctly, they moved on to the next step in the sequence; if they chose incorrectly, the whole sequence started again. There were 12 steps to the sequence. However, sequences were manipulated so that, for 3 of the 12 items, the response was not considered correct until it was the first, second, third, or fourth choice; there was a predetermination of how many attempts were needed to get positive feedback for a particular item. Three types of feedback were distinguished and recorded by an electroencephalograph using 61 channels mounted on an electro cap.

Because the type of feedback the subjects received was manipulated, the lowest number of trials any subject could attain on a particular sequence was 18. Each person made on average 12.4 errors. Negative RL failures consisted of choosing the same incorrect response button both on the next go-around of the test item and on a later encounter of the item. Positive RL failures were "true" failures to respond correctly to an item for which positive feedback had been given during a previous encounter.
The remaining failures were those that reproduced a positively reinforced response, i.e. mistakes made by the subjects on items for which they had already received the correct response repeatedly. After the data were analyzed, the numbers showed that subjects gradually made fewer errors. Negative feedback elicited a negative deflection in the event-related potential (ERP) that was significantly different from zero. The researchers were able to distinguish negative feedback following a novel response from feedback following a response that had been tried previously. Although the Feedback-Related Negativity (FRN) was present for both types of feedback, it was significantly enhanced when the good negative RL and bad negative RL trials were compared. The "learning difference wave" was not zero but extended between 150 and 500 ms, indicating that a large FRN following negative feedback was predictive of not repeating the response that had been tried previously (a "don't do that again" reaction) on the subsequent encounter. The FRN on good positive RL trials differed dramatically depending on the number of attempts; the effect was driven by a reduced positivity when positive feedback was given after the fourth attempt.

Learning from mistakes is very important for success in future behavior. Response learning signals from the midbrain to the anterior cingulate cortex that are reflected in a high-amplitude FRN predict learning after negative reinforcement. They also indicate good performance in the future when a subject is confronted with the same choices on a later occasion. The results of the trial support the RL theory of the FRN, which says that when negative feedback follows an action, the amplitude is more negative when the test subjects learned from the feedback and tried a response they had never tried before (chose a different option). This amplitude is also adjusted and updated for future correct and incorrect responses. There was no difference in the FRN whether it followed the first, second, or third attempt in the sequence; that said, the negative feedback was less informative after a first attempt than after the third attempt. The signal was less positive on the fourth attempt than on the other attempts, which was attributed to expectation (the subjects had run out of viable options) rather than to their having more options from which to pick.

Although this was a single study, it was able to show that FRN amplitude predicts whether an association was learned and that one stimulus was differentially more rewarding than another. Although difficult to interpret, the FRN reflects outcomes that fall below expectations; it can be positive when a subject is expecting a negative outcome, but the FRN does not reflect outcomes as being worse than expected (just different). FRN amplitudes reflect the process of learning and skill acquisition after we make a mistake and indicate whether we learned from the mistake or will repeat it. Essentially it boils down to having buy-in to the learning process. If there is an expectation of a positive outcome, then there is the potential to try a new way of thinking. If no new way of thinking or solution is applied, then nothing was learned and the mistake is doomed to be repeated. When all options have been exhausted, the expectation falls off, as there is only one alternative left.
Learning involves having some element of "risk" or choice in the game, and the willingness to try new approaches in order to find clarity or acquire the skill set to be learned.
Thrust, drag, lift, and weight are forces that act upon all aircraft in flight. Understanding how these forces work and knowing how to control them with the use of power and flight controls are essential to flight. The four forces acting on an aircraft in straight-and-level, unaccelerated flight are thrust, drag, lift, and weight. They are defined as follows: - Thrust—the forward force produced by the powerplant/propeller or rotor. It opposes or overcomes the force of drag. As a general rule, it acts parallel to the longitudinal axis. However, this is not always the case, as explained later. - Drag—a rearward, retarding force caused by disruption of airflow by the wing, rotor, fuselage, and other protruding objects. As a general rule, drag opposes thrust and acts rearward parallel to the relative wind. - Lift—is a force that is produced by the dynamic effect of the air acting on the airfoil, and acts perpendicular to the flight path through the center of lift (CL) and perpendicular to the lateral axis. In level flight, lift opposes the downward force of weight. - Weight—the combined load of the aircraft itself, the crew, the fuel, and the cargo or baggage. Weight is a force that pulls the aircraft downward because of the force of gravity. It opposes lift and acts vertically downward through the aircraft’s center of gravity (CG). In steady flight, the sum of these opposing forces is always zero. There can be no unbalanced forces in steady, straight flight based upon Newton’s Third Law, which states that for every action or force there is an equal, but opposite, reaction or force. This is true whether flying level or when climbing or descending. It does not mean the four forces are equal. It means the opposing forces are equal to, and thereby cancel, the effects of each other. In Figure 1, the force vectors of thrust, drag, lift, and weight appear to be equal in value. The usual explanation states (without stipulating that thrust and drag do not equal weight and lift) that thrust equals drag and lift equals weight. Although true, this statement can be misleading. It should be understood that in straight, level, unaccelerated flight, it is true that the opposing lift/weight forces are equal. They are also greater than the opposing forces of thrust/drag that are equal only to each other. Therefore, in steady flight: - The sum of all upward components of forces (not just lift) equals the sum of all downward components of forces (not just weight) - The sum of all forward components of forces (not just thrust) equals the sum of all backward components of forces (not just drag) |Figure 1. Relationship of forces acting on an aircraft| This refinement of the old “thrust equals drag; lift equals weight” formula explains that a portion of thrust is directed upward in climbs and slow flight and acts as if it were lift while a portion of weight is directed backward opposite to the direction of flight and acts as if it were drag. In slow flight, thrust has an upward component. But because the aircraft is in level flight, weight does not contribute to drag. [Figure 2] |Figure 2. Force vectors during a stabilized climb| In glides, a portion of the weight vector is directed along the forward flight path and, therefore, acts as thrust. In other words, any time the flight path of the aircraft is not horizontal, lift, weight, thrust, and drag vectors must each be broken down into two components. Another important concept to understand is angle of attack (AOA). 
Since the early days of flight, the AOA has been fundamental to understanding many aspects of airplane performance, stability, and control. The AOA is defined as the acute angle between the chord line of the airfoil and the direction of the relative wind.

Discussions of the preceding concepts are frequently omitted in aeronautical texts/handbooks/manuals. The reason is not that they are inconsequential, but because the main ideas with respect to the aerodynamic forces acting upon an aircraft in flight can be presented in their most essential elements without being involved in the technicalities of the aerodynamicist. In point of fact, considering only level flight, and normal climbs and glides in a steady state, it is still true that lift provided by the wing or rotor is the primary upward force, and weight is the primary downward force. By using the aerodynamic forces of thrust, drag, lift, and weight, pilots can fly a controlled, safe flight. A more detailed discussion of these forces follows.

For an aircraft to start moving, thrust must be exerted and be greater than drag. The aircraft continues to move and gain speed until thrust and drag are equal. In order to maintain a constant airspeed, thrust and drag must remain equal, just as lift and weight must be equal to maintain a constant altitude. If in level flight the engine power is reduced, the thrust is lessened, and the aircraft slows down. As long as the thrust is less than the drag, the aircraft continues to decelerate. To a point, as the aircraft slows down, the drag force will also decrease. The aircraft will continue to slow down until thrust again equals drag, at which point the airspeed will stabilize. Likewise, if the engine power is increased, thrust becomes greater than drag and the airspeed increases. As long as the thrust continues to be greater than the drag, the aircraft continues to accelerate. When drag equals thrust, the aircraft flies at a constant airspeed.

Straight-and-level flight may be sustained at a wide range of speeds. The pilot coordinates AOA and thrust in all speed regimes if the aircraft is to be held in level flight. An important fact related to the principle of lift (for a given airfoil shape) is that lift varies with the AOA and airspeed. Therefore, a large AOA at low airspeed produces an equal amount of lift to a low AOA at high airspeed. The speed regimes of flight can be grouped in three categories: low-speed flight, cruising flight, and high-speed flight.

When the airspeed is low, the AOA must be relatively high if the balance between lift and weight is to be maintained. [Figure 3] If thrust decreases and airspeed decreases, lift becomes less than weight and the aircraft starts to descend. To maintain level flight, the pilot can increase the AOA an amount that generates a lift force again equal to the weight of the aircraft. While the aircraft will be flying more slowly, it will still maintain level flight. The AOA is adjusted to maintain lift equal to weight. The airspeed will naturally adjust until drag equals thrust and then maintain that airspeed (this assumes the pilot is not trying to hold an exact speed).

|Figure 3. Angle of attack at various speeds|

Straight-and-level flight in the slow-speed regime provides some interesting conditions relative to the equilibrium of forces. With the aircraft in a nose-high attitude, there is a vertical component of thrust that helps support it. For one thing, wing loading tends to be less than would be expected.
In level flight, when thrust is increased, the aircraft speeds up and the lift increases. The aircraft will start to climb unless the AOA is decreased just enough to maintain the relationship between lift and weight. The timing of this decrease in AOA needs to be coordinated with the increase in thrust and airspeed. Otherwise, if the AOA is decreased too quickly, the aircraft will descend, and if the AOA is decreased too slowly, the aircraft will climb. As the airspeed varies due to thrust, the AOA must also vary to maintain level flight. At very high speeds in level flight, it is even possible to have a slightly negative AOA. As thrust is reduced and airspeed decreases, the AOA must increase in order to maintain altitude. If speed decreases enough, the required AOA will increase to the critical AOA. Any further increase in the AOA will result in the wing stalling. Therefore, extra vigilance is required at reduced thrust settings and low speeds so as not to exceed the critical angle of attack. If the airplane is equipped with an AOA indicator, it should be referenced to help monitor the proximity to the critical AOA.

Some aircraft have the ability to change the direction of the thrust rather than changing the AOA. This is accomplished either by pivoting the engines or by vectoring the exhaust gases. [Figure 4]

|Figure 4. Some aircraft have the ability to change the direction of thrust|

The pilot can control the lift. Any time the control yoke or stick is moved fore or aft, the AOA is changed. As the AOA increases, lift increases (all other factors being equal). When the aircraft reaches the maximum AOA, lift begins to diminish rapidly. This is the stalling AOA, known as the CL-MAX critical AOA. Examine Figure 5, noting how the CL increases until the critical AOA is reached, then decreases rapidly with any further increase in the AOA.

|Figure 5. Coefficients of lift and drag at various angles of attack|

Before proceeding further with the topic of lift and how it can be controlled, velocity must be discussed. The shape of the wing or rotor cannot be effective unless it continually keeps "attacking" new air. If an aircraft is to keep flying, the lift-producing airfoil must keep moving. In a helicopter or gyroplane, this is accomplished by the rotation of the rotor blades. For other types of aircraft, such as airplanes, weight-shift control aircraft, or gliders, air must be moving across the lifting surface. This is accomplished by the forward speed of the aircraft. Lift is proportional to the square of the aircraft's velocity. For example, an airplane traveling at 200 knots has four times the lift of the same airplane traveling at 100 knots, if the AOA and other factors remain constant. The lift equation, L = ½ ρ V² S CL, expresses this mathematically and supports the statement that doubling the airspeed results in four times the lift. As a result, one can see that velocity is an important component in the production of lift, which itself can be affected through varying AOA. When examining the equation, lift (L) is determined through the relationship of the air density (ρ), the airfoil velocity (V), the surface area of the wing (S), and the coefficient of lift (CL) for a given airfoil. Taking the equation further, one can see that an aircraft could not continue to travel in level flight at a constant altitude and maintain the same AOA if the velocity were increased: the lift would increase and the aircraft would climb as a result of the increased lift force, or it would speed up.
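A quick numerical check of the lift equation confirms the velocity-squared relationship; the density is the standard sea-level value, while the wing area and lift coefficient below are illustrative only:

```python
def lift(rho, V, S, CL):
    """Lift in pounds: L = 1/2 * rho * V**2 * S * CL
    (rho in slugs/ft^3, V in ft/s, S in ft^2, CL dimensionless)."""
    return 0.5 * rho * V ** 2 * S * CL

rho = 0.002377          # standard sea-level air density, slugs/ft^3
S, CL = 170.0, 0.4      # illustrative wing area and lift coefficient

V1 = 100 * 1.688        # 100 knots converted to feet per second
V2 = 200 * 1.688        # 200 knots
print(lift(rho, V1, S, CL))                           # lift at 100 knots, in pounds
print(lift(rho, V2, S, CL) / lift(rho, V1, S, CL))    # 4.0 -- doubling the speed quadruples the lift
```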
Therefore, to keep the aircraft straight and level (not accelerating upward) and in a state of equilibrium, as velocity is increased, lift must be kept constant. This is normally accomplished by reducing the AOA by lowering the nose. Conversely, as the aircraft is slowed, the decreasing velocity requires increasing the AOA to maintain lift sufficient to maintain flight. There is, of course, a limit to how far the AOA can be increased, if a stall is to be avoided. All other factors being constant, for every AOA there is a corresponding airspeed required to maintain altitude in steady, unaccelerated flight (true only if maintaining level flight).

Since an airfoil always stalls at the same AOA, if weight is increased, lift must also be increased. The only method of increasing lift is by increasing velocity if the AOA is held constant just short of the “critical,” or stalling, AOA (assuming no flaps or other high-lift devices).

Lift and drag also vary directly with the density of the air. Density is affected by several factors: pressure, temperature, and humidity. At an altitude of 18,000 feet, the air has one-half the density of the air at sea level. In order to maintain its lift at a higher altitude, an aircraft must fly at a greater true airspeed for any given AOA. Warm air is less dense than cool air, and moist air is less dense than dry air. Thus, on a hot humid day, an aircraft must be flown at a greater true airspeed for any given AOA than on a cool, dry day. If the density factor is decreased and the total lift must equal the total weight to remain in flight, it follows that one of the other factors must be increased. The factors usually increased are the airspeed or the AOA, because these are controlled directly by the pilot.
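The density effect just described follows directly from the lift equation. As a minimal sketch, holding lift, CL, and wing area constant gives ρ₁V₁² = ρ₂V₂², so V₂ = V₁ × √(ρ₁/ρ₂); the sea-level airspeed below is an illustrative assumption.

```python
# True airspeed needed to keep lift constant at a fixed AOA when density
# drops. From L = CL * 1/2 * rho * V^2 * S with L, CL, S held constant:
# rho1 * V1^2 = rho2 * V2^2  =>  V2 = V1 * sqrt(rho1 / rho2)
import math

rho_sl = 0.002377      # sea-level air density, slugs/ft^3
rho_18k = rho_sl / 2   # per the text: roughly half density at 18,000 ft
v_sl = 120.0           # true airspeed needed at sea level, knots (assumed)

v_18k = v_sl * math.sqrt(rho_sl / rho_18k)
print(f"Required true airspeed at 18,000 ft: {v_18k:.1f} kt")  # ~169.7 kt
```

Halving the density thus demands roughly a 41 percent increase in true airspeed for the same AOA.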
Lift varies directly with the wing area, provided there is no change in the wing’s planform. If the wings have the same proportion and airfoil sections, a wing with a planform area of 200 square feet lifts twice as much at the same AOA as a wing with an area of 100 square feet.

Two major aerodynamic factors from the pilot’s viewpoint are lift and airspeed because they can be controlled readily and accurately. Of course, the pilot can also control density by adjusting the altitude and can control wing area if the aircraft happens to have flaps of the type that enlarge wing area. However, for most situations, the pilot controls lift and airspeed to maneuver an aircraft. For instance, in straight-and-level flight, cruising along at a constant altitude, altitude is maintained by adjusting lift to match the aircraft’s velocity or cruise airspeed, while maintaining a state of equilibrium in which lift equals weight. In an approach to landing, when the pilot wishes to land as slowly as practical, it is necessary to increase the AOA to near maximum to maintain lift equal to the weight of the aircraft.

The lift-to-drag ratio (L/D) is the amount of lift generated by a wing or airfoil compared to its drag. The L/D ratio indicates airfoil efficiency: aircraft with higher L/D ratios are more efficient than those with lower L/D ratios. In unaccelerated flight with the lift and drag data steady, the proportions of the coefficient of lift (CL) and coefficient of drag (CD) can be calculated for a specific AOA. [Figure 5]

The coefficient of lift is dimensionless and relates the lift generated by a lifting body, the dynamic pressure of the fluid flow around the body, and a reference area associated with the body. The coefficient of drag is also dimensionless and is used to quantify the drag of an object in a fluid environment, such as air; it is always associated with a particular surface area. The L/D ratio is determined by dividing the CL by the CD, which is the same as dividing the lift equation by the drag equation, as all of the variables aside from the coefficients cancel out. The lift and drag equations are as follows:

L = CL × ½ρV² × S
D = CD × ½ρV² × S

where L = lift (in pounds); D = drag; CL = coefficient of lift; ρ = density (expressed in slugs per cubic foot); V = velocity (in feet per second); q = dynamic pressure per square foot (q = ½ρV²); S = the area of the lifting body (in square feet); and CD = ratio of drag pressure to dynamic pressure.

Typically at low AOA, the coefficient of drag is low and small changes in AOA create only slight changes in the coefficient of drag. At high AOA, small changes in the AOA cause significant changes in drag. The shape of an airfoil, as well as changes in the AOA, affects the production of lift.

Notice in Figure 5 that the coefficient of lift curve (red) reaches its maximum for this particular wing section at 20° AOA and then rapidly decreases. 20° AOA is therefore the critical angle of attack. The coefficient of drag curve (orange) increases very rapidly from 14° AOA and completely overcomes the lift curve at 21° AOA. The lift/drag ratio (green) reaches its maximum at 6° AOA, meaning that at this angle, the most lift is obtained for the least amount of drag.

Note that the maximum lift/drag ratio (L/DMAX) occurs at one specific CL and AOA. If the aircraft is operated in steady flight at L/DMAX, the total drag is at a minimum. Any AOA lower or higher than that for L/DMAX reduces the L/D and consequently increases the total drag for a given aircraft’s lift. Figure 6 depicts the L/DMAX by the lowest portion of the blue line labeled “total drag.” The configuration of an aircraft has a great effect on the L/D.

|Figure 6. Drag versus speed|
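The existence of a single best L/D can be shown with a simple model. The following minimal sketch assumes a parabolic drag polar, CD = CD0 + k × CL²; both coefficients are invented for illustration and are not taken from Figure 5 or Figure 6.

```python
# Finding the CL (and hence the AOA) that maximizes L/D under an assumed
# parabolic drag polar, CD = CD0 + K * CL^2. Coefficients are illustrative.

CD0 = 0.025  # zero-lift (parasite) drag coefficient, assumed
K = 0.054    # induced drag factor, assumed

def l_over_d(cl: float) -> float:
    """Lift-to-drag ratio CL / CD for the assumed polar."""
    return cl / (CD0 + K * cl**2)

candidates = [0.1 * i for i in range(1, 16)]  # CL from 0.1 to 1.5
best = max(candidates, key=l_over_d)
print(f"L/D max ~ {l_over_d(best):.1f} at CL ~ {best:.1f}")
```

For a polar of this form the maximum occurs analytically at CL = √(CD0/k); the grid search above simply makes the shape of the green curve in Figure 5 concrete.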
Drag is the force that resists movement of an aircraft through the air. There are two basic types: parasite drag and induced drag. The first is called parasite because it in no way functions to aid flight, while the second, induced drag, is a result of an airfoil developing lift.

Parasite drag is composed of all the forces that work to slow an aircraft’s movement. As the term parasite implies, it is the drag that is not associated with the production of lift. This includes the displacement of the air by the aircraft, turbulence generated in the airstream, or a hindrance of air moving over the surface of the aircraft and airfoil. There are three types of parasite drag: form drag, interference drag, and skin friction drag.

Form drag is the portion of parasite drag generated by the aircraft due to its shape and the airflow around it. Examples include the engine cowlings, antennas, and the aerodynamic shape of other components. When the air has to separate to move around a moving aircraft and its components, it eventually rejoins after passing the body. How quickly and smoothly it rejoins is representative of the resistance that it creates, which requires additional force to overcome. [Figure 7]

|Figure 7. Form drag|

Notice how the flat plate in Figure 7 causes the air to swirl around the edges until it eventually rejoins downstream. Form drag is the easiest to reduce when designing an aircraft. The solution is to streamline as many of the parts as possible.

Interference drag comes from the intersection of airstreams that creates eddy currents, turbulence, or restricts smooth airflow. For example, the intersection of the wing and the fuselage at the wing root has significant interference drag. Air flowing around the fuselage collides with air flowing over the wing, merging into a current of air different from the two original currents. The greatest interference drag is observed when two surfaces meet at perpendicular angles. Fairings are used to reduce this tendency. If a jet fighter carries two identical wing tanks, the overall drag is greater than the sum of the individual tanks because of the additional interference drag generated between them. Fairings and distance between lifting surfaces and external components (such as radar antennas hung from wings) reduce interference drag. [Figure 8]

|Figure 8. A wing root can cause interference drag|

Skin friction drag is the aerodynamic resistance due to the contact of moving air with the surface of an aircraft. Every surface, no matter how apparently smooth, has a rough, ragged surface when viewed under a microscope. The air molecules, which come in direct contact with the surface of the wing, are virtually motionless. Each layer of molecules above the surface moves slightly faster, until the molecules are moving at the velocity of the air moving around the aircraft. This speed is called the free-stream velocity. The area between the wing and the free-stream velocity level is about as wide as a playing card and is called the boundary layer. At the top of the boundary layer, the molecules move at the same speed as the molecules outside the boundary layer. The actual speed at which the molecules move depends upon the shape of the wing, the viscosity (stickiness) of the air through which the wing or airfoil is moving, and its compressibility (how much it can be compacted).

The airflow outside of the boundary layer reacts to the shape of the edge of the boundary layer just as it would to the physical surface of an object. The boundary layer gives any object an “effective” shape that is usually slightly different from the physical shape. The boundary layer may also separate from the body, thus creating an effective shape much different from the physical shape of the object. This change in the effective shape causes a dramatic decrease in lift and an increase in drag. When this happens, the airfoil has stalled.

In order to reduce the effect of skin friction drag, aircraft designers utilize flush-mount rivets and remove any irregularities that may protrude above the wing surface. In addition, a smooth and glossy finish aids in the transition of air across the surface of the wing. Since dirt on an aircraft disrupts the free flow of air and increases drag, keep the surfaces of an aircraft clean and waxed.
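The scale of skin friction drag can be estimated with a standard flat-plate approximation. This is a minimal sketch using the classic turbulent flat-plate correlation Cf ≈ 0.074/Re^(1/5), a textbook approximation rather than anything given in this text; every numeric value below is an assumption (SI units are used here for convenience).

```python
# Rough skin-friction drag estimate from the turbulent flat-plate
# correlation Cf ~ 0.074 / Re^(1/5). All values are illustrative.

RHO = 1.225         # air density, kg/m^3 (sea level)
MU = 1.81e-5        # dynamic viscosity of air, kg/(m*s)
V = 60.0            # airspeed, m/s (assumed)
CHORD = 1.5         # characteristic length (wing chord), m (assumed)
WETTED_AREA = 32.0  # wetted area of the wing, m^2 (assumed)

re = RHO * V * CHORD / MU      # Reynolds number based on chord
cf = 0.074 / re**0.2           # average skin-friction coefficient
drag = cf * 0.5 * RHO * V**2 * WETTED_AREA
print(f"Re = {re:.2e}, Cf = {cf:.5f}, skin-friction drag ~ {drag:.0f} N")
```

Even this rough estimate shows why smooth, clean surfaces matter: the friction coefficient applies over the entire wetted area of the aircraft.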
The second basic type of drag is induced drag. It is an established physical fact that no system that does work in the mechanical sense can be 100 percent efficient. This means that whatever the nature of the system, the required work is obtained at the expense of certain additional work that is dissipated or lost in the system. The more efficient the system, the smaller this loss. In level flight, the aerodynamic properties of a wing or rotor produce a required lift, but this can be obtained only at the expense of a certain penalty. The name given to this penalty is induced drag.

Induced drag is inherent whenever an airfoil is producing lift and, in fact, this type of drag is inseparable from the production of lift. Consequently, it is always present if lift is produced. An airfoil (wing or rotor blade) produces the lift force by making use of the energy of the free airstream. Whenever an airfoil is producing lift, the pressure on its lower surface is greater than that on the upper surface (Bernoulli’s Principle). As a result, the air tends to flow from the high pressure area below the tip upward to the low pressure area on the upper surface. In the vicinity of the tips, there is a tendency for these pressures to equalize, resulting in a lateral flow outward from the underside to the upper surface. This lateral flow imparts a rotational velocity to the air at the tips, creating vortices that trail behind the airfoil. When the aircraft is viewed from the tail, these vortices circulate counterclockwise about the right tip and clockwise about the left tip. [Figure 9]

As the air (and vortices) roll off the back of the wing, they angle down, which is known as downwash. Figure 10 shows the difference in downwash at altitude versus near the ground. Bearing in mind the direction of rotation of these vortices, it can be seen that they induce an upward flow of air beyond the tip and a downwash flow behind the wing’s trailing edge. This induced downwash has nothing in common with the downwash that is necessary to produce lift. It is, in fact, the source of induced drag.

|Figure 9. Wingtip vortex from a crop duster|

|Figure 10. The difference in wingtip vortex size at altitude versus near the ground|

Downwash points the relative wind downward, so the more downwash there is, the more the relative wind points downward. That is important for one very good reason: lift is always perpendicular to the relative wind. In Figure 11, you can see that with less downwash, the lift vector is more vertical, opposing gravity. With more downwash, the lift vector points back more, causing induced drag. On top of that, it takes energy for the wings to create downwash and vortices, and that energy creates drag.

|Figure 11. The difference in downwash at altitude versus near the ground|

The greater the size and strength of the vortices and consequent downwash component on the net airflow over the airfoil, the greater the induced drag effect becomes. This downwash over the top of the airfoil at the tip has the same effect as bending the lift vector rearward; therefore, the lift is slightly aft of perpendicular to the relative wind, creating a rearward lift component. This is induced drag.

In order to create a greater negative pressure on the top of an airfoil, the airfoil can be inclined to a higher AOA. If the AOA of a symmetrical airfoil were zero, there would be no pressure differential and, consequently, no downwash component and no induced drag. In any case, as AOA increases, induced drag increases proportionally. To state this another way: the lower the airspeed, the greater the AOA required to produce lift equal to the aircraft’s weight and, therefore, the greater the induced drag. The amount of induced drag varies inversely with the square of the airspeed. Conversely, parasite drag increases as the square of the airspeed. Thus, in steady state, as airspeed decreases to near the stalling speed, the total drag becomes greater, due mainly to the sharp rise in induced drag.
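This closing point, induced drag falling with 1/V² while parasite drag grows with V², is exactly what produces the U-shaped total drag curve of Figure 6. A minimal sketch follows, with both scaling constants invented purely to show the shape of the curve:

```python
# Parasite drag grows with V^2, induced drag shrinks with 1/V^2, and the
# sum has a minimum (the L/D-max speed). Constants A and B are assumed.

A = 0.0008    # parasite drag scaling, lb/kt^2 (assumed)
B = 120000.0  # induced drag scaling, lb*kt^2 (assumed)

def total_drag(v_kt: float) -> float:
    """Total drag as the sum of parasite and induced contributions."""
    parasite = A * v_kt**2
    induced = B / v_kt**2
    return parasite + induced

speeds = range(60, 201, 10)
v_min = min(speeds, key=total_drag)
for v in speeds:
    print(f"{v:>3} kt: total drag ~ {total_drag(v):6.1f} lb")
print(f"Minimum total drag near {v_min} kt")
```

At low speed the induced term dominates; at high speed the parasite term does; the minimum sits where the two contributions are equal.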
Similarly, as the aircraft reaches its never-exceed speed (VNE), the total drag increases rapidly due to the sharp increase of parasite drag. As seen in Figure 6, at some given airspeed, total drag is at its minimum amount. In figuring the maximum range of an aircraft, the thrust required to overcome drag is at a minimum if drag is at a minimum. Minimum power and maximum endurance occur at a different point.

Gravity is the pulling force that tends to draw all bodies to the center of the earth. The CG may be considered as a point at which all the weight of the aircraft is concentrated. If the aircraft were supported at its exact CG, it would balance in any attitude. The CG is of major importance in an aircraft, for its position has a great bearing upon stability. The allowable location of the CG is determined by the general design of each particular aircraft. The designers determine how far the center of pressure (CP) will travel. It is important to understand that an aircraft’s weight is concentrated at the CG and the aerodynamic forces of lift occur at the CP. When the CG is forward of the CP, there is a natural tendency for the aircraft to pitch nose down. If the CP is forward of the CG, a nose-up pitching moment is created. Therefore, designers fix the aft limit of the CG forward of the CP for the corresponding flight speed in order to retain flight equilibrium.

Weight has a definite relationship to lift. This relationship is simple, but important in understanding the aerodynamics of flying. Lift is the upward force on the wing acting perpendicular to the relative wind and perpendicular to the aircraft’s lateral axis. Lift is required to counteract the aircraft’s weight. In stabilized level flight, when the lift force is equal to the weight force, the aircraft is in a state of equilibrium and neither accelerates upward nor downward. If lift becomes less than weight, the vertical speed will decrease. When lift is greater than weight, the vertical speed will increase.
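The lift and weight relationship just described reduces to Newton's second law applied vertically: a = (L − W)/m. A minimal sketch, with the weight value an illustrative assumption:

```python
# Vertical acceleration from the net vertical force, a = (L - W) / m.
# The weight is an illustrative assumption.

G = 32.2           # gravitational acceleration, ft/s^2
WEIGHT = 2400.0    # aircraft weight, lb (assumed)
MASS = WEIGHT / G  # mass in slugs

for lift in (2300.0, 2400.0, 2500.0):
    a = (lift - WEIGHT) / MASS
    if a < 0:
        trend = "vertical speed decreases"
    elif a > 0:
        trend = "vertical speed increases"
    else:
        trend = "equilibrium"
    print(f"L = {lift:.0f} lb: vertical acceleration {a:+.2f} ft/s^2 ({trend})")
```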
|Stonehenge in 2014|
|Official name||Stonehenge, Avebury and Associated Sites|
|Criteria||i, ii, iii|
|Designated||1986 (10th session)|
|Region||Europe and North America|

Stonehenge is a prehistoric monument located in Wiltshire, England, about 2 miles (3 km) west of Amesbury and 8 miles (13 km) north of Salisbury. One of the most famous sites in the world, Stonehenge is the remains of a ring of standing stones set within earthworks. It is in the middle of the densest complex of Neolithic and Bronze Age monuments in England, including several hundred burial mounds. Archaeologists believe it was built anywhere from 3000 BC to 2000 BC. Radiocarbon dating in 2008 suggested that the first stones were raised between 2400 and 2200 BC, whilst another theory suggests that bluestones may have been raised at the site as early as 3000 BC. The surrounding circular earth bank and ditch, which constitute the earliest phase of the monument, have been dated to about 3100 BC.

The site and its surroundings were added to UNESCO's list of World Heritage Sites in 1986 in a co-listing with Avebury Henge. It is a legally protected Scheduled Ancient Monument. Stonehenge is owned by the Crown and managed by English Heritage, while the surrounding land is owned by the National Trust.

Archaeological evidence found by the Stonehenge Riverside Project in 2008 indicates that Stonehenge could have been a burial ground from its earliest beginnings. The dating of cremated remains found on the site indicates that deposits contain human bone from as early as 3000 BC, when the ditch and bank were first dug. Such deposits continued at Stonehenge for at least another 500 years.

The Oxford English Dictionary cites Ælfric's tenth-century glossary, in which henge-cliff is given the meaning "precipice" or stone; thus the stanenges or Stanheng "not far from Salisbury" recorded by eleventh-century writers are "supported stones". William Stukeley in 1740 notes, "Pendulous rocks are now called henges in Yorkshire...I doubt not, Stonehenge in Saxon signifies the hanging stones." Christopher Chippindale's Stonehenge Complete gives the derivation of the name Stonehenge as coming from the Old English words stān meaning "stone", and either hencg meaning "hinge" (because the stone lintels hinge on the upright stones) or hen(c)en meaning "hang", "gallows", or "instrument of torture" (though elsewhere in his book, Chippindale cites the "suspended stones" etymology). Like Stonehenge's trilithons, medieval gallows consisted of two uprights with a lintel joining them, rather than the inverted L-shape more familiar today.

The "henge" portion has given its name to a class of monuments known as henges. Archaeologists define henges as earthworks consisting of a circular banked enclosure with an internal ditch. As often happens in archaeological terminology, this is a holdover from antiquarian use, and Stonehenge is not truly a henge site, as its bank is inside its ditch.
Despite being contemporary with true Neolithic henges and stone circles, Stonehenge is in many ways atypical. For example, at more than 7.3 metres (24 ft) tall, its extant trilithons, supporting lintels held in place with mortise and tenon joints, make it unique.

"Stonehenge was a place of burial from its beginning to its zenith in the mid third millennium BC. The cremation burial dating to Stonehenge's sarsen stones phase is likely just one of many from this later period of the monument's use and demonstrates that it was still very much a domain of the dead." (Mike Parker Pearson)

Stonehenge evolved in several construction phases spanning at least 1,500 years. There is evidence of large-scale construction on and around the monument that perhaps extends the landscape's time frame to 6,500 years. Dating and understanding the various phases of activity are complicated by disturbance of the natural chalk by periglacial effects and animal burrowing, poor-quality early excavation records, and a lack of accurate, scientifically verified dates. The modern phasing most generally agreed upon by archaeologists is detailed below. Features mentioned in the text are numbered as shown on the original site plan.

Before the monument (8000 BC forward)

Archaeologists have found four, or possibly five, large Mesolithic postholes (one may have been a natural tree throw), which date to around 8000 BC, beneath the nearby modern tourist car park. These held pine posts around 0.75 metres (2 ft 6 in) in diameter, which were erected and eventually rotted in situ. Three of the posts (and possibly four) were in an east-west alignment which may have had ritual significance; no parallels are known from Britain at the time, but similar sites have been found in Scandinavia. Salisbury Plain was then still wooded, but 4,000 years later, during the earlier Neolithic, people built a causewayed enclosure at Robin Hood's Ball and long barrow tombs in the surrounding landscape. In approximately 3500 BC, the Stonehenge Cursus was built 700 metres (2,300 ft) north of the site as the first farmers began to clear the trees and develop the area. A number of other adjacent stone and wooden structures and burial mounds, previously overlooked, may date as far back as 4000 BC. Charcoal from the 'Blick Mead' camp 2.4 kilometres (1.5 mi) from Stonehenge has been dated to 4000 BC. The University of Buckingham's Humanities Research Institute believes that the community who built Stonehenge lived here over a period of several millennia, making it potentially 'one of the pivotal places in the history of the Stonehenge landscape'.

Stonehenge 1 (ca. 3100 BC)

The first monument consisted of a circular bank and ditch enclosure made of Late Cretaceous (Santonian Age) Seaford Chalk (7 and 8), measuring about 110 metres (360 ft) in diameter, with a large entrance to the north east and a smaller one to the south (14). It stood in open grassland on a slightly sloping spot. The builders placed the bones of deer and oxen in the bottom of the ditch, as well as some worked flint tools. The bones were considerably older than the antler picks used to dig the ditch, and the people who buried them had looked after them for some time prior to burial. The ditch was continuous but had been dug in sections, like the ditches of the earlier causewayed enclosures in the area. The chalk dug from the ditch was piled up to form the bank. This first stage is dated to around 3100 BC, after which the ditch began to silt up naturally.
Within the outer edge of the enclosed area is a circle of 56 pits (13), each about a metre (3 ft 3 in) in diameter, known as the Aubrey holes after John Aubrey, the seventeenth-century antiquarian who was thought to have first identified them. The pits may have contained standing timbers creating a timber circle, although there is no excavated evidence of them. A recent excavation has suggested that the Aubrey Holes may have originally been used to erect a bluestone circle. If this were the case, it would advance the earliest known stone structure at the monument by some 500 years. A small outer bank beyond the ditch could also date to this period.

In 2013 a team of archaeologists, led by Mike Parker Pearson, excavated more than 50,000 cremated bones of 63 individuals buried at Stonehenge. These remains had originally been buried individually in the Aubrey holes, exhumed during a previous excavation conducted by William Hawley in 1920, been considered unimportant by him, and subsequently re-interred together in one hole, Aubrey Hole 7, in 1935. Physical and chemical analysis of the remains has shown that the cremated individuals were almost equally men and women, and included some children. As there was evidence of the underlying chalk beneath the graves being crushed by substantial weight, the team concluded that the first bluestones brought from Wales were probably used as grave markers. Radiocarbon dating of the remains has put the date of the site 500 years earlier than previously estimated, to around 3000 BC. Analysis of animal teeth found at nearby Durrington Walls, thought to be the builders' camp, suggests that as many as 4,000 people gathered at the site for the mid-winter and mid-summer festivals; the evidence showed that the animals had been slaughtered around 9 months or 15 months after their spring birth. Strontium isotope analysis of the animal teeth showed that some had travelled from as far afield as the Scottish Highlands for the celebrations.

Stonehenge 2 (ca. 3000 BC)

Evidence of the second phase is no longer visible. The number of postholes dating to the early 3rd millennium BC suggests that some form of timber structure was built within the enclosure during this period. Further standing timbers were placed at the northeast entrance, and a parallel alignment of posts ran inwards from the southern entrance. The postholes are smaller than the Aubrey Holes, being only around 0.4 metres (16 in) in diameter, and are much less regularly spaced. The bank was purposely reduced in height and the ditch continued to silt up. At least twenty-five of the Aubrey Holes are known to have contained later, intrusive, cremation burials dating to the two centuries after the monument's inception. It seems that whatever the holes' initial function, it changed to become a funerary one during Phase 2. Thirty further cremations were placed in the enclosure's ditch and at other points within the monument, mostly in the eastern half. Stonehenge is therefore interpreted as having functioned as an enclosed cremation cemetery at this time, the earliest known cremation cemetery in the British Isles. Fragments of unburnt human bone have also been found in the ditch-fill. Dating evidence is provided by the late Neolithic grooved ware pottery that has been found in connection with the features from this phase.

Stonehenge 3 I (ca. 2600 BC)
Archaeological excavation has indicated that around 2600 BC, the builders abandoned timber in favour of stone and dug two concentric arrays of holes (the Q and R Holes) in the centre of the site. These stone sockets are only partly known (hence on present evidence they are sometimes described as forming 'crescents'); however, they could be the remains of a double ring. Again, there is little firm dating evidence for this phase. The holes held up to 80 standing stones (shown blue on the plan), only 43 of which can be traced today. The bluestones (some of which are made of dolerite, an igneous rock) were thought for much of the twentieth century to have been transported by humans from the Preseli Hills, 150 miles (240 km) away in modern-day Pembrokeshire in Wales. Another theory that has recently gained support is that they were brought much nearer to the site as glacial erratics by the Irish Sea Glacier. Other standing stones may well have been small sarsens (sandstone), used later as lintels. The stones, which weighed about four tons, consisted mostly of spotted Ordovician dolerite but included examples of rhyolite, tuff, and volcanic and calcareous ash; in total around 20 different rock types are represented. Each monolith measures around 2 metres (6.6 ft) in height, between 1 m and 1.5 m (3.3–4.9 ft) wide, and around 0.8 metres (2.6 ft) thick. What was to become known as the Altar Stone (1) is almost certainly derived from either Carmarthenshire or the Brecon Beacons and may have stood as a single large monolith.

The north-eastern entrance was widened at this time, with the result that it precisely matched the direction of the midsummer sunrise and midwinter sunset of the period. This phase of the monument was abandoned unfinished, however; the small standing stones were apparently removed and the Q and R holes purposefully backfilled. Even so, the monument appears to have eclipsed the site at Avebury in importance towards the end of this phase.

The Heel Stone (5), a Tertiary sandstone, may also have been erected outside the north-eastern entrance during this period. It cannot be accurately dated and may have been installed at any time during phase 3. At first it was accompanied by a second stone, which is no longer visible. Two, or possibly three, large portal stones were set up just inside the north-eastern entrance, of which only one, the fallen Slaughter Stone (4), 4.9 metres (16 ft) long, now remains. Other features, loosely dated to phase 3, include the four Station Stones (6), two of which stood atop mounds (2 and 3). The mounds are known as "barrows" although they do not contain burials. Stonehenge Avenue (10), a parallel pair of ditches and banks leading 2 miles (3 km) to the River Avon, was also added. Two ditches similar to the one circling the Heel Stone (which was by then reduced to a single monolith) were later dug around the Station Stones.

Stonehenge 3 II (2600 BC to 2400 BC)

During the next major phase of activity, 30 enormous Oligocene-Miocene sarsen stones (shown grey on the plan) were brought to the site. They may have come from a quarry around 25 miles (40 km) north of Stonehenge on the Marlborough Downs, or they may have been collected from a "litter" of sarsens on the chalk downs, closer to hand. The stones were dressed and fashioned with mortise and tenon joints before 30 were erected as a 33 metres (108 ft) diameter circle of standing stones, with a ring of 30 lintel stones resting on top.
The lintels were fitted to one another using another woodworking method, the tongue and groove joint. Each standing stone was around 4.1 metres (13 ft) high, 2.1 metres (6 ft 11 in) wide, and weighed around 25 tons. Each had clearly been worked with the final visual effect in mind; the orthostats widen slightly towards the top in order that their perspective remains constant when viewed from the ground, while the lintel stones curve slightly to continue the circular appearance of the earlier monument. The inward-facing surfaces of the stones are smoother and more finely worked than the outer surfaces. The average thickness of the stones is 1.1 metres (3 ft 7 in) and the average distance between them is 1 metre (3 ft 3 in). A total of 75 stones would have been needed to complete the circle (60 stones) and the trilithon horseshoe (15 stones). It was thought the ring might have been left incomplete, but an exceptionally dry summer in 2013 revealed patches of parched grass which may correspond to the location of removed sarsens. The lintel stones are each around 3.2 metres (10 ft) long, 1 metre (3 ft 3 in) wide, and 0.8 metres (2 ft 7 in) thick. The tops of the lintels are 4.9 metres (16 ft) above the ground.

Within this circle stood five trilithons of dressed sarsen stone arranged in a horseshoe shape 13.7 metres (45 ft) across, with its open end facing north east. These huge stones, ten uprights and five lintels, weigh up to 50 tons each. They were linked using complex jointing and are arranged symmetrically. The smallest pair of trilithons, in the north east, were around 6 metres (20 ft) tall, the next pair a little higher, and the largest, the single trilithon in the south west corner, would have been 7.3 metres (24 ft) tall. Only one upright from the Great Trilithon still stands, of which 6.7 metres (22 ft) is visible and a further 2.4 metres (7 ft 10 in) is below ground.

The images of a 'dagger' and 14 'axeheads' have been carved on one of the sarsens, known as stone 53; further carvings of axeheads have been seen on the outer faces of stones 3, 4, and 5. The carvings are difficult to date but are morphologically similar to late Bronze Age weapons; recent laser scanning work on the carvings supports this interpretation.

This ambitious phase has been radiocarbon dated to between 2600 and 2400 BC, slightly earlier than the Stonehenge Archer, discovered in the outer ditch of the monument in 1978, and the two sets of burials, known as the Amesbury Archer and the Boscombe Bowmen, discovered 3 miles (5 km) to the west. At about the same time, a large timber circle and a second avenue were constructed 2 miles (3 km) away at Durrington Walls overlooking the River Avon. The timber circle was oriented towards the rising sun on the midwinter solstice, opposing the solar alignments at Stonehenge, whilst the avenue was aligned with the setting sun on the summer solstice and led from the river to the timber circle. Evidence of huge fires on the banks of the Avon between the two avenues also suggests that both circles were linked, and they were perhaps used as a procession route on the longest and shortest days of the year. Parker Pearson speculates that the wooden circle at Durrington Walls was the centre of a 'land of the living', whilst the stone circle represented a 'land of the dead', with the Avon serving as a journey between the two.
Stonehenge 3 III (2400 BC to 2280 BC)

Later in the Bronze Age, although the exact details of activities during this period are still unclear, the bluestones appear to have been re-erected. They were placed within the outer sarsen circle and may have been trimmed in some way. Like the sarsens, a few have timber-working style cuts in them, suggesting that, during this phase, they may have been linked with lintels and were part of a larger structure.

Stonehenge 3 IV (2280 BC to 1930 BC)

This phase saw further rearrangement of the bluestones. They were arranged in a circle between the two rings of sarsens and in an oval at the centre of the inner ring. Some archaeologists argue that some of these bluestones were from a second group brought from Wales. All the stones formed well-spaced uprights without any of the linking lintels inferred in Stonehenge 3 III. The Altar Stone may have been moved within the oval at this time and re-erected vertically. Although this would seem the most impressive phase of work, Stonehenge 3 IV was rather shabbily built compared to its immediate predecessors, as the newly re-installed bluestones were not well founded and began to fall over. However, only minor changes were made after this phase.

Stonehenge 3 V (1930 BC to 1600 BC)

Soon afterwards, the north eastern section of the Phase 3 IV bluestone circle was removed, creating a horseshoe-shaped setting (the Bluestone Horseshoe) which mirrored the shape of the central sarsen trilithons. This phase is contemporary with the Seahenge site in Norfolk.

After the monument (1600 BC on)

The Y and Z Holes are the last known construction at Stonehenge, built about 1600 BC, and the last usage of it was probably during the Iron Age. Roman coins and medieval artefacts have been found in or around the monument, but it is unknown whether the monument was in continuous use throughout British prehistory and beyond, or exactly how it would have been used. Notable is the massive Iron Age hillfort Vespasian's Camp, built alongside the Avenue near the Avon. A decapitated seventh-century Saxon man was excavated from Stonehenge in 1923. The site was known to scholars during the Middle Ages, and since then it has been studied and adopted by numerous groups.

Function and construction

Stonehenge was produced by a culture that left no written records. Many aspects of Stonehenge remain subject to debate, and a number of myths surround the stones. There is little or no direct evidence for the construction techniques used by the Stonehenge builders. Over the years, various authors have suggested that supernatural or anachronistic methods were used, usually asserting that the stones were impossible to move otherwise. However, conventional techniques, using Neolithic technology as basic as shear legs, have been demonstrably effective at moving and placing stones of a similar size.

Proposed functions for the site include usage as an astronomical observatory or as a religious site. More recently, two major new theories have been proposed. Geoffrey Wainwright MBE, FSA, a professor and president of the Society of Antiquaries of London, and Timothy Darvill OBE, of Bournemouth University, have suggested that Stonehenge was a place of healing, the primeval equivalent of Lourdes. They argue that this accounts for the high number of burials in the area and for the evidence of trauma deformity in some of the graves. However, they do concede that the site was probably multifunctional and used for ancestor worship as well.
Isotope analysis indicates that some of the buried individuals were from other regions. A teenage boy buried around 1550 BC was raised near the Mediterranean Sea; a metal worker from 2300 BC dubbed the "Amesbury Archer" grew up near the alpine foothills of Germany; and the "Boscombe Bowmen" probably arrived from Wales or Brittany, France.

On the other hand, Mike Parker Pearson of Sheffield University has suggested that Stonehenge was part of a ritual landscape and was joined to Durrington Walls by their corresponding avenues and the River Avon. He suggests that the area around Durrington Walls Henge was a place of the living, whilst Stonehenge was a domain of the dead. A journey along the Avon to reach Stonehenge was part of a ritual passage from life to death, to celebrate past ancestors and the recently deceased. Both explanations were first mooted in the twelfth century by Geoffrey of Monmouth (below), who extolled the curative properties of the stones and was also the first to advance the idea that Stonehenge was constructed as a funerary monument. Whatever religious, mystical, or spiritual elements were central to Stonehenge, its design includes a celestial observatory function, which might have allowed prediction of eclipses, solstices, equinoxes, and other celestial events important to a contemporary religion.

There are other hypotheses and theories. According to a team of British researchers led by Mike Parker Pearson of the University of Sheffield, Stonehenge may have been built as a symbol of "peace and unity", indicated in part by the fact that at the time of its construction, Britain's Neolithic people were experiencing a period of cultural unification. Another idea has to do with a quality of the stones themselves: researchers from the Royal College of Art in London have discovered that some of the monument's stones possess "unusual acoustic properties"; when they are struck, they respond with a "loud clanging noise". According to Paul Devereux, editor of the journal Time and Mind: The Journal of Archaeology, Consciousness and Culture, this idea could explain why certain bluestones were hauled nearly 200 miles, a major technical accomplishment at the time. In certain ancient cultures, rocks that ring out, known as lithophones, were believed to contain mystic or healing powers, and Stonehenge has a history of association with rituals. The presence of these "ringing rocks" seems to support the hypothesis that Stonehenge was a "place for healing", as has been pointed out by Bournemouth University archaeologist Timothy Darvill, who consulted with the researchers. Some of the stones of Stonehenge were brought from near a town in Wales called Maenclochog, a name which means "ringing rock".

"Heel Stone," "Friar's Heel," or "Sun-Stone"

The Heel Stone lies north east of the sarsen circle, beside the end portion of Stonehenge Avenue. It is a rough stone, 16 feet (4.9 m) above ground, leaning inwards towards the stone circle. It has been known by many names in the past, including "Friar's Heel" and "Sun-stone". Today it is uniformly referred to as the Heel Stone. At the summer solstice, an observer standing within the stone circle, looking north-east through the entrance, would see the Sun rise in the approximate direction of the Heel Stone, and the Sun has often been photographed over it, but the centre of the stone circle and the Heel Stone are not in fact exactly aligned with the solstice sunrise.
- The Devil bought the stones from a woman in Ireland, wrapped them up, and brought them to Salisbury Plain. One of the stones fell into the Avon; the rest were carried to the plain. The Devil then cried out, "No one will ever find out how these stones came here!" A friar replied, "That's what you think!", whereupon the Devil threw one of the stones at him and struck him on the heel. The stone stuck in the ground and is still there.

Brewer's Dictionary of Phrase and Fable attributes this tale to Geoffrey of Monmouth, but though book eight of Geoffrey's Historia Regum Britanniae does describe how Stonehenge was built, the two stories are entirely different.

In the twelfth century, Geoffrey of Monmouth included a fanciful story in his work Historia Regum Britanniae that attributed the monument's construction to Merlin. Geoffrey's story spread widely, appearing in more and less elaborate forms in adaptations of his work such as Wace's Norman French Roman de Brut, Layamon's Middle English Brut, and the Welsh Brut y Brenhinedd. According to Geoffrey, the rocks of Stonehenge were healing rocks, called the Giants' Dance, which giants had brought from Africa to Ireland for their healing properties. The fifth-century king Aurelius Ambrosius wished to erect a memorial to 3,000 nobles slain in battle against the Saxons and buried at Salisbury, and at Merlin's advice chose Stonehenge. The king sent Merlin, Uther Pendragon (Arthur's father), and 15,000 knights to remove it from Ireland, where it had been constructed on Mount Killaraus by the giants. They slew 7,000 Irish but, as the knights tried to move the rocks with ropes and force, they failed. Then Merlin, using "gear" and skill, easily dismantled the stones and sent them over to Britain, where Stonehenge was dedicated. After it had been rebuilt near Amesbury, Geoffrey further narrates how first Ambrosius Aurelianus, then Uther Pendragon, and finally Constantine III were buried inside the "Giants' Ring of Stonehenge".

In another legend of Saxons and Britons, in 472 the invading king Hengist invited Brythonic warriors to a feast, but treacherously ordered his men to draw their weapons from concealment and fall upon the guests, killing 420 of them. Hengist erected the stone monument, Stonehenge, on the site to show his remorse for the deed.

Sixteenth century to present

Stonehenge has changed ownership several times since King Henry VIII acquired Amesbury Abbey and its surrounding lands. In 1540 Henry gave the estate to the Earl of Hertford. It subsequently passed to Lord Carleton and then the Marquess of Queensberry. The Antrobus family of Cheshire bought the estate in 1824. During World War I an aerodrome (Royal Flying Corps "No. 1 School of Aerial Navigation and Bomb Dropping") was built on the downs just to the west of the circle and, in the dry valley at Stonehenge Bottom, a main road junction was built, along with several cottages and a cafe. The Antrobus family sold the site after their last heir was killed in the fighting in France. The auction by Knight Frank & Rutley estate agents in Salisbury was held on 21 September 1915 and included "Lot 15. Stonehenge with about 30 acres, 2 rods, 37 perches [12.44 ha] of adjoining downland." Cecil Chubb bought the site for £6,600 and gave it to the nation three years later. Although it has been speculated that he purchased it at the suggestion of, or even as a present for, his wife, in fact he bought it on a whim, as he believed a local man should be the new owner.
In the late 1920s a nationwide appeal was launched to save Stonehenge from the encroachment of the modern buildings that had begun to rise around it. By 1928 the land around the monument had been purchased with the appeal donations and given to the National Trust to preserve. The buildings were removed (although the roads were not), and the land returned to agriculture. More recently the land has been part of a grassland reversion scheme, returning the surrounding fields to native chalk grassland.

During the twentieth century, Stonehenge began to be revived as a place of religious significance, this time by adherents of Neopagan and New Age beliefs, particularly the Neo-druids. The historian Ronald Hutton would later remark that "it was a great, and potentially uncomfortable, irony that modern Druids had arrived at Stonehenge just as archaeologists were evicting the ancient Druids from it." The first such Neo-druidic group to make use of the megalithic monument was the Ancient Order of Druids, who performed a mass initiation ceremony there in August 1905, in which they admitted 259 new members into their organisation. This assembly was largely ridiculed in the press, who mocked the fact that the Neo-druids were dressed up in costumes consisting of white robes and fake beards.

Between 1972 and 1984, Stonehenge was the site of the Stonehenge Free Festival. After the Battle of the Beanfield in 1985, this use of the site was stopped for several years, and ritual use of Stonehenge is now heavily restricted. Some Druids have erected monuments styled on Stonehenge in other parts of the world.

Setting and access

When Stonehenge was first opened to the public it was possible to walk among and even climb on the stones, but the stones were roped off in 1977 as a result of serious erosion. Visitors are no longer permitted to touch the stones but are able to walk around the monument from a short distance away. English Heritage does, however, permit access during the summer and winter solstices and the spring and autumn equinoxes. Additionally, visitors can make special bookings to access the stones throughout the year.

The access situation and the proximity of the two roads have drawn widespread criticism, highlighted by a 2006 National Geographic survey. In the survey of conditions at 94 leading World Heritage Sites, 400 conservation and tourism experts ranked Stonehenge 75th in the list of destinations, declaring it to be "in moderate trouble". As motorised traffic increased, the setting of the monument began to be affected by the proximity of the two roads on either side: the A344 to Shrewton on the north side, and the A303 to Winterbourne Stoke to the south. Plans to upgrade the A303 and close the A344 to restore the vista from the stones have been considered since the monument became a World Heritage Site. However, the controversy surrounding the expensive re-routing of the roads has led to the scheme being cancelled on multiple occasions. On 6 December 2007, it was announced that extensive plans to build a Stonehenge road tunnel under the landscape and create a permanent visitors' centre had been cancelled. On 13 May 2009, the government gave approval for a £25 million scheme to create a smaller visitors' centre and close the A344, although this was dependent on funding and local authority planning consent.
On 20 January 2010 Wiltshire Council granted planning permission for a centre 2.4 km (1.5 miles) to the west, and English Heritage confirmed that funds to build it would be available, supported by a £10m grant from the Heritage Lottery Fund. On 23 June 2013 the A344 was closed to begin the work of removing the section of road and replacing it with grass. The centre, designed by Denton Corker Marshall, opened to the public on 18 December 2013.

Archaeological research and restoration

Throughout recorded history, Stonehenge and its surrounding monuments have attracted attention from antiquarians and archaeologists. John Aubrey was one of the first to examine the site with a scientific eye, in 1666, and recorded in his plan of the monument the pits that now bear his name. William Stukeley continued Aubrey's work in the early eighteenth century, but took an interest in the surrounding monuments as well, identifying (somewhat incorrectly) the Cursus and the Avenue. He also began the excavation of many of the barrows in the area, and it was his interpretation of the landscape that associated it with the Druids. Stukeley was so fascinated with Druids that he originally named the disc barrows "Druids' Barrows".

The most accurate early plan of Stonehenge was that made by the Bath architect John Wood in 1740. His original annotated survey has recently been computer-redrawn and published. Importantly, Wood's plan was made before the collapse of the southwest trilithon, which fell in 1797 and was restored in 1958.

William Cunnington was the next to tackle the area, in the early nineteenth century. He excavated some 24 barrows before digging in and around the stones and discovered charred wood, animal bones, pottery, and urns. He also identified the hole in which the Slaughter Stone once stood. Richard Colt Hoare supported Cunnington's work and excavated some 379 barrows on Salisbury Plain, including some 200 in the area around the Stones, some excavated in conjunction with William Coxe. To alert future diggers to their work, they were careful to leave initialled metal tokens in each barrow they opened. Cunnington's finds are displayed at the Wiltshire Museum. In 1877 Charles Darwin dabbled in archaeology at the stones, experimenting with the rate at which remains sink into the earth for his book The Formation of Vegetable Mould Through the Action of Worms.

William Gowland oversaw the first major restoration of the monument in 1901, which involved the straightening and concrete setting of sarsen stone number 56, which was in danger of falling. In straightening the stone he moved it about half a metre from its original position. Gowland also took the opportunity to further excavate the monument in what was the most scientific dig to date, revealing more about the erection of the stones than the previous 100 years of work had done. During the 1920 restoration, William Hawley, who had excavated nearby Old Sarum, excavated the bases of six stones and the outer ditch. He also located a bottle of port in the Slaughter Stone socket left by Cunnington, helped to rediscover Aubrey's pits inside the bank, and located the concentric circular holes outside the Sarsen Circle called the Y and Z Holes.

Richard Atkinson, Stuart Piggott, and John F. S. Stone re-excavated much of Hawley's work in the 1940s and 1950s, and discovered the carved axes and daggers on the Sarsen Stones. Atkinson's work was instrumental in furthering the understanding of the three major phases of the monument's construction.
In 1958 the stones were restored again, when three of the standing sarsens were re-erected and set in concrete bases. The last restoration was carried out in 1963 after stone 23 of the Sarsen Circle fell over. It was again re-erected, and the opportunity was taken to concrete three more stones. Later archaeologists, including Christopher Chippindale of the Museum of Archaeology and Anthropology, University of Cambridge, and Brian Edwards of the University of the West of England, campaigned to give the public more knowledge of the various restorations, and in 2004 English Heritage included pictures of the work in progress in its book Stonehenge: A History in Photographs.

In 1966 and 1967, in advance of a new car park being built at the site, the area of land immediately northwest of the stones was excavated by Faith and Lance Vatcher. They discovered the Mesolithic postholes dating from between 7000 and 8000 BC, as well as a 10-metre (33 ft) length of a palisade ditch, a V-cut ditch into which timber posts had been inserted that remained there until they rotted away. Subsequent aerial archaeology suggests that this ditch runs from the west to the north of Stonehenge, near the Avenue.

Excavations were once again carried out in 1978 by Atkinson and John Evans, during which they discovered the remains of the Stonehenge Archer in the outer ditch, and in 1979 rescue archaeology was needed alongside the Heel Stone after a cable-laying ditch was mistakenly dug on the roadside, revealing a new stone hole next to the Heel Stone.

In the early 1980s Julian Richards led the Stonehenge Environs Project, a detailed study of the surrounding landscape. The project was able to successfully date such features as the Lesser Cursus, Coneybury Henge, and several other smaller features.

In 1993 the way that Stonehenge was presented to the public was called 'a national disgrace' by the House of Commons Public Accounts Committee. Part of English Heritage's response to this criticism was to commission research to collate and bring together all the archaeological work conducted at the monument up to that date. This two-year research project resulted in the publication in 1995 of the monograph Stonehenge in its landscape, which was the first publication presenting the complex stratigraphy and the finds recovered from the site. It presented a rephasing of the monument.

More recent excavations include a series of digs held between 2003 and 2008 known as the Stonehenge Riverside Project, led by Mike Parker Pearson. This project mainly investigated other monuments in the landscape and their relationship to the stones, notably Durrington Walls, where another "avenue" leading to the River Avon was discovered. The point where the Stonehenge Avenue meets the river was also excavated and revealed a previously unknown circular area which probably housed four further stones, most likely as a marker for the starting point of the avenue.

In April 2008 Tim Darvill of the University of Bournemouth and Geoff Wainwright of the Society of Antiquaries began another dig inside the stone circle to retrieve dateable fragments of the original bluestone pillars. They were able to date the erection of some bluestones to 2300 BC, although this may not reflect the earliest erection of stones at Stonehenge. They also discovered organic material from 7000 BC, which, along with the Mesolithic postholes, adds support for the site having been in use at least 4,000 years before Stonehenge was started.
In August and September 2008, as part of the Riverside Project, Julian Richards and Mike Pitts excavated Aubrey Hole 7, removing the cremated remains from several Aubrey Holes that had been excavated by Hawley in the 1920s and re-interred in 1935. A licence for the removal of human remains at Stonehenge had been granted by the Ministry of Justice in May 2008, in accordance with the Statement on burial law and archaeology issued in May 2008. One of the conditions of the licence was that the remains should be reinterred within two years and that in the intervening period they should be kept safely, privately, and decently.

A new landscape investigation was conducted in April 2009. A shallow mound, rising to about 40 cm (16 inches), was identified between stones 54 (inner circle) and 10 (outer circle), clearly separated from the natural slope. It has not been dated, but speculation that it represents careless backfilling following earlier excavations seems disproved by its representation in eighteenth- and nineteenth-century illustrations. Indeed, there is some evidence that, as an uncommon geological feature, it could have been deliberately incorporated into the monument at the outset. A circular, shallow bank, little more than 10 cm (4 inches) high, was found between the Y and Z hole circles, with a further bank lying inside the "Z" circle. These are interpreted as the spread of spoil from the original Y and Z holes, or more speculatively as hedge banks from vegetation deliberately planted to screen the activities within.

On 26 November 2011, archaeologists from the University of Birmingham announced the discovery of evidence of two huge pits positioned within the Stonehenge Cursus pathway, aligned towards midsummer sunrise and sunset when viewed from the Heel Stone. The discovery was made as part of the Stonehenge Hidden Landscape Project, which began in the summer of 2010 and uses non-invasive geophysical imaging techniques to reveal and visually recreate the landscape. According to team leader Vince Gaffney, this discovery may provide a direct link between the rituals and astronomical events and activities within the Cursus at Stonehenge.

On 18 December 2011, geologists from the University of Leicester and the National Museum of Wales announced the discovery of the exact source of some of the rhyolite fragments found in the Stonehenge debitage. These fragments do not seem to match any of the standing stones or bluestone stumps. The researchers have identified the source as a 70-metre (230 ft) long rock outcrop called Craig Rhos-y-Felin, near Pont Saeson in north Pembrokeshire, located 220 kilometres (140 mi) from Stonehenge.

On 10 September 2014 the University of Birmingham announced findings including evidence of adjacent stone and wooden structures and burial mounds, overlooked previously, that may date as far back as 4000 BC. An area extending to 12 square kilometres (1,200 ha) was studied to a depth of three metres with ground-penetrating radar equipment. As many as seventeen new monuments, revealed nearby, may be Late Neolithic monuments that resemble Stonehenge. The interpretation suggests a complex of numerous related monuments. Also included in the discovery is that the Cursus track is terminated by two extremely deep pits, each five metres wide, whose purpose is still a mystery.
"Dig pinpoints Stonehenge origins". BBC. Retrieved 22 September 2008. - Kennedy, Maev (9 March 2013). "Stonehenge may have been burial site for Stone Age elite, say archaeologists". The Guardian (London: © 2013 Guardian News and Media Limited). Retrieved 11 March 2013. - Legge, James (9 March 2012). "Stonehenge: new study suggests landmark started life as a graveyard for the 'prehistoric elite'". The Independent (London). Retrieved 11 March 2013. - "Stonehenge builders travelled from far, say researchers". BBC News. 9 March 2013. Retrieved 11 March 2013. - "How did Stonehenge come into the care of English Heritage?". FAQs on Stonehenge. English Heritage. Archived from the original on 12 December 2007. Retrieved 17 December 2007. - "Ancient ceremonial landscape of great archaeological and wildlife interest". Stonehenge Landscape. National Trust. Retrieved 17 December 2007. - Pitts, Mike (8 August 2008). "Stonehenge: one of our largest excavations draws to a close". British Archaeology (York, England: Council for British Archaeology) (102): p13. ISSN 1357-4442. - Schmid, Randolph E. (29 May 2008). "Study: Stonehenge was a burial site for centuries". Associated Press. Archived from the original on 1 June 2008. Retrieved 29 May 2008. - "Stonehenge; henge2". Oxford English Dictionary (2 ed.). Oxford, England: Oxford University Press. 1989. - See the English Heritage definition. - Anon. "Stonehenge : Wiltshire England What is it?". Megalithic Europe. The Bradshaw Foundation. Archived from the original on 30 May 2009. Retrieved 6 November 2009. - Alexander, Caroline. "If the Stones Could Speak: Searching for the Meaning of Stonehenge". National Geographic Magazine. National Geographic Society. Retrieved 6 November 2009. - Siciliano, Leon et al. (10 September 2014). "Technology unearths 17 new monuments at Stonehenge". telegraph.co.uk. Retrieved 20 May 2015. - Sarah Knapton (19 December 2014). "Stonehenge discovery could rewrite British pre-history". Daily Telegraph. Retrieved 19 December 2014. - "The New Discoveries at Blick Mead: the Key to the Stonehenge Landscape". University of Buckingham. Retrieved 26 December 2014. - Field, David et al. (March 2010). "Introducing Stonehedge". British Archaeology (York, England: Council for British Archaeology) (111): 32–35. ISSN 1357-4442. - Parker Pearson, Mike; Richards, Julian; Pitts, Mike (9 October 2008). "Stonehenge 'older than believed'". BBC News. Retrieved 14 October 2008. - Mike Parker Pearson (20 August 2008). "The Stonehenge Riverside Project". Sheffield University. Archived from the original on 26 October 2008. Retrieved 22 September 2008. - John, Brian (26 February 2011). "Stonehenge: glacial transport of bluestones now confirmed?" (PDF) (Press release). University of Leicester. Retrieved 22 June 2012. - Banton, Simon; Bowden, Mark; Daw, Tim; Grady, Damian; Soutar, Sharon (July 2013). "Patchmarks at Stonehenge". Antiquity 88 (341): 733–739. - Pearson, Mike; Cleal, Ros; Marshall, Peter; Needham, Stuart; Pollard, Josh; Richards, Colin; Ruggles, Clive; Sheridan, Alison; Thomas, Julian; Tilley, Chris; Welham, Kate; Chamberlain, Andrew; Chenery, Carolyn; Evans, Jane; Knüsel, Chris (September 2007). "The Age of Stonehenge". Antiquity 811 (313): 617–639. - M. Parker Pearson. Bronze Age Britain. 2005. p63-67. 
- "End in sight after ‘decades of dithering’ as Government steps in to help secure future for Stonehenge" (Press release). Department of Culture, Media and Sport. 4 April 2011. Retrieved 5 April 2011. - "Stonehenge Visitor Centre by Denton Corker Marshall opens tomorrow". dezeen. 17 December 2013. Retrieved 18 December 2013. - Stukeley, William, 1740, Stonehenge A Temple Restor'd to the British Druids. London - Wood, John, 1747, Choir Guare, Vulgarly called Stonehenge, on Salisbury Plain. Oxford - Johnson, Anthony, Solving Stonehenge: The New Key to an Ancient Enigma. (Thames & Hudson, 2008) ISBN 978-0-500-05155-9 - Cleal, Rosamund et al. (1995). "Y and Z holes". Archaeometry and Stonehenge. English Heritage. Archived from the original on 28 February 2009. Retrieved 4 April 2008. - Young, Emma. "Concrete Evidence". New Scientist (9 January 2001). Retrieved 3 March 2008. - Taverner, Roger (8 January 2001). "How they rebuilt Stonehenge". Western Daily Press, quoted in Cosmic Conspiracies: How they rebuilt Stonehenge. Retrieved 3 March 2008. - Richards, Julian C. (2004). Stonehenge: A History in Photographs. London: English Heritage. ISBN 1-85074-895-0. - "Stonehenge execution revealed". BBC News. 9 June 2000. Retrieved 4 April 2008. - Whittle, Alasdair (1996). "Eternal stones: Stonehenge completed". Antiquity (70): 463–465. - Anon (29 September 2009). "StonehengeBones – epetition response". The prime minister's office epetitions. Crown copyright:Ministry of Justice. Archived from the original on 2 October 2009. Retrieved 6 November 2009. - Anon (April 2008). "Statement on burial law and archaeology" (PDF). Review of Burial Legislation. Crown copyright:Ministry of Justice. Retrieved 6 November 2009.[dead link] - "A new ‘henge’ discovered at Stonehenge". University of Birmingham. 22 July 2010. Retrieved 22 July 2010. - Boyle, Alan, Pits Add to Stonehgenge Mystery, msnbc.com Cosmic Log, 28 November 2011 - Discoveries Provide Evidence of a Celestial Procession at Stonehenge, University of Birmingham Press Release, 26 November 2011 - Birmingham Archaeologists Turn Back Clock at Stonehenge, University of Birmingham Press Release, 5 July 2010 - Keys, David (18 December 2011). "Scientists discover source of rock used in Stonehenge's first circle". The Independent (London). Retrieved 20 December 2011. - "New Discovery in Stonehenge Bluestone Mystery". National Museum of Wales. - "What Lies Beneath Stonehenge?". Smithsonianmag.com. Retrieved 20 October 2014. - Stonehenge on YouTube, 13 December 2013 - Atkinson, R J C, Stonehenge (Penguin Books, 1956) - Bender, B, Stonehenge: Making Space (Berg Publishers, 1998) - Burl, A, Great Stone Circles (Yale University Press, 1999) - Aubrey Burl, Prehistoric Stone Circles (Shire, 2001) (In Burl's Stonehenge (Constable, 2006), he notes, cf. the meaning of the name in paragraph two above, that "the Saxons called the ring 'the hanging stones', as though they were gibbets.") - Chippindale, C, Stonehenge Complete (Thames and Hudson, London, 2004) ISBN 0-500-28467-9 - Chippindale, C, et al., Who owns Stonehenge? (B T Batsford Ltd, 1990) - Cleal, R. M. J., Walker, K. E. & Montague, R., Stonehenge in its landscape (English Heritage, London, 1995) - Cunliffe, B, & Renfrew, C, Science and Stonehenge (The British Academy 92, Oxford University Press, 1997) - Godsell, Andrew "Stonehenge: Older Than the Centuries" in "Legends of British History" (2008) - Hall, R, Leather, K, & Dobson, G, Stonehenge Aotearoa (Awa Press, 2005) - Hawley, Lt-Col W, The Excavations at Stonehenge. 
Ability-to-pay taxation is a fundamental concept in tax policy that aims to promote fairness and equality in the distribution of tax burdens. In this article, we will explore the key principles, advantages, and practical examples of ability-to-pay taxation. Additionally, we will delve into the ongoing debate between ability-to-pay tax and flat tax systems, as well as the potential impact of tax reforms. Let’s begin our journey into the world of ability-to-pay taxation. What is Ability-to-Pay Taxation? Ability-to-pay taxation refers to a progressive tax system that takes into account an individual’s or entity’s ability to pay taxes based on their income or wealth. Unlike flat tax systems, ability-to-pay taxation ensures that those with higher incomes contribute a larger proportion of their earnings, while those with lower incomes are burdened with a lesser tax obligation. Key Principles of Ability-to-Pay Taxation The ability-to-pay tax system is guided by several principles. Firstly, it emphasizes vertical equity, meaning that individuals with greater financial resources should bear a higher tax burden. Secondly, it upholds the concept of horizontal equity, implying that taxpayers in similar economic situations should be treated equally. Lastly, the system promotes progressivity, ensuring that tax rates increase as income or wealth rises. Advantages of Ability-to-Pay Tax System Ability-to-pay taxation offers various advantages. It helps reduce income inequality by redistributing wealth from the affluent to the less privileged. Additionally, it allows governments to generate revenue efficiently, enabling them to fund public services and infrastructure. The system also enjoys public support as it aligns with the principles of fairness and social justice. Progressive Taxation and Ability-to-Pay Concept Progressive taxation is a crucial aspect of ability-to-pay taxation. By imposing higher tax rates on higher income brackets, progressive taxation ensures a more equitable distribution of the tax burden. This approach takes into account the principle that individuals with greater financial means can contribute more to society. Ability-to-Pay Tax Fairness Explained One of the primary objectives of ability-to-pay taxation is to achieve tax fairness. This fairness is reflected in the progressive nature of the tax system, which considers an individual’s or entity’s ability to contribute based on their income or wealth. This approach recognizes that taxing the wealthy at higher rates helps bridge the income gap and supports social welfare programs. Understanding Ability-to-Pay Tax Policy Ability-to-pay tax policy involves the design and implementation of a progressive tax system. It requires careful consideration of tax brackets, marginal tax rates, and exemptions to ensure that the burden is distributed fairly. Governments must strike a balance between revenue generation and maintaining economic incentives for individuals and businesses. Examples of Ability-to-Pay Taxation in Practice Several countries have implemented ability-to-pay taxation successfully. The United States, for instance, employs a progressive income tax system where higher-income individuals face higher tax rates. Similarly, many European countries have adopted progressive tax policies to promote income equality and social welfare. How Ability-to-Pay Taxation Promotes Income Equality Ability-to-pay taxation plays a vital role in reducing income inequality. 
By ensuring that those who can afford to contribute more do so, it helps create a more equitable society. The additional tax revenue can be used to fund social programs, education, healthcare, and infrastructure, thereby offering opportunities to those in lower-income brackets. Debate on Ability-to-Pay Tax versus Flat Tax The debate between ability-to-pay taxation and flat tax revolves around the best approach to achieve tax fairness. Proponents of ability-to-pay taxation argue that it considers income disparities and redistributes wealth accordingly. Conversely, proponents of flat tax systems argue for simplicity and a consistent tax rate for all, promoting economic growth and reducing administrative costs. Ability-to-Pay Tax Reforms and Their Impact Tax reforms that aim to enhance ability-to-pay taxation can have far-reaching consequences. These reforms may involve adjusting tax brackets, altering marginal tax rates, or introducing new exemptions. Such changes can influence income distribution, economic incentives, and overall tax revenue, making it imperative for policymakers to carefully analyze the potential impact. Conclusion: Ability-to-pay taxation is a critical tool in achieving fairness and equality in tax systems. By implementing a progressive tax structure, governments can address income inequality, fund public services, and ensure that those with higher incomes contribute their fair share. Understanding the principles, advantages, and practical implications of ability-to-pay taxation helps foster informed discussions and facilitates the creation of effective tax policies for the betterment of society.
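To make the bracket arithmetic described above concrete, here is a minimal sketch of a marginal-rate calculation in Python. The brackets and rates are invented for illustration only; they do not correspond to any real tax code.

```python
# Minimal sketch of a progressive (ability-to-pay) tax calculation.
# The brackets and rates below are hypothetical, purely for illustration.
BRACKETS = [
    (10_000, 0.10),        # first $10,000 taxed at 10%
    (40_000, 0.20),        # next $30,000 (up to $40,000) taxed at 20%
    (float("inf"), 0.35),  # everything above $40,000 taxed at 35%
]

def tax_owed(income: float) -> float:
    """Apply each marginal rate only to the slice of income inside its bracket."""
    owed, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income <= lower:
            break
        owed += (min(income, upper) - lower) * rate
        lower = upper
    return owed

# A higher earner pays a larger *proportion* of income, not just a larger amount:
for income in (20_000, 80_000):
    print(income, tax_owed(income), tax_owed(income) / income)
```

For these two hypothetical incomes the effective rates come out to 15% and about 26%: vertical equity in miniature, since the same schedule yields a rising effective rate as income rises.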
Mathematical inquiry processes: Extend patterns; generate examples; find relationships; generalise. Conceptual field of inquiry: The coordinate plane; gradients of parallel and perpendicular lines; coordinates and polygons. When students in lower secondary school see the prompt, they often recognise four coordinates. The prompt is designed to tackle misconceptions about coordinates, particularly those with negative values and those on the x and y axes. During the orientation phase of the inquiry, the teacher should ensure that students can plot coordinates in all four quadrants. After the class has attempted to plot the four coordinates on an x-y axis, the teacher could call on individuals to plot them on the board. If students have not already identified the points as vertices of a quadrilateral, the teacher draws lines between the coordinates. A discussion (which, when necessary, the teacher initiates and orchestrates) ensues about the properties of the shape: Is the shape a square? Are the angles right angles? Are the sides equal in length? Are the pairs of sides parallel? Students (individually or in pairs) share their reasoning, often describing how a square has been 'tilted'. The teacher can introduce the idea of right-angled triangles on the sides of the square (see illustration below) to facilitate students' thinking. Once the class has established that the shape is a square, the teacher might structure the inquiry by suggesting students follow one of the lines of inquiry below or might guide students by inviting them to choose from some or all of the lines. Alternatively, the teacher might decide to run a more open inquiry by asking for students' suggestions and, depending on the mathematical validity of the suggestions, allow individuals to pursue their own ideas. If the teacher opts for a guided or open inquiry, the regulatory cards help to ensure that each student is aware of the direction in which their inquiry is going. Lines of inquiry 1. Plot other shapes Students draw triangles, other quadrilaterals and polygons with five sides and more. The teacher could set the constraint that the origin has to be inside the shape, thereby ensuring students practise finding coordinates in all four quadrants. 2. Extend the square Students extend the pattern by drawing more squares. They use a vector instruction to move each point in a different direction: right 4, up 2 (translates each point to the right); right 2, down 4 (down); left 4, down 2 (left); left 2, up 4 (up). 3. Gradients of parallel and perpendicular lines Students calculate the gradients of the four lines using right-angled triangles. They establish that the diagram shows two pairs of parallel lines because the lines in each pair have the same gradient. They notice (with the support of the teacher if required) that the gradient of a line perpendicular to another line is the negative reciprocal of the gradient of that other line. 4. Find the relationship between x and y for each line The coordinates of the ends of the line segment at the top of the square are (-4,2) and (0,4). If the line is extended rightwards, the coordinates continue (4,6), (8,8), (12,10), (16,12) and so on. The x-coordinate increases by four each time and the y-coordinate by two. To find the y-coordinate for each value of x, halve the x-coordinate and add four. This could be presented in the general form as (n, n/2 + 4) or as a number machine or an equation (see below left). Use the coordinates in the prompt to form a generalisation (see above right).
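As a worked check on lines of inquiry 3 and 4 (a sketch of the standard argument, using only the two vertices quoted above, not taken from the original prompt page):

```latex
\[
m_{\text{top}} = \frac{4 - 2}{0 - (-4)} = \frac{1}{2}
\qquad\Longrightarrow\qquad
y = \tfrac{1}{2}x + 4, \quad \text{general point } (n,\ \tfrac{1}{2}n + 4)
\]
\[
m_{\perp} = -\frac{1}{m_{\text{top}}} = -2,
\qquad \text{since perpendicular gradients satisfy } m_{\text{top}} \cdot m_{\perp} = -1 .
\]
```

Substituting n = -4, 0, 4, 8 into (n, n/2 + 4) reproduces the coordinates listed above.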
If you substitute a different value into the general form, do you still create a square? Create a different general form for four coordinates. Can you find four coordinates that create different shapes when you substitute different values into the terms? This inquiry arose from the need to have a second inquiry lesson on straight line graphs, having already used the inquiry prompt y - x = 4 during the previous ‘FIG Friday’ lesson. As we’re now following a mastery style scheme of work, we’re still on the same topic two weeks later, which feels like a really good thing as my year 9 class seem to need the time to be able to make the meaningful connections in this rich topic. Here are their questions and comments: At a basic level, two groups just noticed which coordinates were positive and negative – these groups, with a little more thinking time, could possibly have gone on to ask the question, "Would the x coordinate always be positive and the y coordinate negative?" The context within which this prompt was used meant that pupils were already familiar with the terminology associated with straight line graphs and, therefore, pupils were eager to apply this knowledge. The most common questions referred to finding the y-intercept and gradient of the line that joins the coordinates – definitely a task which stretched their ability. One group wanted to add more coordinates to those two to make a shape. This has the potential to be very easy or very challenging indeed. I was surprised when 4 out of the 8 groups in my class chose this question for their inquiry, so I encouraged them to work out the gradients of the line segments which they used for their shape in order to ensure they were still doing maths that would challenge them. (This pathway offers the potential to lead to parallel and perpendicular lines if rectangles were the shape of choice.) Another group wanted to draw a circle and did attempt this, although in hindsight I would have guided them more to join the coordinates first and use them as a diameter, so that they might get more out of it mathematically. The groups that got on well with the task plotted the coordinates on a graph and wrote down the coordinates in a table to look for the sequence in the y-coordinates. At the time of devising the prompt, Caitriona was second-in-charge of the mathematics department at St. Andrew's School, Leatherhead (UK). She introduced 'FIG Fridays' to promote functional and Inquiry Maths, as well as groupwork.
Sea levels reflect the state of the climate system. During ice ages a large volume of water is stored on land in the form of ice sheets and glaciers, leading to lower sea levels, while during warm interglacial periods, glaciers and ice sheets are reduced and more water is stored in the oceans. The following provides a summary of changes in global mean sea levels: Global Mean Sea Level (GMSL) – 1880 to the end of 2014 High quality measurements of (near)-global sea level have been made since late 1992 by satellite altimeters, in particular TOPEX/Poseidon (launched August 1992), Jason-1 (launched December 2001) and Jason-2 (launched June 2008). These data show a more-or-less steady increase in Global Mean Sea Level (GMSL) of around 3.2 ± 0.4 mm/year over that period. This is more than 50% larger than the average value over the 20th century. Whether or not this represents a further increase in the rate of sea level rise is not yet certain. The two plots below show the GMSL measured from TOPEX/Poseidon, Jason-1 and Jason-2, and soon Jason-3. This one shows it with the seasonal signal removed: (get the data). And this shows it with the seasonal signal left in: (get the data). There are a number of changes of slope over short periods in the GMSL record. This variability is at least partly related to El Niño and La Niña (sea level rises during El Niño and falls during La Niña) and associated changes in the hydrological cycle. The above graph shows detrended GMSL (from the top graph) versus the Southern Oscillation Index (SOI), which is one of the common indices of the El Niño/La Niña cycle. Clearly sea level is higher during an El Niño event (SOI -ve) (see for example the years 1997/1998) and lower during La Niña (SOI +ve) (for example, the years 2010/2011). SOI data is from the Australian Bureau of Meteorology. Data and graphs can be viewed and downloaded from the Bureau of Meteorology's web site. Sea level does not rise (or fall) uniformly over the oceans. This is illustrated by the map (below) showing sea-level trends from 1993 to 2017. There is a clear pattern of sea-level change that is also reflected in patterns of ocean heat storage. This pattern reflects interannual climate variability associated with the El Niño/La Niña cycle and the Indian Ocean Dipole, but also longer term changes such as the increase in sea levels in the Western Tropical Pacific due to changes in the Trade Winds. During El Niño years sea level rises in the eastern Pacific and falls in the western Pacific, whereas in La Niña years the opposite is true. Click on the map below to see a movie of monthly-mean sea-surface height from January 1993 to December 2015 with the seasonal signal removed. The plot at the top of the page shows the time series of the means of these fields. The data displayed here can be downloaded from the "Sea Level Data" page on this site. Note the 1997/98 and the recent 2015 El Niño events! Click on the map below to see a movie of monthly-mean sea-surface height from January 1993 to December 2015. The seasonal signal has not been removed from this, so you should see the pumping as the water in each hemisphere warms and expands in spring and summer and cools and shrinks in autumn and winter. The second plot (above) shows the time series of the means of these fields. The data displayed here can be downloaded from the "Sea Level Data" page on this site. Note especially the 1997/98 and the recent 2015 El Niño events!
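To illustrate how a rate such as 3.2 mm/year is extracted from an altimeter time series, here is a minimal sketch using an ordinary least-squares straight-line fit. The series below is synthetic stand-in data, not the real GMSL record (which can be downloaded from the pages mentioned above).

```python
# Minimal sketch: estimating a GMSL trend (mm/yr) by least squares.
# The series below is synthetic stand-in data, not the real altimeter record.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1993, 2018, 1 / 12)                    # monthly time axis, 1993-2017
gmsl = 3.2 * (t - t[0]) + rng.normal(0, 4, t.size)   # ~3.2 mm/yr trend plus noise

slope, intercept = np.polyfit(t, gmsl, 1)            # straight-line fit
print(f"fitted trend: {slope:.2f} mm/yr")

detrended = gmsl - (slope * t + intercept)           # residuals, e.g. to compare against SOI
```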
In mathematics, an average is a measure of the "middle" or "typical" value of a data set. It is thus a measure of central tendency. In the most common case, the data set is a list of numbers. The average of a list of numbers is a single number intended to typify the numbers in the list. If all the numbers in the list are the same, then this number should be used. If the numbers are not the same, the average is calculated by combining the numbers from the list in a specific way and computing a single number as being the average of the list. Many different descriptive statistics can be chosen as a measure of the central tendency of the data items. These include the arithmetic mean, the median, and the mode. Other statistics, such as the standard deviation and the range, are called measures of spread and describe how spread out the data is. The most common statistic is the arithmetic mean, but depending on the nature of the data other types of central tendency may be more appropriate. For example, the median is used most often when the distribution of the values is skewed with a small number of very high or low values, as seen with house prices or incomes. It is also used when extreme values are likely to be anomalous or less reliable than the other values (e.g. as a result of measurement error), because the median takes less account of extreme values than the mean does.
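As a quick illustration of the three measures named above, here is a minimal sketch using Python's standard statistics module; the data list is an arbitrary example, not taken from this article.

```python
# Computing the three common measures of central tendency.
import statistics

data = [5, 3, 8, 3, 9, 1, 7]

print(statistics.mean(data))    # arithmetic mean: sum / count = 36/7 ~ 5.14
print(statistics.median(data))  # middle value of the sorted list = 5
print(statistics.mode(data))    # most frequent value = 3
```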
Indeed, Microsoft Excel is a highly effective program. We can execute continuous operations on a given dataset using Excel's tools and capabilities. Excel also offers an abundance of handy library functions. This article explains how Excel's DELTA function operates, and we will examine three practical examples to gain a better understanding of how to use it. Download Practice Workbook Obtain a free copy of the example workbook used in this article. Introduction to DELTA Function in Excel DELTA is a mathematical function in Excel that compares two numbers to see whether they are equal. DELTA returns 1 when the numbers are equal; otherwise, it returns 0. |Argument||Required/Optional||Description| |number1||Required||The first number.| |[number2]||Optional||The second number. If omitted, [number2] is assumed to be 0.| All versions of Microsoft Excel have the DELTA function. 3 Practical Examples of Using DELTA Function in Excel As an example, we shall investigate a sample dataset. The following dataset has four columns: First Value, Second Value, Result, and Conclusion. We will examine each practical case using the DELTA function in this post. Note that this article was prepared using Microsoft Excel 365; you can use whichever version best suits your needs. 1. Find Similar Values Between Two Columns Through DELTA Function The first example of the DELTA function covered in this post is finding similar values in two columns. Here, the DELTA function compares the columns First Value and Second Value. The function returns 1 in the Result column if two values are identical; if not, DELTA returns 0. To make the result easier to read, we add a remark using the IF function. Please follow these instructions attentively to complete the work. - First, select cell D5. - Second, insert the DELTA formula for this example in D5 (see the note on formulas at the end of this article). - Later, hit the Tab key or Enter key. - Subsequently, it provides the desired outcome as below. - Apply the same method to other cells as was performed in cell D5. - To do this, choose the Fill Handle icon. - Importantly, hold and drag the Fill Handle icon to cell D10. - Consequently, the required output will be returned, as seen below. - We find a #VALUE! error in cell D7 because one of our values is non-numeric: in this case, A in cell B7. - At this point, choose cell E5. - Then, write the IF formula for this example in cell E5 (see the note on formulas at the end of this article). - Now, hit Enter to see the intended outcome. - As previously, utilize the Fill Handle icon. - Importantly, hold the icon and drag it to cell E10. - Finally, we will find our desired output below. - Due to the D7 cell, another #VALUE! error will occur in E7. 2. Insert DELTA Function to Compare Column Values with 0 The second DELTA function demonstration in this tutorial compares cell values with 0. Here, the DELTA function compares the Second Value column with zero. In this situation, only the required parameter of the DELTA function is provided. If the required parameter is present but the optional parameter is missing, DELTA assumes that the optional parameter is equal to 0. Using the IF function, we again add a remark for clarity. Therefore, carefully follow these instructions to complete the work. - First of all, select cell D5.
- Later, insert the formula for this example into cell D5 (see the note on formulas at the end of this article). - Then, press Tab or Enter. - Thus, it yields the intended effect, as seen below. - Next, apply the same procedure to other cells as was done with cell D5. - Now, use the Fill Handle icon to do this. - Importantly, hold and drag it to cell D10. - As a result, the needed output will be returned, as seen below. - At this time, choose cell E5. - Enter the IF formula for this example in cell E5. - Now, press Enter to see the desired result. - As before, utilize the Fill Handle icon. - Hold and drag the icon to cell E10. - At last, it will show the required result below. 3. Utilize DELTA Function to Determine Number Format in Excel In this part of the study of the DELTA function, we will examine another practical and appealing case: we can identify whether the value of a cell is a number. Here, we will use the DELTA function to determine if the First Value column has numeric values. We display a message in the Conclusion column using the TYPE function. So, follow these steps attentively to complete the work. - To begin, select cell D5. - Second, input the formula for this example into cell D5. - At this moment, hit Tab or Enter to proceed. - Consequently, it has the desired effect, as seen below. - Afterward, repeat the same method for other cells as was done with cell D5. - Presently, use the Fill Handle icon to do this. - Importantly, hold the Fill Handle icon and drag it to cell D10. - The relevant output will be returned, as seen below. - At this stage, choose cell E5. - Then, type the formula below into cell E5. =IF(TYPE(D5)=1,"Number Type","Not Number Type") - Now, press Enter to see the result. - After that, use the Fill Handle icon as you did before. - Importantly, hold the icon down and drag it to cell E10. - Last but not least, the output we want is shown below. Common Errors While Using Excel DELTA Function |Common Error||When It Appears| |#VALUE!||When number1 or [number2] does not have a numeric value, a #VALUE! error appears.| After learning about the DELTA function and seeing how it works in the examples we discussed, you can now use it in Excel. There are many articles like this on the ExcelDemy website. Keep using it, and let us know if you think of other ways to get the work done or if you have any new ideas. Remember to leave questions, comments, or suggestions in the section below.
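A note on formulas: the original article showed a specific formula at each step above, but those formulas were lost from this copy. The function's syntax is =DELTA(number1, [number2]). Plausible reconstructions, given the column layout described (First Value in column B, Second Value in column C, Result in D, Conclusion in E), would be `=DELTA(B5,C5)` in D5 and an IF wrapper such as `=IF(DELTA(B5,C5)=1,"Similar","Not Similar")` in E5 for Example 1; `=DELTA(C5)` in D5 for Example 2, since omitting the optional argument compares the value against 0; and `=DELTA(B5)` in D5 for Example 3, which returns a number for numeric input and a #VALUE! error for text, so that the given `=IF(TYPE(D5)=1,...)` formula can label each row. These are hypothetical reconstructions consistent with the surrounding text, not the author's original formulas.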
1. A cylindrical container must be constructed to contain 250 cubic inches of liquid.
a. Express the entire surface area of the container as a function of its radius r and height h. Don't forget the top and bottom.
b. Express the volume as a function of r and h, and set this expression equal to 250.
c. Use your work from part b to solve for h in terms of r. Then substitute this expression for h into your area expression from part a.
d. Using a graphing technique, find the value of r that makes the surface area a minimum.
e. What dimensions of the container should the manufacturer use if his goal is to minimize the amount of material used in its manufacture?
2. A rectangular box with a square base of length s and height y is to have a volume of 16 cubic feet. The cost of top and bottom material for the box is 25 cents per square foot, and the cost for the sides is 10 cents per square foot.
a. Find an expression for the volume of the box in terms of s and y, and set this equal to 16.
b. Find an expression for the cost of the material used to make the box in terms of s and y.
c. Use your work from part a to express the cost of the box in terms of s only.
d. Find the dimensions of the box that will minimize the cost of materials used to make it.
3. A box without a lid is formed by taking a piece of cardboard that is 40 inches by 20 inches, cutting out square pieces from the four corners, and then bending up the sides to form a box.
a. Find an expression for the volume of the box in terms of the side length x of the cut-out squares.
b. Find the value of x that yields maximum volume.
4. Consider the function . Use the graph of this function to answer the questions below.
a. What is the domain of this function?
b. What is its range?
c. On which interval(s) is the function increasing?
d. On which interval(s) is it decreasing?
e. Does the function have a horizontal asymptote? If so, what is it?
f. Identify any vertical asymptotes.
g. Does this function have an inverse on the interval (-2, 1)?
h. Does it have an inverse on (1, ∞)?
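As a sketch of how problem 1 can be checked symbolically (assuming SymPy is available; the closed-form answer is my own working, not an answer key from the source):

```python
# Sketch of problem 1: minimise the surface area of a 250 in^3 cylinder.
import sympy as sp

r = sp.symbols('r', positive=True)
h = 250 / (sp.pi * r**2)                           # parts b-c: pi*r^2*h = 250 solved for h
A = 2*sp.pi*r**2 + 2*sp.pi*r*h                     # part a: area including top and bottom
r_star = sp.solve(sp.Eq(sp.diff(A, r), 0), r)[0]   # parts d-e: critical radius

print(r_star, float(r_star))                       # 5/pi**(1/3), about 3.41 inches
print(float(h.subs(r, r_star)))                    # about 6.83 inches, i.e. h = 2r
```

The optimum has h = 2r: the most economical closed can is exactly as tall as it is wide.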
“Statistical significance is the least interesting thing about the results. You should describe the results in terms of measures of magnitude – not just, does a treatment affect people, but how much does it affect them.” – Gene V. Glass In statistics, we often use p-values to determine if there is a statistically significant difference between two groups. For example, suppose we want to know if two different studying techniques lead to different test scores. So, we have one group of 20 students use one studying technique to prepare for a test while another group of 20 students uses a different studying technique. We then have each student take the same test. After running a two-sample t-test for a difference in means, we find that the p-value of the test is 0.001. If we use a 0.05 significance level, then this means there is a statistically significant difference between the mean test scores of the two groups. Thus, studying technique has an impact on test scores. However, while the p-value tells us that studying technique has an impact on test scores, it doesn't tell us the size of the impact. To understand this, we need to know the effect size. What is Effect Size? An effect size is a way to quantify the difference between two groups. While a p-value can tell us whether or not there is a statistically significant difference between two groups, an effect size can tell us how large this difference actually is. In practice, effect sizes are much more interesting and useful to know than p-values. There are three ways to measure effect size, depending on the type of analysis you're doing: 1. Standardized Mean Difference When you're interested in studying the mean difference between two groups, the appropriate way to calculate the effect size is through a standardized mean difference. The most popular formula to use is known as Cohen's d, which is calculated as: Cohen's d = (x̄1 – x̄2) / s where x̄1 and x̄2 are the sample means of group 1 and group 2, respectively, and s is the standard deviation of the population from which the two groups were taken. Using this formula, the effect size is easy to interpret: - A d of 1 indicates that the two group means differ by one standard deviation. - A d of 2 means that the group means differ by two standard deviations. - A d of 2.5 indicates that the two means differ by 2.5 standard deviations, and so on. Another way to interpret the effect size is as follows: An effect size of 0.3 means the score of the average person in group 2 is 0.3 standard deviations above the average person in group 1 and thus exceeds the scores of 62% of those in group 1. The following table shows various effect sizes and their corresponding percentiles: |Effect Size||Percentage of Group 2 who would be below average person in Group 1| The larger the effect size, the larger the difference between the average individual in each group. In general, a d of 0.2 or smaller is considered to be a small effect size, a d of around 0.5 is considered to be a medium effect size, and a d of 0.8 or larger is considered to be a large effect size. Thus, if the means of two groups don't differ by at least 0.2 standard deviations, the difference is trivial, even if the p-value is statistically significant. 2. Correlation Coefficient When you're interested in studying the quantitative relationship between two variables, the most popular way to calculate the effect size is through the Pearson Correlation Coefficient. This is a measure of the linear association between two variables X and Y.
It has a value between -1 and 1 where: - -1 indicates a perfectly negative linear correlation between two variables - 0 indicates no linear correlation between two variables - 1 indicates a perfectly positive linear correlation between two variables The formula to calculate the Pearson Correlation Coefficient is quite complex, but it can be found here for those who are interested. The further away the correlation coefficient is from zero, the stronger the linear relationship between two variables. This can also be seen by creating a simple scatterplot of the values for variables X and Y. For example, the following scatterplot shows the values of two variables that have a correlation coefficient of r = 0.94. This value is far from zero, which indicates that there is a strong positive relationship between the two variables. Conversely, the following scatterplot shows the values of two variables that have a correlation coefficient of r = 0.03. This value is close to zero, which indicates that there is virtually no relationship between the two variables. In general, the effect size is considered to be low if the value of the Pearson Correlation Coefficient r is around 0.1, medium if r is around 0.3, and large if r is 0.5 or greater. 3. Odds Ratio When you're interested in studying the odds of success in a treatment group relative to the odds of success in a control group, the most popular way to calculate the effect size is through the odds ratio. For example, suppose we have the following table: | ||# Successes||# Failures| |Treatment Group||A||B| |Control Group||C||D| The odds ratio would be calculated as: Odds ratio = (AD) / (BC) The further away the odds ratio is from 1, the higher the likelihood that the treatment has an actual effect. The Advantages of Using Effect Sizes Over P-Values Effect sizes have several advantages over p-values: 1. An effect size helps us get a better idea of how large the difference is between two groups or how strong the association is between two groups. A p-value can only tell us whether or not there is some significant difference or some significant association. 2. Unlike p-values, effect sizes can be used to quantitatively compare the results of different studies done in different settings. For this reason, effect sizes are often used in meta-analyses. 3. P-values can be affected by large sample sizes. The larger the sample size, the greater the statistical power of a hypothesis test, which enables it to detect even small effects. This can lead to low p-values, despite small effect sizes that may have no practical significance. A simple example can make this clear: Suppose we want to know whether two studying techniques lead to different test scores. We have one group of 20 students use one studying technique while another group of 20 students uses a different studying technique. We then have each student take the same test. The mean score for group 1 is 90.65 and the mean score for group 2 is 90.75. The standard deviation for sample 1 is 2.77 and the standard deviation for sample 2 is 2.78. When we perform an independent two-sample t test, it turns out that the test statistic is -0.113 and the corresponding p-value is 0.91. The difference between the mean test scores is not statistically significant. However, consider if the sample sizes of the two samples were both 6,000, yet the means and the standard deviations remained the exact same. In this case, an independent two-sample t test would reveal that the test statistic is -1.97 and the corresponding p-value is just under 0.05.
The difference between the mean test scores is statistically significant. The underlying reason that large sample sizes can lead to statistically significant conclusions is the formula used to calculate the test statistic t: test statistic t = [ (x̄1 – x̄2) – d ] / √(s1²/n1 + s2²/n2) Notice that when n1 and n2 are large, the entire denominator of the test statistic t is small. And when we divide by a small number, we end up with a large number. This means the test statistic t will be large and the corresponding p-value will be small, thus leading to statistically significant results.
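To tie the two ideas together, here is a minimal sketch (assuming SciPy is available; the group statistics are the ones from the example above) showing that the effect size stays trivial while the p-value crosses the 0.05 threshold as the sample size grows:

```python
# Sketch: effect size vs. p-value for the two-group test-score example.
from math import sqrt
from scipy import stats

m1, s1, m2, s2 = 90.65, 2.77, 90.75, 2.78

# Cohen's d with a pooled standard deviation (equal group sizes)
s_pooled = sqrt((s1**2 + s2**2) / 2)
d = (m1 - m2) / s_pooled
print(f"Cohen's d = {d:.3f}")          # about -0.036: a trivial effect

# The same trivial effect flips from "not significant" to "significant"
# as the per-group sample size n grows:
for n in (20, 6000):
    t, p = stats.ttest_ind_from_stats(m1, s1, n, m2, s2, n)
    print(f"n = {n}: t = {t:.3f}, p = {p:.3f}")
```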
The Reconstruction era was a period in American history following the American Civil War (1861–1865); it lasted from 1865 to 1877 and marked a significant chapter in the history of civil rights in the United States. Reconstruction, as directed by Congress, abolished slavery and ended the remnants of Confederate secession in the Southern states. It proclaimed the newly freed slaves (freedmen; black people) citizens with (ostensibly) the same civil rights as those of whites; these rights were nominally guaranteed by three new constitutional amendments: the 13th, 14th, and 15th, collectively known as the Reconstruction Amendments. Reconstruction also refers to the general attempt by Congress to transform the 11 former Confederate states, and refers to the role of the Union states in that transformation. Following the assassination of President Abraham Lincoln—who led the Republican Party in opposing slavery and fighting the war—Vice President Andrew Johnson assumed the presidency. He had been a prominent Unionist in the South but soon favored the ex-Confederates and became the leading opponent of freedmen and their Radical Republican allies. His intention was to give the returning Southern states relatively free rein in deciding the rights (and fates) of former slaves. While Lincoln's last speeches showed a grand vision for Reconstruction—including full suffrage for freedmen—Johnson and the Democrats adamantly opposed any such goals. Johnson's Reconstruction policies generally prevailed until the Congressional elections of 1866, following a year of violent attacks against blacks in the South. These included the Memphis riots in May and the New Orleans massacre in July. The 1866 elections gave Republicans a majority in Congress, power they used to press forward and adopt the 14th Amendment. Congress federalized the protection of equal rights and dissolved the legislatures of rebel states, requiring new state constitutions to be adopted throughout the South which guaranteed the civil rights of freedmen. Radical Republicans in the House of Representatives, frustrated by Johnson's opposition to Congressional Reconstruction, filed impeachment charges; the action failed by just one vote in the Senate. The new national Reconstruction laws incensed many whites in the South, giving rise to the Ku Klux Klan. The Klan intimidated, terrorized, and murdered Republicans and outspoken freedmen throughout the former Confederacy, including Arkansas Congressman James M. Hinds. In nearly all ex-Confederate states, Republican coalitions came to power and directly set out to transform Southern society. The Freedmen's Bureau and the U.S. Army both aimed to implement a free-labor economy to replace the slave-labor economy that had existed until the end of the Civil War. The Bureau protected the legal rights of freedmen, negotiated labor contracts, and helped establish networks of schools and churches.
Thousands of Northerners came to the South as missionaries and teachers as well as businessmen and politicians to serve in the social and economic programs of Reconstruction. The pejorative term "carpetbagger" arose out of this period, referring to the use of cheap carpet bags by the accused. The term was a derision of perceived northern opportunism, brought on by 'tyrannical' federal occupation of the South. Elected in 1868, Republican President Ulysses S. Grant supported congressional Reconstruction and enforced the protection of African Americans in the South via the Enforcement Acts recently passed by Congress. Grant used the Acts to combat the Ku Klux Klan, the first iteration of which was essentially wiped out by 1872. Grant's policies and appointments were designed to promote federal integration, equal rights, black immigration, and the Civil Rights Act of 1875. Nevertheless, Grant failed to resolve the escalating tensions inside the Republican Party between Northern and Southern Republicans (the latter group would be labeled "scalawags" by those opposing Reconstruction). Meanwhile, white "Redeemers", self-styled conservatives in close cooperation with a faction of the Democratic Party, strongly opposed Reconstruction. Eventually, support for continuing Reconstruction policies declined in the North. A new Republican faction emerged that wanted Reconstruction ended and the Army withdrawn—the Liberal Republicans. After a major economic recession in 1873, the Democrats rebounded and regained control of the House of Representatives in 1874. They called for an immediate end to the occupation. In 1877, as part of a congressional bargain to elect a Republican as president following the disputed 1876 presidential election, federal troops were withdrawn from the three states (South Carolina, Louisiana, and Florida) where they remained. This marked the end of Reconstruction. Reconstruction has been noted by historians for many "shortcomings and failures" including failure to protect many freed blacks from Ku Klux Klan violence prior to 1871, starvation, disease and death, and brutal treatment of former slaves by Union soldiers, while offering reparations to former slaveowners but denying them to former slaves. However, Reconstruction had four primary successes including the restoration of the Federal Union, limited reprisals against the South directly after the war, property ownership for black people, and the establishment of national citizenship and a framework for eventual legal equality. Dating the Reconstruction era In different states, Reconstruction began and ended at different times; though federal Reconstruction ended with the Compromise of 1877. Some historians follow Eric Foner in dating the Reconstruction of the South as starting in 1863, with the Emancipation Proclamation and the Port Royal Experiment, rather than 1865. The usual ending for Reconstruction has always been 1877. Reconstruction policies were debated in the North when the war began, and commenced in earnest after Lincoln's Emancipation Proclamation, issued on January 1, 1863. Textbooks covering the entire range of American history North, South, and West typically use 1865–1877 for their chapter on the Reconstruction era. Foner, for example, does this in his general history of the United States, Give Me Liberty! (2005). However, in his 1988 monograph specializing on the situation in the South, titled Reconstruction: America's Unfinished Revolution, 1863–1877, he begins in 1863. 
As Confederate states came back under control of the U.S. Army, President Abraham Lincoln set up reconstructed governments in Tennessee, Arkansas, and Louisiana during the war. A restored government of Virginia operated since 1861 in parts of Virginia, and also acted to create the new state of West Virginia. Lincoln experimented by giving land to black people in South Carolina. By fall 1865, the new President Andrew Johnson declared the war goals of national unity and the ending of slavery achieved and Reconstruction completed. Republicans in Congress, refusing to accept Johnson's lenient terms, rejected and refused to seat new members of Congress, some of whom had been high-ranking Confederate officials a few months before. Johnson broke with the Republicans after vetoing two key bills that supported the Freedmen's Bureau and provided federal civil rights to the freedmen. The 1866 Congressional elections turned on the issue of Reconstruction, producing a sweeping Republican victory in the North, and providing the Radical Republicans with sufficient control of Congress to override Johnson's vetoes and commence their own "Radical Reconstruction" in 1867. That same year, Congress removed civilian governments in the South, and placed the former Confederacy under the rule of the U.S. Army (except in Tennessee, where anti-Johnson Republicans were already in control). The Army conducted new elections in which the freed slaves could vote, while Whites who had held leading positions under the Confederacy were temporarily denied the vote and were not permitted to run for office. In 10 states, coalitions of freedmen, recent Black and White arrivals from the North ("carpetbaggers"), and White Southerners who supported Reconstruction ("scalawags") cooperated to form Republican biracial state governments. They introduced various Reconstruction programs including funding public schools, establishing charitable institutions, raising taxes, and funding public improvements such as improved railroad transportation and shipping. In the 1860s and 1870s, the terms "Radical" and "conservative" had distinct meanings. "Conservative" was the name of a faction, often led by the planter class. Conservative opponents called the Republican regimes corrupt and instigated violence toward freedmen and Whites who supported Reconstruction. Most of the violence was carried out by members of the Ku Klux Klan (KKK), a secretive terrorist organization closely allied with the Southern Democratic Party. Klan members attacked and intimidated black people seeking to exercise their new civil rights, as well as Republican politicians in the South favoring those civil rights. One such politician murdered by the Klan on the eve of the 1868 presidential election was Republican Congressman James M. Hinds of Arkansas. Widespread violence in the South led to federal intervention by President Ulysses S. Grant in 1871, which suppressed the Klan. Nevertheless, White Democrats, calling themselves "Redeemers", regained control of the South state by state, sometimes using fraud and violence to control state elections. A deep national economic depression following the Panic of 1873 led to major Democratic gains in the North, the collapse of many railroad schemes in the South, and a growing sense of frustration in the North. The end of Reconstruction was a staggered process, and the period of Republican control ended at different times in different states. 
With the Compromise of 1877, military intervention in Southern politics ceased and Republican control collapsed in the last three state governments in the South. There followed a period that White Southerners labeled "Redemption", during which White-dominated state legislatures enacted Jim Crow laws and, beginning in 1890, disenfranchised most Black people and many poor Whites through a combination of constitutional amendments and election laws. The White Southern Democrats' memory of Reconstruction played a major role in imposing the system of White supremacy and second-class citizenship for Black people under the Jim Crow laws.

Three visions of Civil War memory appeared during Reconstruction:
- The reconciliationist vision was rooted in coping with the death and devastation the war had brought.
- The white supremacist vision demanded strict segregation of the races and the preservation of political and cultural domination of Blacks by Whites; any right to vote by Blacks was not to be countenanced, and intimidation and violence were acceptable means to enforce the vision.
- The emancipationist vision sought full freedom, citizenship, male suffrage, and constitutional equality for African Americans.

Reconstruction addressed how the 11 seceding rebel states in the South would regain what the Constitution calls a "republican form of government" and be re-seated in Congress, the civil status of the former leaders of the Confederacy, and the constitutional and legal status of freedmen, especially their civil rights and whether they should be given the right to vote. Intense controversy erupted throughout the South over these issues.[i]

Passage of the 13th, 14th, and 15th Amendments is the constitutional legacy of Reconstruction. These Reconstruction Amendments established the rights that led to Supreme Court rulings in the mid-20th century that struck down school segregation. A "Second Reconstruction", sparked by the civil rights movement, led to civil rights laws in 1964 and 1965 that ended legal segregation and re-opened the polls to Blacks.

The laws and constitutional amendments that laid the foundation for the most radical phase of Reconstruction were adopted from 1866 to 1871. By the 1870s, Reconstruction had officially provided freedmen with equal rights under the Constitution, and Blacks were voting and taking political office. Republican legislatures, coalitions of Whites and Blacks, established the first public school systems and numerous charitable institutions in the South. White paramilitary organizations, especially the Ku Klux Klan (KKK), as well as the White League and the Red Shirts, formed with the political aim of driving out the Republicans. They also disrupted political organizing and terrorized Blacks to bar them from the polls. President Grant used federal power to effectively shut down the KKK in the early 1870s, though the other, smaller groups continued to operate. From 1873 to 1877, conservative Whites (calling themselves "Redeemers") regained power in the Southern states. They constituted the Bourbon wing of the national Democratic Party.

In the 1860s and 1870s, leaders who had been Whigs were committed to economic modernization, built around railroads, factories, banks, and cities. Most of the "Radical" Republicans in the North were men who believed in integrating African Americans by providing them civil rights as citizens, along with free enterprise; most were also modernizers and former Whigs.
The "Liberal Republicans" of 1872 shared the same outlook except that they were especially opposed to the corruption they saw around President Grant, and believed that the goals of the Civil War had been achieved, and that the federal military intervention could now end. Material devastation of the South in 1865 Reconstruction played out against an economy in ruins. The Confederacy in 1861 had 297 towns and cities, with a total population of 835,000 people; of these, 162, with 681,000 people, were at some point occupied by Union forces. 11 were destroyed or severely damaged by war action, including Atlanta (with an 1860 population of 9,600), Charleston, Columbia, and Richmond (with prewar populations of 40,500, 8,100, and 37,900, respectively); the 11 contained 115,900 people according to the 1860 Census, or 14% of the urban South. The number of people who lived in the destroyed towns represented just over 1% of the Confederacy's combined urban and rural populations. The rate of damage in smaller towns was much lower—only 45 courthouses were burned out of a total of 830. Farms were in disrepair, and the prewar stock of horses, mules, and cattle was much depleted; 40% of the South's livestock had been killed. The South's farms were not highly mechanized, but the value of farm implements and machinery according to the 1860 Census was $81 million and was reduced by 40% by 1870. The transportation infrastructure lay in ruins, with little railroad or riverboat service available to move crops and animals to market. Railroad mileage was located mostly in rural areas; over two-thirds of the South's rails, bridges, rail yards, repair shops, and rolling stock were in areas reached by Union armies, which systematically destroyed what they could. Even in untouched areas, the lack of maintenance and repair, the absence of new equipment, the heavy over-use, and the deliberate relocation of equipment by the Confederates from remote areas to the war zone ensured the system would be ruined at war's end. Restoring the infrastructure—especially the railroad system—became a high priority for Reconstruction state governments. The enormous cost of the Confederate war effort took a high toll on the South's economic infrastructure. The direct costs to the Confederacy in human capital, government expenditures, and physical destruction from the war totaled $3.3 billion. By early 1865, high inflation made the Confederate dollar worth little. When the war ended, Confederate currency and bank deposits were worth zero, making the banking system a near-total loss. People had to resort to bartering services for goods, or else try to obtain scarce Union dollars. With the emancipation of the Southern slaves, the entire economy of the South had to be rebuilt. Having lost their enormous investment in slaves, White plantation owners had minimal capital to pay freedmen workers to bring in crops. As a result, a system of sharecropping was developed, in which landowners broke up large plantations and rented small lots to the freedmen and their families. The main feature of the Southern economy changed from an elite minority of landed gentry slaveholders into a tenant farming agriculture system. The end of the Civil War was accompanied by a large migration of new freed people to the cities. In the cities, Black people were relegated to the lowest paying jobs such as unskilled and service labor. Men worked as rail workers, rolling and lumber mills workers, and hotel workers. 
The large population of slave artisans during the antebellum period did not translate into a large number of freedmen artisans during Reconstruction. Black women were largely confined to domestic work, employed as cooks, maids, and child nurses; others worked in hotels, and a large number became laundresses. The dislocations had a severe negative impact on the Black population, with a large amount of sickness and death.

Over a quarter of Southern White men of military age, the backbone of the South's White workforce, died during the war, leaving countless families destitute. Per capita income for White Southerners declined from $125 in 1857 to a low of $80 in 1879. By the end of the 19th century and well into the 20th century, the South was locked into a system of poverty. How much of this failure was caused by the war, and how much by the region's previous reliance on agriculture, remains the subject of debate among economists and historians.

Restoring the South to the Union

During the Civil War, the Radical Republican leaders argued that slavery and the Slave Power had to be permanently destroyed. Moderates said this could be easily accomplished as soon as the Confederate States Army surrendered and the Southern states repealed secession and accepted the Thirteenth Amendment, most of which happened by December 1865. President Lincoln was the leader of the moderate Republicans and wanted to speed up Reconstruction and reunite the nation painlessly and quickly. Lincoln formally began Reconstruction on December 8, 1863, with his ten percent plan, which went into operation in several states but which Radical Republicans opposed.

1864: Wade–Davis Bill

Lincoln broke with the Radicals in 1864. The Wade–Davis Bill of 1864, passed in Congress by the Radicals, was designed to permanently disfranchise the Confederate element in the South. The bill required voters to take the "ironclad oath", swearing that they had never supported the Confederacy or been one of its soldiers. Pursuing a policy of "malice toward none", announced in his second inaugural address, Lincoln asked voters only to support the Union in the future, regardless of the past, and he pocket-vetoed the Wade–Davis Bill, which was much stricter than the ten percent plan. Following the veto, the Radicals lost support but regained strength after Lincoln's assassination in April 1865.

Upon Lincoln's assassination, Vice President Andrew Johnson became president. Radicals had considered Johnson an ally, but upon becoming president he rejected the Radical program of Reconstruction. He was on good terms with ex-Confederates in the South and ex-Copperheads in the North. He appointed his own governors and tried to close the Reconstruction process by the end of 1865. Thaddeus Stevens vehemently opposed Johnson's plans for an abrupt end to Reconstruction, insisting that Reconstruction must "revolutionize Southern institutions, habits, and manners .... The foundations of their institutions ... must be broken up and relaid, or all our blood and treasure have been spent in vain."

Johnson broke decisively with the Republicans in Congress when he vetoed the Civil Rights Act in early 1866. While Democrats celebrated, the Republicans rallied, passed the bill again, and overrode Johnson's repeat veto. Full-scale political warfare now existed between Johnson (now allied with the Democrats) and the Radical Republicans. Since the war had ended, Congress rejected Johnson's argument that he had the war power to decide what to do.
Congress decided that it had the primary authority to decide how Reconstruction should proceed, because the Constitution stated that the United States had to guarantee each state a republican form of government. The Radicals insisted that this meant Congress decided how Reconstruction should be achieved. The issues were multiple: Who should decide, Congress or the president? How should republicanism operate in the South? What was the status of the former Confederate states? What was the citizenship status of the leaders of the Confederacy? What was the citizenship and suffrage status of freedmen?

By 1866, the faction of Radical Republicans led by Congressman Thaddeus Stevens and Senator Charles Sumner was convinced that Johnson's Southern appointees were disloyal to the Union, hostile to loyal Unionists, and enemies of the freedmen. As evidence, the Radicals pointed to outbreaks of mob violence against Black people, such as the Memphis riots of 1866 and the New Orleans massacre of 1866. Radical Republicans demanded a prompt and strong federal response to protect freedmen and curb Southern racism. Stevens and his followers viewed secession as having left the states in a status like that of new territories. Sumner argued that secession had destroyed statehood, but the Constitution still extended its authority and its protection over individuals, as in existing U.S. territories. The Republicans sought to prevent Johnson's Southern politicians from "restoring the historical subordination of Negroes". Since slavery was abolished, the Three-fifths Compromise no longer applied to counting the Black population; after the 1870 Census, the South would gain numerous additional representatives in Congress, based on the full population of freedmen.[ii] One Illinois Republican expressed a common fear that if the South were allowed simply to restore its previously established powers, the "reward of treason will be an increased representation".

The election of 1866 decisively changed the balance of power, giving the Republicans two-thirds majorities in both houses of Congress, and enough votes to overcome Johnson's vetoes. They moved to impeach Johnson, using the Tenure of Office Act, because of his constant attempts to thwart Radical Reconstruction measures. Johnson was acquitted by one vote, but he lost the influence to shape Reconstruction policy.

The Republican Congress established military districts in the South and used Army personnel to administer the region until new governments loyal to the Union, governments that accepted the Fourteenth Amendment and the right of freedmen to vote, could be established. Congress temporarily suspended the voting rights of approximately 10,000 to 15,000 former Confederate officials and senior officers, while constitutional amendments gave full citizenship to all African Americans and suffrage to adult Black men.

With the power to vote, freedmen began participating in politics. While many freedmen were illiterate, educated Blacks (including fugitive slaves) moved down from the North to aid them, and natural leaders also stepped forward. They elected White and Black men to represent them in constitutional conventions. A Republican coalition of freedmen, Southerners supportive of the Union (derisively called "scalawags" by White Democrats), and Northerners who had migrated to the South (derisively called "carpetbaggers"), some of whom were returning natives but most of whom were Union veterans, organized to create constitutional conventions.
They created new state constitutions to set new directions for Southern states.

Congress had to consider how to restore to full status and representation within the Union those Southern states that had declared their independence from the United States and had withdrawn their representation. Suffrage for former Confederates was one of two main concerns: a decision needed to be made whether to allow just some, or all, former Confederates to vote (and to hold office). The moderates in Congress wanted virtually all of them to vote, but the Radicals resisted. They repeatedly imposed the ironclad oath, which would effectively have allowed no former Confederates to vote. Historian Harold Hyman says that in 1866 congressmen "described the oath as the last bulwark against the return of ex-rebels to power, the barrier behind which Southern Unionists and Negroes protected themselves". Radical Republican leader Thaddeus Stevens proposed, unsuccessfully, that all former Confederates lose the right to vote for five years. The compromise that was reached disenfranchised many Confederate civil and military leaders. No one knows how many temporarily lost the vote, but one estimate placed the number as high as 10,000 to 15,000. Radical politicians, however, took up the task at the state level; in Tennessee alone, over 80,000 former Confederates were disenfranchised.

Second, and closely related, was the issue of whether the 4 million freedmen were to be received as citizens: Would they be able to vote? If they were to be fully counted as citizens, some sort of representation for apportionment of seats in Congress had to be determined. Before the war, the population of slaves had been counted as three-fifths of a corresponding number of free Whites. By having 4 million freedmen counted as full citizens, the South would gain additional seats in Congress. If Blacks were denied the vote and the right to hold office, then only Whites would represent them. Many conservatives, including most White Southerners, Northern Democrats, and some Northern Republicans, opposed Black voting. Some Northern states that held referendums on the subject limited the ability of their own small populations of Blacks to vote.

Lincoln had supported a middle position: to allow some Black men to vote, especially U.S. Army veterans. Johnson also believed that such service should be rewarded with citizenship. Lincoln proposed giving the vote to "the very intelligent, and especially those who have fought gallantly in our ranks". In 1864, Governor Johnson said: "The better class of them will go to work and sustain themselves, and that class ought to be allowed to vote, on the ground that a loyal Negro is more worthy than a disloyal white man." As president in 1865, Johnson wrote to the man he appointed as governor of Mississippi, recommending: "If you could extend the elective franchise to all persons of color who can read the Constitution in English and write their names, and to all persons of color who own real estate valued at least two hundred and fifty dollars, and pay taxes thereon, you would completely disarm the adversary [Radicals in Congress], and set an example the other states will follow."

Charles Sumner and Thaddeus Stevens, leaders of the Radical Republicans, were initially hesitant to enfranchise the largely illiterate freedmen. Sumner at first preferred impartial requirements that would have imposed literacy restrictions on Blacks and Whites alike.
He believed that he would not succeed in passing legislation to disenfranchise illiterate Whites who already had the vote. In the South, many poor Whites were illiterate, as there was almost no public education before the war. In 1880, for example, the White illiteracy rate was about 25% in Tennessee, Kentucky, Alabama, South Carolina, and Georgia, and as high as 33% in North Carolina. This compares with the 9% national rate and a Black illiteracy rate of over 70% in the South. By 1900, however, with the Black community's emphasis on education, the majority of Blacks had achieved literacy.

Sumner soon concluded that "there was no substantial protection for the freedman except in the franchise". This was necessary, he stated, "(1) For his own protection; (2) For the protection of the white Unionist; and (3) For the peace of the country. We put the musket in his hands because it was necessary; for the same reason we must give him the franchise." The support for voting rights was a compromise between moderate and Radical Republicans. The Republicans believed that the best way for men to get political experience was to be able to vote and to participate in the political system. They passed laws allowing all male freedmen to vote. In 1867, Black men voted for the first time. Over the course of Reconstruction, more than 1,500 African Americans held public office in the South; some of them were men who had escaped to the North, gained educations, and returned to the South. They did not hold office in numbers representative of their proportion in the population, but they often elected Whites to represent them. The question of women's suffrage was also debated but was rejected; women eventually gained the right to vote with the Nineteenth Amendment to the United States Constitution in 1920.

From 1890 to 1908, Southern states passed new state constitutions and laws that disenfranchised most Blacks and tens of thousands of poor Whites with new voter registration and electoral rules. When establishing new requirements such as subjectively administered literacy tests, some states used "grandfather clauses" to enable illiterate Whites to vote.

Southern Treaty Commission

The Five Civilized Tribes that had been relocated to Indian Territory (now part of Oklahoma) held Black slaves and signed treaties supporting the Confederacy. During the war, a war between pro-Union and anti-Union Native Americans had raged. Congress passed a statute that gave the president the authority to suspend the appropriations of any tribe if the tribe was "in a state of actual hostility to the government of the United States ... and, by proclamation, to declare all treaties with such tribe to be abrogated by such tribe".

As a component of Reconstruction, the Interior Department ordered a meeting of representatives from all Indian tribes that had affiliated with the Confederacy. The council, the Southern Treaty Commission, first met in Fort Smith, Arkansas, in September 1865 and was attended by hundreds of Native Americans representing dozens of tribes. Over the next several years, the commission negotiated treaties with tribes that resulted in additional relocations to Indian Territory and the de facto creation (initially by treaty) of an unorganized Oklahoma Territory.
Lincoln's presidential Reconstruction

President Lincoln signed two Confiscation Acts into law, the first on August 6, 1861, and the second on July 17, 1862, safeguarding fugitive slaves who crossed Union lines from the Confederacy and granting them indirect emancipation if their masters continued the insurrection against the United States. The laws allowed the confiscation of lands for colonization from those who aided and supported the rebellion. However, these laws had limited effect, as they were poorly funded by Congress and poorly enforced by Attorney General Edward Bates.

In August 1861, Maj. Gen. John C. Frémont, Union commander of the Western Department, declared martial law in Missouri, confiscated the property of Confederates, and emancipated their slaves. President Lincoln immediately ordered Frémont to rescind his emancipation declaration, stating: "I think there is great danger that ... the liberating slaves of traitorous owners, will alarm our Southern Union friends, and turn them against us—perhaps ruin our fair prospect for Kentucky." After Frémont refused to rescind the emancipation order, Lincoln removed him from his command on November 2, 1861. Lincoln was concerned that the border states would secede from the Union if slaves were given their freedom. On May 26, 1862, Union Maj. Gen. David Hunter emancipated slaves in South Carolina, Georgia, and Florida, declaring all "persons ... heretofore held as slaves ... forever free". Lincoln, embarrassed by the order, rescinded Hunter's declaration and canceled the emancipation.

On April 16, 1862, Lincoln signed a bill into law outlawing slavery in Washington, D.C., and freeing the estimated 3,500 slaves in the city. On June 19, 1862, he signed legislation outlawing slavery in all U.S. territories. On July 17, 1862, under the authority of the Confiscation Acts and an amended Force Bill of 1795, he authorized the recruitment of freed slaves into the U.S. Army and the seizure of any Confederate property for military purposes.

Gradual emancipation and compensation

In an effort to keep the border states in the Union, President Lincoln, as early as 1861, designed gradual compensated emancipation programs paid for by government bonds. Lincoln wanted Delaware, Maryland, Kentucky, and Missouri to "adopt a system of gradual emancipation which should work the extinction of slavery in twenty years". On March 26, 1862, Lincoln met with Senator Charles Sumner and recommended that a special joint session of Congress be convened to discuss giving financial aid to any border states that initiated a gradual emancipation plan. In April 1862, the joint session of Congress met; however, the border states were not interested and made no response to Lincoln or to any congressional emancipation proposal. Lincoln again advocated compensated emancipation during the River Queen steamer conference of 1865.

In August 1862, President Lincoln met with African-American leaders and urged them to colonize some place in Central America. Lincoln planned to free the Southern slaves in the Emancipation Proclamation, and he was concerned that freedmen would not be well treated in the United States by Whites in both the North and the South. Although Lincoln gave assurances that the United States government would support and protect any colonies established for former slaves, the leaders declined the offer of colonization. Many free Blacks had opposed colonization plans in the past because they wanted to remain in the United States.
President Lincoln persisted in his colonization plan, believing that emancipation and colonization were parts of the same program. By April 1863, Lincoln had succeeded in sending Black colonists to Haiti, as well as 453 to Chiriquí in Central America; however, none of the colonies were able to remain self-sufficient. Frederick Douglass, a prominent 19th-century American civil rights activist, criticized Lincoln by stating that he was "showing all his inconsistencies, his pride of race and blood, his contempt for Negroes and his canting hypocrisy". African Americans, according to Douglass, wanted citizenship and civil rights rather than colonies. Historians are unsure whether Lincoln gave up on the idea of African-American colonization at the end of 1863 or planned to continue the policy up until 1865.

Installation of military governors

Starting in March 1862, in an effort to forestall Reconstruction by the Radicals in Congress, President Lincoln installed military governors in certain rebellious states under Union military control. Although these states would not be recognized by the Radicals until an undetermined time, the installation of military governors kept the administration of Reconstruction under presidential control, rather than under that of the increasingly unsympathetic Radical Congress. On March 3, 1862, Lincoln installed a loyalist Democrat, Senator Andrew Johnson, as military governor with the rank of brigadier general in his home state of Tennessee. In May 1862, Lincoln appointed Edward Stanly military governor of the coastal region of North Carolina with the rank of brigadier general; Stanly resigned almost a year later, after he angered Lincoln by closing two schools for Black children in New Bern. After Lincoln installed Brigadier General George Foster Shepley as military governor of Louisiana in May 1862, Shepley sent two anti-slavery representatives, Benjamin Flanders and Michael Hahn, elected in December 1862, to the House, which capitulated and voted to seat them. In July 1862, Lincoln installed Colonel John S. Phelps as military governor of Arkansas, though Phelps resigned soon after due to poor health.

In July 1862, President Lincoln became convinced that "a military necessity" was needed to strike at slavery in order to win the Civil War for the Union; the Confiscation Acts were having only a minimal effect in ending slavery. On July 22, he wrote a first draft of the Emancipation Proclamation, freeing the slaves in states in rebellion. After he showed his Cabinet the document, slight alterations were made in the wording. Lincoln decided that the defeat of the Confederate invasion of the North at Sharpsburg was enough of a battlefield victory to enable him to release the preliminary Emancipation Proclamation, which gave the rebels 100 days to return to the Union before the actual proclamation would be issued.

On January 1, 1863, the actual Emancipation Proclamation was issued, specifically naming 10 states in which slaves would be "forever free". The proclamation did not name the states of Tennessee, Kentucky, Missouri, Maryland, and Delaware, and it specifically excluded numerous counties in some other states. Eventually, as the U.S. Army advanced into the Confederacy, millions of slaves were set free. Many of these freedmen joined the U.S. Army and fought in battles against the Confederate forces. Yet hundreds of thousands of freed slaves died during emancipation from illnesses that also devastated army regiments.
Freed slaves suffered from smallpox, yellow fever, and malnutrition.

Louisiana 10% electorate plan

President Abraham Lincoln was eager to effect a speedy restoration of the Confederate states to the Union after the Civil War. In 1863, he proposed a moderate plan for the Reconstruction of the captured Confederate state of Louisiana. The plan granted amnesty to rebels who took an oath of loyalty to the Union. Black freedmen workers were tied to labor on plantations for one year at a pay rate of $10 a month. Only 10% of the state's electorate had to take the loyalty oath in order for the state to be readmitted into the U.S. Congress. The state was also required to abolish slavery in its new state constitution. Identical Reconstruction plans would be adopted in Arkansas and Tennessee. By December 1864, the Lincoln plan of Reconstruction had been enacted in Louisiana, and the legislature sent two senators and five representatives to take their seats in Washington. However, Congress refused to count any of the votes from Louisiana, Arkansas, and Tennessee, in essence rejecting Lincoln's moderate Reconstruction plan. Congress, at this time controlled by the Radicals, proposed instead the Wade–Davis Bill, which required a majority of a state's electorate to take the oath of loyalty before the state could be admitted to Congress. Lincoln pocket-vetoed the bill, and the rift widened between the moderates, who wanted to save the Union and win the war, and the Radicals, who wanted to effect a more complete change within Southern society. Frederick Douglass denounced Lincoln's 10% electorate plan as undemocratic, since state admission and loyalty depended on only a minority vote.

Legalization of slave marriages

Before 1864, slave marriages had not been recognized legally, and emancipation did not automatically change this; when freed, many former slaves made their marriages official. Before emancipation, slaves could not enter into contracts, including the marriage contract. Not all free people formalized their unions; some continued to have common-law marriages or community-recognized relationships. The acknowledgement of marriage by the state increased the state's recognition of freed people as legal actors and eventually helped make the case for parental rights for freed people against the practice of apprenticeship of Black children. These children were legally taken away from their families under the guise of "providing them with guardianship and 'good' homes until they reached the age of consent at twenty-one" under acts such as the Georgia 1866 Apprentice Act. Such children were generally used as sources of unpaid labor.

Freedmen's Bureau

On March 3, 1865, the Freedmen's Bureau Bill became law, sponsored by the Republicans to aid freedmen and White refugees. A federal bureau was created to provide food, clothing, fuel, and advice on negotiating labor contracts. It attempted to oversee new relations between freedmen and their former masters in a free labor market. The act, without deference to a person's color, authorized the bureau to lease confiscated land for a period of three years and to sell it in portions of up to 40 acres (16 ha) per buyer. The bureau was to expire one year after the termination of the war. Lincoln was assassinated before he could appoint a commissioner of the bureau. A popular myth was that the act offered 40 acres and a mule, or that slaves had been promised this. With the help of the bureau, the recently freed slaves began voting, forming political parties, and assuming control of labor in many areas.
The bureau helped to start a change of power in the South that drew national attention, from the Republicans in the North to the conservative Democrats in the South. This was especially evident in the election between Grant and Seymour (Johnson did not get the Democratic nomination), in which almost 700,000 Black voters cast ballots and swayed the election 300,000 votes in Grant's favor.

Even with the benefits that it gave to the freedmen, the Freedmen's Bureau was unable to operate effectively in certain areas. The Ku Klux Klan, which terrorized freedmen for trying to vote, hold political office, or own land, was the nemesis of the Freedmen's Bureau.

Bans on color discrimination

Other legislation was signed that broadened equality and rights for African Americans. Lincoln outlawed discrimination on account of color in carrying U.S. mail, in riding on public street cars in Washington, D.C., and in pay for soldiers.

February 1865 peace conference

Lincoln and Secretary of State William H. Seward met with three Southern representatives to discuss the peaceful Reconstruction of the Union and the Confederacy on February 3, 1865, in Hampton Roads, Virginia. The Southern delegation included Confederate Vice President Alexander H. Stephens, John Archibald Campbell, and Robert M. T. Hunter. The Southerners proposed Union recognition of the Confederacy, a joint Union–Confederate attack on Mexico to oust Emperor Maximilian I, and an alternative subordinate status of servitude for Blacks rather than slavery. Lincoln flatly rejected recognition of the Confederacy and said that the slaves covered by his Emancipation Proclamation would not be re-enslaved. He said that the Union states were about to pass the Thirteenth Amendment, outlawing slavery. Lincoln urged the governor of Georgia to remove Confederate troops and "ratify this constitutional amendment prospectively, so as to take effect—say in five years.... Slavery is doomed." Lincoln also urged compensated emancipation for the slaves, as he thought the North should be willing to share the costs of freedom. Although the meeting was cordial, the parties did not reach any agreement.

Historical legacy debated

Lincoln continued to advocate his Louisiana Plan as a model for all states up until his assassination on April 15, 1865. The plan successfully started the process of ratifying the Thirteenth Amendment in all states. Lincoln is typically portrayed as taking the moderate position and fighting the Radical positions. There is considerable debate on how well Lincoln, had he lived, would have handled Congress during the Reconstruction process that took place after the Civil War ended. One historical camp argues that Lincoln's flexibility, pragmatism, and superior political skills with Congress would have solved Reconstruction with far less difficulty. The other camp believes that the Radicals would have attempted to impeach Lincoln, just as they did his successor, Andrew Johnson, in 1868.

Johnson's presidential Reconstruction

Northern anger over the assassination of Lincoln and the immense human cost of the war led to demands for punitive policies. Vice President Andrew Johnson had taken a hard line and spoken of hanging Confederates, but when he succeeded Lincoln as president, he took a much softer position, pardoning many Confederate leaders and other former Confederates. Former Confederate President Jefferson Davis was held in prison for two years, but other Confederate leaders were not imprisoned. There were no trials on charges of treason.
Only one person, Captain Henry Wirz, the commandant of the prison camp in Andersonville, Georgia, was executed for war crimes. Andrew Johnson's conservative view of Reconstruction did not include the involvement of Blacks in government, and he refused to heed Northern concerns when Southern state legislatures implemented Black Codes that set the status of the freedmen much lower than that of citizens.

Smith argues that "Johnson attempted to carry forward what he considered to be Lincoln's plans for Reconstruction." McKitrick says that in 1865 Johnson had strong support in the Republican Party: "It was naturally from the great moderate sector of Unionist opinion in the North that Johnson could draw his greatest comfort." Billington says: "One faction, the moderate Republicans under the leadership of Presidents Abraham Lincoln and Andrew Johnson, favored a mild policy toward the South." Lincoln biographers Randall and Current argued that:

It is likely that had he lived, Lincoln would have followed a policy similar to Johnson's, that he would have clashed with congressional Radicals, that he would have produced a better result for the freedmen than occurred, and that his political skills would have helped him avoid Johnson's mistakes.

Historians generally agree that President Johnson was an inept politician who lost all his advantages by unskilled maneuvering. He broke with Congress in early 1866, then became defiant and tried to block enforcement of the Reconstruction laws passed by the U.S. Congress. He was in constant constitutional conflict with the Radicals in Congress over the status of freedmen and Whites in the defeated South. Although resigned to the abolition of slavery, many former Confederates were unwilling to accept either the social changes or political domination by former slaves. In the words of Benjamin Franklin Perry, President Johnson's choice as the provisional governor of South Carolina: "First, the Negro is to be invested with all political power, and then the antagonism of interest between capital and labor is to work out the result."

However, the fears of the mostly conservative planter elite and other leading White citizens were partly assuaged by the actions of President Johnson, who ensured that a wholesale land redistribution from the planters to the freedmen did not occur. President Johnson ordered that confiscated or abandoned lands administered by the Freedmen's Bureau would not be redistributed to the freedmen but would be returned to pardoned owners. Land was returned that would otherwise have been forfeited under the Confiscation Acts passed by Congress in 1861 and 1862.

Freedmen and the enactment of Black Codes

Southern state governments quickly enacted the restrictive "Black Codes". However, they were abolished in 1866 and seldom had effect, because the Freedmen's Bureau (not the local courts) handled the legal affairs of freedmen. The Black Codes indicated the plans of the Southern Whites for the former slaves. The freedmen would have more rights than did free Blacks before the war, but they would still have only second-class civil rights, no voting rights, and no citizenship. They could not own firearms, serve on a jury in a lawsuit involving Whites, or move about without employment. The Black Codes outraged Northern opinion. They were superseded by the Civil Rights Act of 1866, which gave the freedmen more legal equality (although still without the right to vote).
The freedmen, with the strong backing of the Freedmen's Bureau, rejected the gang-labor work patterns that had been used in slavery. Instead of gang labor, freed people preferred family-based labor groups. They forced planters to bargain for their labor. Such bargaining soon led to the establishment of the system of sharecropping, which gave the freedmen greater economic independence and social autonomy than gang labor did. However, because they lacked capital and the planters continued to own the means of production (tools, draft animals, and land), the freedmen were forced into producing cash crops (mainly cotton) for the landowners and merchants, and they entered into a crop-lien system. Widespread poverty, disruption to an agricultural economy too dependent on cotton, and the falling price of cotton led within decades to the routine indebtedness of the majority of the freedmen and the poverty of many planters.

Northern officials gave varying reports on conditions for the freedmen in the South. One harsh assessment came from Carl Schurz, who reported on the situation in the states along the Gulf Coast. His report documented dozens of extrajudicial killings and claimed that hundreds or thousands more African Americans had been killed:

The number of murders and assaults perpetrated upon Negroes is very great; we can form only an approximative estimate of what is going on in those parts of the South which are not closely garrisoned, and from which no regular reports are received, by what occurs under the very eyes of our military authorities. As to my personal experience, I will only mention that during my two days sojourn at Atlanta, one Negro was stabbed with fatal effect on the street, and three were poisoned, one of whom died. While I was at Montgomery, one Negro was cut across the throat evidently with intent to kill, and another was shot, but both escaped with their lives. Several papers attached to this report give an account of the number of capital cases that occurred at certain places during a certain period of time. It is a sad fact that the perpetration of those acts is not confined to that class of people which might be called the rabble.

The report included sworn testimony from soldiers and officials of the Freedmen's Bureau. In Selma, Alabama, Major J. P. Houston noted that Whites who killed 12 African Americans in his district never came to trial. Many more killings never became official cases. Captain Poillon described white patrols in southwestern Alabama:

who board some of the boats; after the boats leave they hang, shoot, or drown the victims they may find on them, and all those found on the roads or coming down the rivers are almost invariably murdered. The bewildered and terrified freedmen know not what to do—to leave is death; to remain is to suffer the increased burden imposed upon them by the cruel taskmaster, whose only interest is their labor, wrung from them by every device an inhuman ingenuity can devise; hence the lash and murder is resorted to intimidate those whom fear of an awful death alone cause to remain, while patrols, Negro dogs and spies, disguised as Yankees, keep constant guard over these unfortunate people.

Much of the violence perpetrated against African Americans was shaped by gender prejudices regarding African Americans. Black women were in a particularly vulnerable situation; to convict a white man of sexually assaulting a Black woman in this period was exceedingly difficult.
The South's judicial system had been wholly refigured so that one of its primary purposes was the coercion of African Americans into complying with the social customs and labor demands of Whites. Trials were discouraged, and attorneys for Black misdemeanor defendants were difficult to find. The goal of county courts was a fast, uncomplicated trial with a resulting conviction. Most Blacks were unable to pay their fines or bail, and "the most common penalty was nine months to a year in a slave mine or lumber camp". The South's judicial system was rigged to generate fees and claim bounties, not to ensure public protection.

Black women were socially perceived as sexually avaricious, and since they were portrayed as having little virtue, society held that they could not be raped. One report indicates that two freed women, Frances Thompson and Lucy Smith, described their violent sexual assault during the Memphis riots of 1866. Black women were vulnerable even in times of relative normalcy: sexual assaults on African-American women were so pervasive, particularly by their White employers, that Black men sought to reduce the contact between White males and Black females by having the women in their families avoid work that was closely overseen by Whites. Black men, meanwhile, were construed as being extremely sexually aggressive, and their supposed or rumored threats to White women were often used as a pretext for lynching and castrations.

During fall 1865, in response to the Black Codes and worrisome signs of Southern recalcitrance, the Radical Republicans blocked the readmission of the former rebellious states to the Congress. Johnson, however, was content to allow former Confederate states into the Union as long as their state governments adopted the Thirteenth Amendment abolishing slavery. By December 6, 1865, the amendment was ratified, and Johnson considered Reconstruction over. Johnson was following the moderate Lincoln presidential Reconstruction policy of getting the states readmitted as soon as possible.

Congress, however, controlled by the Radicals, had other plans. The Radicals were led by Charles Sumner in the Senate and Thaddeus Stevens in the House of Representatives. On December 4, 1865, Congress rejected Johnson's moderate presidential Reconstruction and organized the Joint Committee on Reconstruction, a 15-member panel to devise Reconstruction requirements for the Southern states to be restored to the Union.

In January 1866, Congress renewed the Freedmen's Bureau; however, Johnson vetoed the Freedmen's Bureau Bill in February 1866. Although Johnson had sympathy for the plight of the freedmen, he was against federal assistance. An attempt to override the veto failed on February 20, 1866. This veto shocked the congressional Radicals. In response, both the Senate and the House passed a joint resolution not to allow any senator or representative to be seated until Congress decided when Reconstruction was finished.

Senator Lyman Trumbull, sponsor of the Civil Rights Bill, argued that under the Black Codes:

laws are to be enacted and enforced depriving persons of African descent of privileges which are essential to freemen.... A law that does not allow a colored person to go from one county to another, and one that does not allow him to hold property, to teach, to preach, are certainly laws in violation of the rights of a freeman... The purpose of this bill is to destroy all these discriminations.

The key to the bill was its opening section: All persons born in the United States ...
are hereby declared to be citizens of the United States; and such citizens of every race and color, without regard to any previous condition of slavery ... shall have the same right in every State ... to make and enforce contracts, to sue, be parties, and give evidence, to inherit, purchase, lease, sell, hold, and convey real and personal property, and to full and equal benefit of all laws and proceedings for the security of person and property, as is enjoyed by white citizens, and shall be subject to like punishment, pains, and penalties and to none other, any law, statute, ordinance, regulation, or custom to the contrary notwithstanding.

The bill did not give freedmen the right to vote. Congress quickly passed the Civil Rights Bill: the Senate voted 33–12 on February 2, and the House voted 111–38 on March 13.

Although strongly urged by moderates in Congress to sign the Civil Rights Bill, Johnson broke decisively with them by vetoing it on March 27, 1866. His veto message objected to the measure because it conferred citizenship on the freedmen at a time when 11 of 36 states were unrepresented, and because it attempted to fix by federal law "a perfect equality of the white and black races in every state of the Union". Johnson said it was an invasion by federal authority of the rights of the states; it had no warrant in the Constitution and was contrary to all precedents. It was a "stride toward centralization and the concentration of all legislative power in the national government". The Democratic Party, proclaiming itself the party of White men, North and South, supported Johnson. However, the Republicans in Congress overrode his veto (the Senate by the close vote of 33–15, and the House by 122–41), and the Civil Rights Bill became law. Congress also passed a watered-down Freedmen's Bureau Bill; Johnson quickly vetoed it as he had the previous bill, but once again Congress had enough support and overrode his veto.

The last moderate proposal was the Fourteenth Amendment, whose principal drafter was Representative John Bingham. It was designed to put the key provisions of the Civil Rights Act into the Constitution, but it went much further. It extended citizenship to everyone born in the United States (except Indians on reservations), penalized states that did not give the vote to freedmen, and, most important, created new federal civil rights that could be protected by federal courts. It guaranteed that the federal war debt would be paid (and promised that the Confederate debt would never be paid). Johnson used his influence to block the amendment in the states, since three-fourths of the states were required for ratification (the amendment was later ratified). The moderate effort to compromise with Johnson had failed, and a political fight broke out between the Republicans (both Radical and moderate) on one side and, on the other, Johnson and his allies in the Democratic Party in the North, along with the conservative groupings (which used different names) in each Southern state.

Concerned that President Johnson viewed Congress as an "illegal body" and wanted to overthrow the government, Republicans in Congress took control of Reconstruction policies after the election of 1866. Johnson ignored the policy mandate and openly encouraged Southern states to deny ratification of the Fourteenth Amendment (except for Tennessee, all former Confederate states did refuse to ratify, as did the border states of Delaware, Maryland, and Kentucky).
Radical Republicans in Congress, led by Stevens and Sumner, opened the way to suffrage for male freedmen. They were generally in control, although they had to compromise with the moderate Republicans (the Democrats in Congress had almost no power). Historians refer to this period as "Radical Reconstruction" or "congressional Reconstruction".

The business spokesmen in the North generally opposed the Radical proposals. An analysis of 34 major business newspapers showed that 12 discussed politics, and only one, Iron Age, supported radicalism. The other 11 opposed a "harsh" Reconstruction policy, favored the speedy return of the Southern states to congressional representation, opposed legislation designed to protect the freedmen, and deplored the impeachment of President Andrew Johnson.

The South's White leaders, who held power in the immediate postbellum era before the vote was granted to the freedmen, renounced secession and slavery, but not White supremacy. People who had previously held power were angered in 1867 when new elections were held. New Republican lawmakers were elected by a coalition of White Unionists, freedmen, and Northerners who had settled in the South. Some leaders in the South tried to accommodate the new conditions.

Three constitutional amendments, known as the Reconstruction Amendments, were adopted. The Thirteenth Amendment, abolishing slavery, was ratified in 1865. The Fourteenth Amendment was proposed in 1866 and ratified in 1868, guaranteeing United States citizenship to all persons born or naturalized in the United States and granting them federal civil rights. The Fifteenth Amendment, proposed in late February 1869 and ratified in early February 1870, decreed that the right to vote could not be denied because of "race, color, or previous condition of servitude". The states, however, would still determine voter registration and electoral laws. The amendments were directed at ending slavery and providing full citizenship to freedmen. Northern congressmen believed that providing Black men with the right to vote would be the most rapid means of political education and training.

Many Blacks took an active part in voting and political life, and rapidly continued to build churches and community organizations. Following Reconstruction, White Democrats and insurgent groups used force to regain power in the state legislatures and to pass laws that effectively disenfranchised most Blacks and many poor Whites in the South. From 1890 to 1910, Southern states passed new state constitutions that completed the disenfranchisement of Blacks. U.S. Supreme Court rulings upheld many of these new Southern state constitutions and laws, and most Blacks were prevented from voting in the South until the 1960s. Full federal enforcement of the Fourteenth and Fifteenth Amendments did not recur until after passage of legislation in the mid-1960s as a result of the civil rights movement. For details, see:
- Redemption (United States history)
- Disenfranchisement after the Reconstruction Era
- Jim Crow laws
- United States v. Cruikshank (1875), related to the Colfax Massacre
- Posse Comitatus Act (1878)
- Civil Rights Cases (1883)
- Civil rights movement (1896–1954)
- Plessy v. Ferguson (1896)
- Williams v. Mississippi (1898)
- Giles v. Harris (1903)

The Reconstruction Acts, as originally passed, were initially called "An act to provide for the more efficient Government of the Rebel States". The legislation was enacted by the 39th Congress on March 2, 1867.
It was vetoed by President Johnson, and the veto was overridden the same day by a two-thirds majority in both the House and the Senate. In 1867, Congress also clarified the scope of the federal writ of habeas corpus to allow federal courts to vacate unlawful state court convictions or sentences. With the Radicals in control, Congress passed a third Reconstruction Act on July 19, 1867.

The first Reconstruction Act, authored by Oregon Sen. George Henry Williams, a Radical Republican, placed 10 of the former Confederate states (all but Tennessee) under military control, grouping them into five military districts:
- First Military District: Virginia, under General John Schofield
- Second Military District: North Carolina and South Carolina, under General Daniel Sickles
- Third Military District: Georgia, Alabama, and Florida, under Generals John Pope and George Meade
- Fourth Military District: Arkansas and Mississippi, under General Edward Ord
- Fifth Military District: Texas and Louisiana, under Generals Philip Sheridan and Winfield Scott Hancock

Some 20,000 U.S. troops were deployed to enforce the act. The five border states that had not joined the Confederacy were not subject to military Reconstruction. West Virginia, which had separated from Virginia in 1863, and Tennessee, which had already been readmitted in 1866, were not included in the military districts. Federal troops, however, were kept in West Virginia through 1868 in order to control civil unrest in several areas throughout the state. Federal troops were removed from Kentucky and Missouri in 1866.

The 10 Southern state governments were re-constituted under the direct control of the United States Army. One major purpose was to recognize and protect the right of African Americans to vote. There was little to no combat; rather, a state of martial law prevailed in which the military closely supervised local government, supervised elections, and tried to protect officeholders and freedmen from violence. Blacks were enrolled as voters; former Confederate leaders were excluded for a limited period. No one state was entirely representative. Randolph Campbell describes what happened in Texas:

The first critical step ... was the registration of voters according to guidelines established by Congress and interpreted by Generals Sheridan and Charles Griffin. The Reconstruction Acts called for registering all adult males, white and black, except those who had ever sworn an oath to uphold the Constitution of the United States and then engaged in rebellion.... Sheridan interpreted these restrictions stringently, barring from registration not only all pre-1861 officials of state and local governments who had supported the Confederacy but also all city officeholders and even minor functionaries such as sextons of cemeteries. In May Griffin ... appointed a three-man board of registrars for each county, making his choices on the advice of known scalawags and local Freedmen's Bureau agents. In every county where practicable a freedman served as one of the three registrars.... Final registration amounted to approximately 59,633 whites and 49,479 blacks. It is impossible to say how many whites were rejected or refused to register (estimates vary from 7,500 to 12,000), but blacks, who constituted only about 30 percent of the state's population, were significantly over-represented at 45 percent of all voters.
State constitutional conventions: 1867–1869

The 11 Southern states held constitutional conventions giving Black men the right to vote, in which the delegates divided into Radical, conservative, and in-between factions. The Radicals were a coalition: 40% were Southern White Republicans ("scalawags"), 25% were White carpetbaggers, and 34% were Black. Scalawags wanted to disenfranchise all of the traditional White leadership class, but moderate Republican leaders in the North warned against that, and Black delegates typically called for universal voting rights. The carpetbaggers inserted provisions designed to promote economic growth, especially financial aid to rebuild the ruined railroad system. The conventions set up systems of free public schools funded by tax dollars, but did not require them to be racially integrated.

Until 1872, most former Confederate or prewar Southern officeholders were disqualified from voting or holding office; all but 500 top Confederate leaders were pardoned by the Amnesty Act of 1872. "Proscription", the policy of disqualifying as many ex-Confederates as possible, appealed to the scalawag element. In 1865, for example, Tennessee had disenfranchised 80,000 ex-Confederates. However, proscription was soundly rejected by the Black element, which insisted on universal suffrage. The issue came up repeatedly in several states, especially in Texas and Virginia. In Virginia, an effort was made to disqualify for public office every man who had served in the Confederate Army, even as a private, and any civilian farmer who had sold food to the Confederate States Army.

Disenfranchising Southern Whites was also opposed by moderate Republicans in the North, who felt that ending proscription would bring the South closer to a republican form of government based on the consent of the governed, as called for by the Constitution and the Declaration of Independence. Strong measures that had been called for to forestall a return to the defunct Confederacy increasingly seemed out of place, and the role of the United States Army in controlling politics in the states grew troublesome. Historian Mark Summers states that increasingly "the disenfranchisers had to fall back on the contention that denial of the vote was meant as punishment, and a lifelong punishment at that ... Month by month, the un-republican character of the regime looked more glaring."

Election of 1868

During the Civil War, many in the North believed that fighting for the Union was a noble cause, serving the preservation of the Union and the end of slavery. After the war ended, with the North victorious, the fear among Radicals was that President Johnson too quickly assumed that slavery and Confederate nationalism were dead and that the Southern states could return. The Radicals sought out a candidate for president who represented their viewpoint. In May 1868, the Republicans unanimously chose Ulysses S. Grant as their presidential candidate and Schuyler Colfax as their vice-presidential candidate. Grant won favor with the Radicals after he allowed Edwin Stanton, a Radical, to be reinstated as secretary of war. As early as 1862, during the Civil War, Grant had appointed the Ohio military chaplain John Eaton to protect and gradually incorporate refugee slaves in west Tennessee and northern Mississippi into the Union war effort and to pay them for their labor. It was the beginning of his vision for the Freedmen's Bureau. Grant opposed President Johnson by supporting the Reconstruction Acts passed by the Radicals.
In Northern cities, Grant contended with a strong immigrant anti-Reconstruction Democratic bloc, particularly the Irish in New York City. Republicans sought to make inroads by campaigning for the Irish taken prisoner in the Fenian raids into Canada and by calling on the Johnson administration to recognize a lawful state of war between Ireland and England. In 1867 Grant personally intervened with David Bell and Michael Scanlon to move their paper, the Irish Republic, articulate in its support for black equality, from Chicago to New York.

The Democrats, having abandoned Johnson, nominated former governor Horatio Seymour of New York for president and Francis P. Blair of Missouri for vice president. The Democrats advocated the immediate restoration of former Confederate states to the Union and amnesty from "all past political offenses". Grant won the popular vote by 300,000 votes out of 5,716,082 votes cast, taking the Electoral College in a landslide, 214 votes to Seymour's 80. Seymour received a majority of white votes, but Grant was aided by 500,000 votes cast by blacks, giving him 52.7 percent of the popular vote. He lost Louisiana and Georgia primarily due to Ku Klux Klan violence against African-American voters. At the age of 46, Grant was the youngest president yet elected, and the first president elected after the nation had outlawed slavery.

Grant's presidential Reconstruction

Effective civil rights executive

President Ulysses S. Grant was considered an effective civil rights executive, concerned about the plight of African Americans. Grant met with prominent black leaders for consultation, and on March 18, 1869, he signed into law a bill that guaranteed blacks and whites equal rights to serve on juries and hold office in Washington, D.C. In 1870 Grant signed into law a Naturalization Act that allowed blacks born abroad to become citizens. Additionally, Grant's Postmaster General, John Creswell, used his patronage powers to integrate the postal system and appointed a record number of African-American men and women as postal workers across the nation, while also expanding many of the mail routes. Grant appointed Republican abolitionist and champion of black education Hugh Lennox Bond as U.S. Circuit Court judge.

Final four Reconstruction states admitted

Immediately upon inauguration in 1869, Grant bolstered Reconstruction by prodding Congress to readmit Virginia, Mississippi, and Texas into the Union, while ensuring their state constitutions protected every citizen's voting rights. Grant advocated the ratification of the Fifteenth Amendment, which said states could not disenfranchise African Americans. Within a year, the three remaining states—Mississippi, Virginia, and Texas—adopted the new amendment and were readmitted to Congress. Grant put military pressure on Georgia to reinstate its black legislators and adopt the new amendment. Georgia complied, and on February 24, 1871, its senators were seated in Congress, with all the former Confederate states represented. Southern Reconstructed states were controlled by Republican carpetbaggers, scalawags, and former slaves; by 1877 the conservative Democrats had full control of the region, and Reconstruction was dead.

Department of Justice created

In 1870, to enforce Reconstruction, Congress and Grant created the Department of Justice, which allowed Attorney General Amos Akerman and the first solicitor general, Benjamin Bristow, to prosecute the Klan.
In Grant's two terms, he strengthened Washington's legal capabilities to intervene directly to protect citizenship rights even when the states ignored the problem.

Enforcement Acts (1870–1871)

Congress and Grant passed a series of three powerful civil rights Enforcement Acts between 1870 and 1871, designed to protect blacks and Reconstruction governments. These were criminal codes that protected the freedmen's rights to vote, to hold office, to serve on juries, and to receive equal protection of the laws. Most important, they authorized the federal government to intervene when states did not act. Urged by Grant and his attorney general, Amos T. Akerman, the strongest of these laws was the Ku Klux Klan Act, passed on April 20, 1871, which authorized the president to impose martial law and suspend the writ of habeas corpus. Grant was so adamant about the passage of the Ku Klux Klan Act that he had earlier sent a message to Congress, on March 23, 1871, in which he said: "A condition of affairs now exists in some of the States of the Union rendering life and property insecure, and the carrying of the mails and the collection of the revenue dangerous. The proof that such a condition of affairs exists in some localities is now before the Senate. That the power to correct these evils is beyond the control of State authorities, I do not doubt. That the power of the Executive of the United States, acting within the limits of existing laws, is sufficient for present emergencies, is not clear." Grant also recommended the enforcement of laws in all parts of the United States to protect life, liberty, and property.

Prosecuted Ku Klux Klan

Grant's Justice Department destroyed the Ku Klux Klan, though during both of his terms Blacks lost political strength in the Southern United States. By October 1871, Grant had suspended habeas corpus in part of South Carolina, and he also sent federal troops to help marshals, who initiated prosecutions of Klan members. Grant's attorney general, Amos T. Akerman, who replaced Hoar, was zealous in his attempt to destroy the Klan. Akerman and South Carolina's U.S. marshal arrested over 470 Klan members, but hundreds of Klansmen, including the Klan's wealthy leaders, fled the state. Akerman returned over 3,000 indictments against the Klan throughout the South and obtained 600 convictions of the worst offenders. By 1872, Grant had crushed the Klan, and African Americans peacefully voted in record numbers in elections in the South. Attorney General George H. Williams, Akerman's replacement, suspended his prosecutions of the Klan in North Carolina and South Carolina in the spring of 1873, but prior to the election of 1874 he changed course and prosecuted the Klan. Civil rights prosecutions continued, but with fewer cases and convictions each year.

Amnesty Act of 1872

In addition to fighting for African American civil rights, Grant wanted to reconcile with white Southerners in the spirit of Appomattox. To placate the South, in May 1872, Grant signed the Amnesty Act, which restored political rights to former Confederates, except for a few hundred former Confederate officers. Grant wanted people to vote and practice free speech regardless of their "views, color or nativity."

Civil Rights Act of 1875

The Civil Rights Act of 1875 was one of the last major acts of Congress and Grant to preserve Reconstruction and equality for African Americans. The initial bill was created by Senator Charles Sumner. Grant endorsed the measure, despite his previous feud with Sumner, signing it into law on March 1, 1875.
The law, ahead of its time, outlawed discrimination against blacks in public accommodations, schools, transportation, and jury selection. Although weakly enforced, the law spread fear among whites opposed to interracial justice; it was overturned by the Supreme Court in 1883. The Civil Rights Act of 1964, which proved enforceable, borrowed many of the 1875 law's provisions.

Countered election fraud

To counter vote fraud in the Democratic stronghold of New York City, Grant sent in tens of thousands of armed, uniformed federal marshals and other election officials to regulate the 1870 and subsequent elections. Democrats across the North then mobilized to defend their base and attacked Grant's entire set of policies. On October 21, 1876, President Grant deployed troops to protect Black and White Republican voters in Petersburg, Virginia.

National support of Reconstruction declines

Grant's support from Congress and the nation declined due to scandals within his administration and the political resurgence of the Democrats in the North and South. By 1870, most Republicans felt the war goals had been achieved, and they turned their attention to other issues such as economic policies.

African American officeholders

Republicans took control of all Southern state governorships and state legislatures, except for Virginia. The Republican coalition elected numerous African Americans to local, state, and national offices; though they did not dominate any electoral offices, Black men voting as representatives in state and federal legislatures marked a drastic social change. At the beginning of 1867, no African American in the South held political office, but within three or four years "about 15 percent of the officeholders in the South were Black—a larger proportion than in 1990". Most of those offices were at the local level. In 1860, Blacks had constituted the majority of the population in Mississippi and South Carolina, 47% in Louisiana, 45% in Alabama, and 44% in Georgia and Florida, so their political influence was still far less than their percentage of the population.

About 137 Black officeholders had lived outside the South before the Civil War. Some who had escaped from slavery to the North and had become educated returned to help the South advance in the postbellum era. Others were free people of color before the war who had achieved education and positions of leadership elsewhere. Other African American men elected to office were already leaders in their communities, including a number of preachers. As happened in White communities, not all leadership depended upon wealth and literacy.

There were few African Americans elected or appointed to national office. African Americans voted for both White and Black candidates. The Fifteenth Amendment to the United States Constitution guaranteed only that voting could not be restricted on the basis of race, color, or previous condition of servitude. From 1868 on, campaigns and elections were surrounded by violence as White insurgents and paramilitaries tried to suppress the Black vote, and fraud was rampant. Many congressional elections in the South were contested. Even states with majority-African-American populations often elected only one or two African American representatives to Congress. Exceptions included South Carolina; at the end of Reconstruction, four of its five congressmen were African Americans.
Social and economic factors

Freedmen were very active in forming their own churches, mostly Baptist or Methodist, and giving their ministers both moral and political leadership roles. In a process of self-segregation, practically all Blacks left White churches, so that few racially integrated congregations remained (apart from some Catholic churches in Louisiana). They started many new Black Baptist churches and, soon, new Black state associations.

Four main groups competed with each other across the South to form new Methodist churches composed of freedmen: the African Methodist Episcopal Church and the African Methodist Episcopal Zion Church, independent Black denominations founded in Philadelphia and New York, respectively; the Colored Methodist Episcopal Church (sponsored by the White Methodist Episcopal Church, South); and the well-funded Methodist Episcopal Church (predominantly White Methodists of the North). The Methodist Church had split before the war due to disagreements about slavery. By 1871, the Northern Methodists had 88,000 Black members in the South and had opened numerous schools for them.

Blacks in the South made up a core element of the Republican Party. Their ministers had powerful political roles that were distinctive, since they did not depend on White support, in contrast to teachers, politicians, businessmen, and tenant farmers. They acted on the principle stated by Charles H. Pearce, an AME minister in Florida: "A man in this state cannot do his whole duty as a minister except he looks out for the political interests of his people." More than 100 Black ministers were elected to state legislatures during Reconstruction, as well as several to Congress and one, Hiram Rhodes Revels, to the U.S. Senate.

In a highly controversial action during the war, the Northern Methodists had used the Army to seize control of Methodist churches in large cities, over the vehement protests of the Southern Methodists. Historian Ralph Morrow reports:

A War Department order of November 1863, applicable to the Southwestern states of the Confederacy, authorized the Northern Methodists to occupy "all houses of worship belonging to the Methodist Episcopal Church South in which a loyal minister, appointed by a loyal bishop of said church, does not officiate."

Across the North, several denominations—especially the Methodists, Congregationalists, and Presbyterians, as well as the Quakers—strongly supported Radical policies. The focus on social problems paved the way for the Social Gospel movement. Matthew Simpson, a Methodist bishop, played a leading role in mobilizing the Northern Methodists for the cause. Biographer Robert D. Clark called him the "High Priest of the Radical Republicans". The Methodist Ministers Association of Boston, meeting two weeks after Lincoln's assassination, called for a hard line against the Confederate leadership:

Resolved, that no terms should be made with traitors, no compromise with rebels.... That we hold the national authority bound by the most solemn obligation to God and man to bring all the civil and military leaders of the rebellion to trial by due course of law, and when they are clearly convicted, to execute them.

The denominations all sent missionaries, teachers and activists to the South to help the freedmen. Only the Methodists made many converts, however.
Activists sponsored by the Northern Methodist Church played a major role in the Freedmen's Bureau, notably in such key educational roles as the bureau's state superintendent or assistant superintendent of education for Virginia, Florida, Alabama, and South Carolina. Many Americans interpreted great events in religious terms. Historian Wilson Fallin Jr. contrasts the interpretation of the Civil War and Reconstruction in White versus Black Baptist sermons in Alabama. White Baptists expressed the view that: God had chastised them and given them a special mission—to maintain orthodoxy, strict biblicism, personal piety, and traditional race relations. Slavery, they insisted, had not been sinful. Rather, emancipation was a historical tragedy and the end of Reconstruction was a clear sign of God's favor. In sharp contrast, Black Baptists interpreted the Civil War, emancipation, and Reconstruction as: God's gift of freedom. They appreciated opportunities to exercise their independence, to worship in their own way, to affirm their worth and dignity, and to proclaim the fatherhood of God and the brotherhood of man. Most of all, they could form their own churches, associations, and conventions. These institutions offered self-help and racial uplift, and provided places where the gospel of liberation could be proclaimed. As a result, black preachers continued to insist that God would protect and help them; God would be their rock in a stormy land. Historian James D. Anderson argues that the freed slaves were the first Southerners "to campaign for universal, state-supported public education". Blacks in the Republican coalition played a critical role in establishing the principle in state constitutions for the first time during congressional Reconstruction. Some slaves had learned to read from White playmates or colleagues before formal education was allowed by law; African Americans started "native schools" before the end of the war; Sabbath schools were another widespread means that freedmen developed to teach literacy. When they gained suffrage, Black politicians took this commitment to public education to state constitutional conventions. The Republicans created a system of public schools, which were segregated by race everywhere except New Orleans. Generally, elementary and a few secondary schools were built in most cities, and occasionally in the countryside, but the South had few cities. The rural areas faced many difficulties opening and maintaining public schools. In the country, the public school was often a one-room affair that attracted about half the younger children. The teachers were poorly paid, and their pay was often in arrears. Conservatives contended the rural schools were too expensive and unnecessary for a region where the vast majority of people were cotton or tobacco farmers. They had no expectation of better education for their residents. One historian found that the schools were less effective than they might have been because "poverty, the inability of the states to collect taxes, and inefficiency and corruption in many places prevented successful operation of the schools". After Reconstruction ended and White elected officials disenfranchised Blacks and imposed Jim Crow laws, they consistently underfunded Black institutions, including the schools. After the war, Northern missionaries founded numerous private academies and colleges for freedmen across the South. In addition, every state founded state colleges for freedmen, such as Alcorn State University in Mississippi. 
The normal schools and state colleges produced generations of teachers who were integral to the education of African American children under the segregated system. By the end of the century, the majority of African Americans were literate.

In the late 19th century, the federal government established land grant legislation to provide funding for higher education across the United States. Learning that Blacks were excluded from land grant colleges in the South, in 1890 the federal government insisted that Southern states establish Black state institutions as land grant colleges to provide for Black higher education, in order to continue to receive funds for their already established White schools. Some states classified their Black state colleges as land grant institutions. Former Congressman John Roy Lynch wrote: "there are very many liberal, fair-minded and influential Democrats in the state [Mississippi] who are strongly in favor of having the state provide for the liberal education of both races".

According to a 2020 study by economist Trevon Logan, increases in Black politicians led to greater tax revenue, which was put towards public education spending (and land tenancy reforms). Logan finds that this led to greater literacy among Black men.

Railroad subsidies and payoffs

Every Southern state subsidized railroads, which modernizers believed could haul the South out of isolation and poverty. Millions of dollars in bonds and subsidies were fraudulently pocketed. One ring in North Carolina spent $200,000 bribing the legislature and obtained millions of state dollars for its railroads. Instead of building new track, however, it used the funds to speculate in bonds, reward friends with extravagant fees, and enjoy lavish trips to Europe. Taxes were quadrupled across the South to pay off the railroad bonds and the school costs. Taxpayers complained, because taxes had historically been much lower in the South than in the North, reflecting the planter elite's lack of commitment to public infrastructure and public education.

Nevertheless, thousands of miles of lines were built as the Southern system expanded from 11,000 miles (18,000 km) in 1870 to 29,000 miles (47,000 km) in 1890. The lines were owned and directed overwhelmingly by Northerners. Railroads helped create a mechanically skilled group of craftsmen and broke the isolation of much of the region. Passengers were few, however, and apart from hauling the cotton crop when it was harvested, there was little freight traffic. As Franklin explains: "numerous railroads fed at the public trough by bribing legislators ... and through the use and misuse of state funds". According to one businessman, the effect "was to drive capital from the state, paralyze industry, and demoralize labor".

Taxation during Reconstruction

Reconstruction changed the means of taxation in the South. In the U.S., from the earliest days until today, a major source of state revenue has been the property tax. In the South, wealthy landowners were allowed to self-assess the value of their own land, and these self-assessments were fraudulently low; prewar property tax collections fell short because property values were misrepresented. State revenues came from fees and from sales taxes on slave auctions. Some states assessed property owners by a combination of land value and a capitation tax, a tax on each worker employed.
This tax was often assessed in ways designed to discourage a free labor market: a slave might be assessed at 75 cents, while a free White was assessed at a dollar or more, and a free African American at $3 or more. Some revenue also came from poll taxes. These taxes were more than poor people could pay, with the designed and inevitable consequence that they did not vote.

During Reconstruction, the state legislatures mobilized to provide for public need more than previous governments had: establishing public schools and investing in infrastructure, as well as charitable institutions such as hospitals and asylums. They set out to increase taxes, which were unusually low; the planters had provided privately for their own needs. There was some fraudulent spending in the postbellum years, and a collapse in state credit because of huge deficits forced the states to increase property tax rates. In places, the rate went up to 10 times its former level, despite the poverty of the region. The planters had not invested in infrastructure, and much of what existed had been destroyed during the war.

In part, the new tax system was designed to force owners of large plantations with huge tracts of uncultivated land either to sell or to have it confiscated for failure to pay taxes. The taxes would serve as a market-based system for redistributing the land to the landless freedmen and White poor. Mississippi, for instance, was mostly frontier, with 90% of the bottom lands in the interior undeveloped.

The following table shows property tax rates for South Carolina and Mississippi. A mill is one-thousandth of a dollar of tax per dollar of assessed value, so 5 mills equals 0.5%, or $5 of tax on every $1,000 of assessed property. Note that many local town and county assessments effectively doubled the tax rates reported in the table. These taxes were still levied upon the landowners' own sworn testimony as to the value of their land, which remained the dubious and exploitable system used by wealthy landholders in the South well into the 20th century.

| Year | South Carolina | Mississippi |
| 1869 | 5 mills (0.5%) | 1 mill (0.1%) (lowest rate between 1822 and 1898) |
| 1870 | 9 mills | 5 mills |
| 1871 | 7 mills | 4 mills |
| 1872 | 12 mills | 8.5 mills |
| 1873 | 12 mills | 12.5 mills |
| 1874 | 10.3–8 mills | 14 mills (1.4%), "a rate which virtually amounted to confiscation" (highest rate between 1822 and 1898) |
| Sources | Reynolds, J. S. (1905). Reconstruction in South Carolina, 1865–1877. Columbia, SC: The State Co. p. 329. | Hollander, J. H. (1900). Studies in State Taxation with Particular Reference to the Southern States. Baltimore: Johns Hopkins Press. p. 192. |

Called upon to pay taxes on their property, essentially for the first time, angry plantation owners revolted. The conservatives shifted their focus away from race to taxes. Former Congressman John R. Lynch, a Black Republican leader from Mississippi, later wrote:

The argument made by the taxpayers, however, was plausible and it may be conceded that, upon the whole, they were about right; for no doubt it would have been much easier upon the taxpayers to have increased at that time the interest-bearing debt of the state than to have increased the tax rate.

National financial issues

The Civil War had been financed primarily by issuing short-term and long-term bonds and loans, plus inflation caused by printing paper money, plus new taxes. Wholesale prices had more than doubled, and reduction of inflation was a priority for Secretary McCulloch. A high priority, and by far the most controversial, was the currency question.
The old paper currency issued by state banks had been withdrawn, and Confederate currency was worthless. The national banks had issued $207 million in currency, which was backed by gold and silver. The federal treasury had issued $428 million in greenbacks, which were legal tender but not backed by gold or silver. In addition, about $275 million of coin was in circulation. The new administration policy, announced in October, would be to make all the paper convertible into specie, if Congress so voted. The House of Representatives passed the Alley Resolution on December 18, 1865, by a vote of 144 to 6. In the Senate it was a different matter, for the key player was Senator John Sherman, who said that contraction of the currency was not nearly as important as refunding the short-term and long-term national debt.

The war had been largely financed by national debt, in addition to taxation and inflation. The national debt stood at $2.8 billion by October 1865, most of it in short-term and temporary loans. Wall Street bankers, typified by Jay Cooke, believed that the economy was about to grow rapidly, thanks to the development of agriculture through the Homestead Act, the expansion of railroads (especially rebuilding the devastated Southern railroads and opening the transcontinental railroad line to the West Coast), and especially the flourishing of manufacturing during the war. The gold premium over greenbacks stood at $145 in greenbacks to $100 in gold (that is, a greenback dollar was worth only about 69 cents in gold), and the optimists thought that the heavy demand for currency in an era of prosperity would return the ratio to 100.

A compromise was reached in April 1866 that limited the treasury to a currency contraction of only $10 million over six months. Meanwhile, the Senate refunded the entire national debt, but the House failed to act. By early 1867, postbellum prosperity was a reality, and the optimists wanted an end to contraction, which Congress ordered in January 1868. Meanwhile, the Treasury issued new bonds at a lower interest rate to refinance the redemption of short-term debt. While the old state bank notes were disappearing from circulation, new national bank notes, backed by specie, were expanding. By 1868 inflation was minimal.

Congressional investigation into Reconstruction states, 1872

On April 20, 1871, the same day the Ku Klux Klan Act (the last of the three Enforcement Acts) was passed, the U.S. Congress launched a 21-member investigative committee on the status of the Southern Reconstruction states of North Carolina, South Carolina, Georgia, Mississippi, Alabama, and Florida. Congressional members on the committee included Rep. Benjamin Butler, Sen. Zachariah Chandler, and Sen. Francis P. Blair. Subcommittee members traveled into the South to interview the people living in their respective states. Those interviewed included top-ranking officials, such as Wade Hampton III, former South Carolina Gov. James L. Orr, and Nathan Bedford Forrest, a former Confederate general and prominent Ku Klux Klan leader (in his congressional testimony, Forrest denied being a member). Other Southerners interviewed included farmers, doctors, merchants, teachers, and clergymen. The committee heard numerous reports of White violence against Blacks, while many Whites denied Klan membership or knowledge of violent activities. The majority report by the Republicans concluded that the government would not tolerate any Southern "conspiracy" to resist congressional Reconstruction violently. The committee completed its 13-volume report in February 1872.
While President Ulysses S. Grant had been able to suppress the KKK through the Enforcement Acts, other paramilitary insurgents organized, including the White League in 1874, active in Louisiana, and the Red Shirts, with chapters active in Mississippi and the Carolinas. They used intimidation and outright attacks to run Republicans out of office and repress voting by Blacks, leading to White Democrats regaining power by the elections of the mid-to-late 1870s. While the scalawag element of Republican Whites supported measures for Black civil rights, the conservative Whites typically opposed these measures. Some supported armed attacks to suppress Blacks. They self-consciously defended their own actions within the framework of a White American discourse of resistance against tyrannical government, and they broadly succeeded in convincing many fellow White citizens, says Steedman.

The opponents of Reconstruction formed state political parties, affiliated with the national Democratic Party and often named the "Conservative Party". They supported or tolerated violent paramilitary groups, such as the White League in Louisiana and the Red Shirts in Mississippi and the Carolinas, that assassinated and intimidated both Black and White Republican leaders at election time. Historian George C. Rable called such groups the "military arm of the Democratic Party". By the mid-1870s, the conservatives and Democrats had aligned with the national Democratic Party, which enthusiastically supported their cause even as the national Republican Party was losing interest in Southern affairs.

The Negro troops, even at their best, were everywhere considered offensive by the native whites.... The Negro soldier, impudent by reason of his new freedom, his new uniform, and his new gun, was more than Southern temper could tranquilly bear, and race conflicts were frequent.

Often, these White Southerners identified as the "Conservative Party" or the "Democratic and Conservative Party" in order to distinguish themselves from the national Democratic Party and to obtain support from former Whigs. These parties sent delegates to the 1868 Democratic National Convention and abandoned their separate names by 1873 or 1874. Most White members of both the planter and business class and the common farmer class of the South opposed Black civil rights, carpetbaggers, and military rule, and sought white supremacy. Democrats nominated some Blacks for political office and tried to entice other Blacks from the Republican side. When these attempts to combine with the Blacks failed, the planters joined the common farmers in simply trying to displace the Republican governments. The planters and their business allies dominated the self-styled "conservative" coalition that finally took control in the South. They were paternalistic toward the Blacks but feared they would use power to raise taxes and slow business development.

Fleming described the first results of the insurgent movement as "good", and the later ones as "both good and bad". According to Fleming (1907), the KKK "quieted the Negroes, made life and property safer, gave protection to women, stopped burnings, forced the Radical leaders to be more moderate, made the Negroes work better, drove the worst of the Radical leaders from the country and started the whites on the way to gain political supremacy". The evil result, Fleming said, was that lawless elements "made use of the organization as a cloak to cover their misdeeds ...
The lynching habits of today are largely due to conditions, social and legal, growing out of Reconstruction."

Historians have noted that the peak of lynchings took place near the turn of the century, decades after Reconstruction ended, as Whites were imposing Jim Crow laws and passing new state constitutions that disenfranchised the Blacks. The lynchings were used for intimidation and social control, with a frequency associated more with economic stresses and the settlement of sharecropper accounts at the end of the season than with any other cause.

Outrages upon the former slaves in the South there were in plenty. Their sufferings were many. But white men, too, were victims of lawless violence, and in all portions of the North and the late "rebel" states. Not a political campaign passed without the exchange of bullets, the breaking of skulls with sticks and stones, the firing of rival club-houses. Republican clubs marched the streets of Philadelphia, amid revolver shots and brickbats, to save the Negroes from the "rebel" savages in Alabama.... The project to make voters out of black men was not so much for their social elevation as for the further punishment of the Southern white people—for the capture of offices for Radical scamps and the entrenchment of the Radical party in power for a long time to come in the South and in the country at large.

As Reconstruction continued, Whites accompanied elections with increased violence in an attempt to run Republicans out of office and suppress Black voting. The victims of this violence were overwhelmingly African American, as in the Colfax Massacre of 1873. After federal suppression of the Klan in the early 1870s, White insurgent groups tried to avoid open conflict with federal forces. In 1874, in the Battle of Liberty Place, the White League entered New Orleans with 5,000 members and defeated the police and militia, occupying federal offices for three days in an attempt to overturn the disputed government of William Pitt Kellogg, but it retreated before federal troops reached the city. None were prosecuted. Their election-time tactics included violent intimidation of African American and Republican voters prior to elections, while avoiding conflict with the U.S. Army or the state militias, and then withdrawing completely on election day.

Conservative reaction continued in both the North and South; the White Liners movement to elect candidates dedicated to White supremacy reached as far as Ohio in 1875.

The Redeemers were the Southern wing of the Bourbon Democrats, the conservative, pro-business faction of the Democratic Party. They sought to regain political power, reestablish White supremacy, and oust the Radical Republicans. Led by rich former planters, businessmen, and professionals, they dominated Southern politics in most areas from the 1870s to 1910.

Republicans split nationally: election of 1872

Congress was right in not limiting, by its Reconstruction acts, the right of suffrage to Whites; but wrong in the exclusion from suffrage of certain classes of citizens and all unable to take its prescribed retrospective oath, and wrong also in the establishment of despotic military governments for the states and in authorizing military commissions for the trial of civilians in time of peace.
There should have been as little military government as possible; no military commissions; no classes excluded from suffrage; and no oath except one of faithful obedience and support to the Constitution and laws, and of sincere attachment to the constitutional government of the United States.

By 1872, President Ulysses S. Grant had alienated large numbers of leading Republicans, including many Radicals, by the corruption of his administration and his use of federal soldiers to prop up Radical state regimes in the South. The opponents, called "Liberal Republicans", included founders of the party who expressed dismay that the party had succumbed to corruption. They were further wearied by the continued insurgent violence of Whites against Blacks in the South, especially around every election cycle, which demonstrated that the war was not over and that changes were fragile. Leaders included editors of some of the nation's most powerful newspapers. Charles Sumner, embittered by the corruption of the Grant administration, joined the new party, which nominated editor Horace Greeley. The loosely organized Democratic Party also supported Greeley.

Grant made up for the defections with new gains among Union veterans and with strong support from the "Stalwart" faction of his party (which depended on his patronage) and from the Southern Republican Party. Grant won with 55.6% of the vote to Greeley's 43.8%. The Liberal Republican Party vanished, and many former supporters—even former abolitionists—abandoned the cause of Reconstruction.

The Republican coalition splinters in the South

In the South, political and racial tensions built up inside the Republican Party as it was attacked by the Democrats. In 1868, Georgia Democrats, with support from some Republicans, expelled all 28 Black Republican members from the state house, arguing that Blacks were eligible to vote but not to hold office. In most states, the more conservative scalawags fought for control with the more Radical carpetbaggers and their Black allies. Most of the 430 Republican newspapers in the South were edited by scalawags; only 20 percent were edited by carpetbaggers. White businessmen generally boycotted Republican papers, which survived through government patronage.

Nevertheless, in the increasingly bitter battles inside the Republican Party, the scalawags usually lost; many of the disgruntled losers switched over to the conservative or Democratic side. In Mississippi, the conservative faction led by scalawag James Lusk Alcorn was decisively defeated by the Radical faction led by carpetbagger Adelbert Ames. The party lost support steadily as many scalawags left it; few recruits were acquired. The most bitter contest took place inside the Republican Party in Arkansas, where the two sides armed their forces and confronted each other in the streets, though no actual combat took place in the Brooks–Baxter War. The carpetbagger faction led by Elisha Baxter finally prevailed when the White House intervened, but both sides were badly weakened, and the Democrats soon came to power.

Meanwhile, in state after state the freedmen were demanding a bigger share of the offices and patronage, squeezing out carpetbagger allies but never commanding numbers equivalent to their proportion of the population. By the mid-1870s, "the hard realities of Southern political life had taught the lesson that black constituents needed to be represented by black officials." The financial depression increased the pressure on Reconstruction governments, eroding their progress.
Finally, some of the more prosperous freedmen were joining the Democrats, angered at the failure of the Republicans to help them acquire land. The South was "sparsely settled"; only 10 percent of Louisiana was cultivated, and 90 percent of Mississippi bottom land was undeveloped in areas away from the river fronts, but freedmen often did not have the stake to get started. They had hoped that the government would help them acquire land which they could work. Only South Carolina created any land redistribution, establishing a land commission and resettling about 14,000 freedmen families and some poor Whites on land purchased by the state.

Although historians such as W. E. B. Du Bois celebrated a cross-racial coalition of poor Whites and Blacks, such coalitions rarely formed in these years. Writing in 1915, former Congressman Lynch, recalling his experience as a Black leader in Mississippi, explained that:

While the colored men did not look with favor upon a political alliance with the poor whites, it must be admitted that, with very few exceptions, that class of whites did not seek, and did not seem to desire such an alliance.

Lynch reported that poor Whites resented the job competition from freedmen. Furthermore, the poor Whites, "with a few exceptions, were less efficient, less capable, and knew less about matters of state and governmental administration than many of the former slaves.... As a rule, therefore, the Whites that came into the leadership of the Republican Party between 1872 and 1875 were representatives of the most substantial families of the land."

Democrats try a "New Departure"

By 1870, the Democratic–Conservative leadership across the South decided it had to end its opposition to Reconstruction and Black suffrage to survive and move on to new issues. The Grant administration had proven by its crackdown on the Ku Klux Klan that it would use as much federal power as necessary to suppress open anti-Black violence. Democrats in the North concurred with these Southern Democrats. They wanted to fight the Republican Party on economic grounds rather than race. The New Departure offered the chance for a clean slate without having to re-fight the Civil War every election. Furthermore, many wealthy Southern landowners thought they could control part of the newly enfranchised Black electorate to their own advantage.

Not all Democrats agreed; an insurgent element continued to resist Reconstruction no matter what. Eventually, a group called the "Redeemers" took control of the party in the Southern states. They formed coalitions with conservative Republicans, including scalawags and carpetbaggers, emphasizing the need for economic modernization. Railroad building was seen as a panacea, since Northern capital was needed. The new tactics were a success in Virginia, where William Mahone built a winning coalition. In Tennessee, the Redeemers formed a coalition with Republican Governor DeWitt Clinton Senter.

Across the South, some Democrats switched from the race issue to taxes and corruption, charging that Republican governments were corrupt and inefficient. With a continuing decrease in cotton prices, taxes squeezed cash-poor farmers who rarely saw $20 in currency a year but had to pay taxes in currency or lose their farms. Major planters, however, who had never paid taxes before, often recovered their property even after confiscation. In North Carolina, Republican Governor William Woods Holden used state troops against the Klan, but the prisoners were released by federal judges.
Holden became the first governor in American history to be impeached and removed from office. Republican political disputes in Georgia split the party and enabled the Redeemers to take over. In the North, a live-and-let-live attitude made elections more like a sporting contest. But in the Deep South, many White citizens had not reconciled with the defeat of the war or the granting of citizenship to freedmen. As an Alabama scalawag explained: "Our contest here is for life, for the right to earn our bread, ... for a decent and respectful consideration as human beings and members of society."

Panic of 1873

The Panic of 1873 (a depression) hit the Southern economy hard and disillusioned many Republicans who had gambled that railroads would pull the South out of its poverty. The price of cotton fell by half; many small landowners, local merchants, and cotton factors (wholesalers) went bankrupt. Sharecropping for Black and White farmers became more common as a way to spread the risk of owning land. The old abolitionist element in the North was aging away, or had lost interest, and was not replenished. Many carpetbaggers returned to the North or joined the Redeemers. Blacks had an increased voice in the Republican Party, but across the South it was divided by internal bickering and was rapidly losing its cohesion. Many local Black leaders started emphasizing individual economic progress in cooperation with White elites, rather than racial political progress in opposition to them, a conservative attitude that foreshadowed Booker T. Washington.

Nationally, President Grant was blamed for the depression; the Republican Party lost 96 seats in all parts of the country in the 1874 elections. The Bourbon Democrats took control of the House and were confident of electing Samuel J. Tilden president in 1876. President Grant was not running for re-election and seemed to be losing interest in the South. States fell to the Redeemers, with only four in Republican hands in 1873: Arkansas, Louisiana, Mississippi, and South Carolina. Arkansas then fell after the violent Brooks–Baxter War in 1874 ripped apart the Republican Party there.

In the lower South, violence increased as new insurgent groups arose, including the Red Shirts in Mississippi and the Carolinas, and the White League in Louisiana. The disputed election in Louisiana in 1872 found both Republican and Democratic candidates holding inaugural balls while returns were reviewed; both certified their own slates for local parish offices in many places, causing local tensions to rise. Finally, federal support helped certify the Republican as governor.

In rural Grant Parish in the Red River Valley, freedmen fearing a Democratic attempt to take over the parish government reinforced defenses at the small Colfax courthouse in late March 1873. White militias gathered a few miles outside the settlement. Rumors and fears abounded on both sides. William Ward, an African American Union veteran and militia captain, mustered his company in Colfax and went to the courthouse. On Easter Sunday, April 13, 1873, the Whites attacked the defenders at the courthouse. There was confusion about who shot one of the White leaders after an offer by the defenders to surrender. It was a catalyst to mayhem. In the end, three Whites died and 120–150 Blacks were killed, some 50 of them that evening while being held as prisoners.
The disproportionate number of Black fatalities relative to White ones, and the documentation of brutalized bodies, are why contemporary historians call it the Colfax Massacre rather than the Colfax Riot, as it was known locally. It marked the beginning of heightened insurgency and attacks on Republican officeholders and freedmen in Louisiana and other Deep South states.

In Louisiana, Judge T. S. Crawford and District Attorney P. H. Harris of the 12th Judicial District were shot off their horses and killed by ambush on October 8, 1873, while going to court. One widow wrote to the Department of Justice that her husband had been killed because he was a Union man, and told of "the efforts made to screen those who committed a crime".

Political violence was endemic in Louisiana. In 1874, the White militias coalesced into paramilitary organizations such as the White League, first in parishes of the Red River Valley. The new organization operated openly and had political goals: the violent overthrow of Republican rule and suppression of Black voting. White League chapters soon arose in many rural parishes, receiving financing for advanced weaponry from wealthy men. In the Coushatta Massacre in 1874, the White League assassinated six White Republican officeholders and five to 20 Black witnesses outside Coushatta, Red River Parish. Four of the White men were related to the Republican representative of the parish, who was married to a local woman; three were native to the region.

Later in 1874, the White League mounted a serious attempt to unseat the Republican governor of Louisiana, in a dispute that had simmered since the 1872 election. It brought 5,000 men to New Orleans to engage and overwhelm forces of the metropolitan police and state militia, to turn Republican Governor William P. Kellogg out of office, and to seat John McEnery. The White League took over and held the state house and city hall, but they retreated before the arrival of reinforcing federal troops. Kellogg had asked for reinforcements before, and Grant finally responded, sending additional troops to try to quell violence throughout plantation areas of the Red River Valley, although 2,000 troops were already in the state.

Similarly, the Red Shirts, another paramilitary group, arose in 1875 in Mississippi and the Carolinas. Like the White League and the White Liner rifle clubs, to which 20,000 men belonged in North Carolina alone, these groups operated as a "military arm of the Democratic Party" to restore White supremacy.

Democrats and many Northern Republicans agreed that Confederate nationalism and slavery were dead—the war goals were achieved—and that further federal military interference was an undemocratic violation of historical Republican values. The victory of Rutherford B. Hayes in the hotly contested Ohio gubernatorial election of 1875 indicated that his "let alone" policy toward the South would become Republican policy, as happened when he won the 1876 Republican nomination for president. An explosion of violence accompanied the campaign for Mississippi's 1875 election, in which Red Shirts and Democratic rifle clubs, operating in the open, threatened or shot enough Republicans to decide the election for the Democrats. Hundreds of Black men were killed. Republican Governor Adelbert Ames asked Grant for federal troops to fight back; Grant initially refused, saying public opinion was "tired out" by the perpetual troubles in the South. Ames fled the state as the Democrats took over Mississippi.
The campaigns and elections of 1876 were marked by additional murders of and attacks on Republicans in Louisiana, North Carolina, South Carolina, and Florida. In South Carolina, the campaign season of 1876 was marked by murderous outbreaks and fraud against freedmen. Red Shirts paraded with arms behind Democratic candidates, and they killed Blacks in the Hamburg and Ellenton, South Carolina, massacres. One historian estimated that 150 Blacks were killed across South Carolina in the weeks before the 1876 election. Red Shirts prevented almost all Black voting in two majority-Black counties. The Red Shirts were also active in North Carolina.

A 2019 study found that counties that were occupied by the U.S. Army to enforce enfranchisement of emancipated slaves were more likely to elect Black politicians. The study also found that "political murders by White-supremacist groups occurred less frequently" in these counties than in Southern counties that were not occupied.

Election of 1876

Reconstruction continued in South Carolina, Louisiana, and Florida until 1877. The elections of 1876 were accompanied by heightened violence across the Deep South. A combination of ballot stuffing and intimidation of Blacks suppressed their vote, even in majority-Black counties. The White League was active in Louisiana. After Republican Rutherford B. Hayes won the disputed 1876 presidential election, the national Compromise of 1877 (a corrupt bargain) was reached. The White Democrats in the South agreed to accept Hayes' victory if he withdrew the last federal troops. By this point, the North was weary of insurgency. White Democrats controlled most of the Southern legislatures, and armed militias controlled small towns and rural areas. Blacks considered Reconstruction a failure because the federal government withdrew from enforcing their ability to exercise their rights as citizens.

Hayes ends Reconstruction

On January 29, 1877, President Grant signed the Electoral Commission Act, which set up a 15-member commission of eight Republicans and seven Democrats to settle the disputed 1876 election. Since the Constitution did not explicitly indicate how Electoral College disputes were to be resolved, Congress was forced to consider other methods to settle the crisis. Many Democrats argued that Congress as a whole should determine which certificates to count. However, the chances that this method would result in a harmonious settlement were slim, as the Democrats controlled the House while the Republicans controlled the Senate. Several Hayes supporters, on the other hand, argued that the President pro tempore of the Senate had the authority to determine which certificates to count, because he was responsible for chairing the congressional session at which the electoral votes were to be tallied. Since the office of president pro tempore was occupied by a Republican, Senator Thomas W. Ferry of Michigan, this method would have favored Hayes. Still others proposed that the matter should be settled by the Supreme Court.

In a stormy session that began on March 1, 1877, the House debated an objection to the count for about twelve hours before overruling it. Immediately, another spurious objection was raised, this time to the electoral votes from Wisconsin. Again, the Senate voted to overrule the objection, while a filibuster was conducted in the House. However, the Speaker of the House, Democrat Samuel J. Randall, refused to entertain dilatory motions.
Eventually, the filibusterers gave up, allowing the House to reject the objection in the early hours of March 2. The House and Senate then reassembled to complete the count of the electoral votes. At 4:10 am on March 2, Senator Ferry announced that Hayes and Wheeler had been elected to the presidency and vice presidency by an electoral margin of 185–184.

The Democrats agreed not to block Hayes' inauguration on the basis of a "back room" deal. Key to this deal was the understanding that federal troops would no longer interfere in Southern politics, despite substantial election-associated violence against Blacks. The Southern states indicated that they would protect the lives of African Americans; however, such promises were largely not kept. Hayes' friends also let it be known that he would promote federal aid for internal improvements, including help with a railroad in Texas (which never happened), and that he would name a Southerner to his cabinet (this did happen). With the end of the political role of Northern troops, the president had no method to enforce Reconstruction; thus, this "back room" deal signaled the end of American Reconstruction.

After assuming office on March 4, 1877, President Hayes removed troops from the capitals of the remaining Reconstruction states, Louisiana and South Carolina, allowing the Redeemers to have full control of these states. President Grant had already removed troops from Florida before Hayes was inaugurated, and troops from the other Reconstruction states had long since been withdrawn. Hayes appointed David M. Key of Tennessee, a Southern Democrat, to the position of postmaster general. By 1879, thousands of African American "Exodusters" had packed up and headed to new opportunities in Kansas.

The Democrats gained control of the Senate, giving them complete control of Congress, having taken over the House in 1875. Hayes vetoed bills from the Democrats that would have outlawed the Republican Enforcement Acts; however, with the military underfunded, Hayes could not adequately enforce these laws. African Americans remained involved in Southern politics, particularly in Virginia, which was run by the biracial Readjuster Party. Numerous African Americans were elected to local office through the 1880s, and in the 1890s, in some states, biracial coalitions of Populists and Republicans briefly held control of state legislatures. In the last decade of the 19th century, Southern states elected five Black U.S. congressmen before disenfranchising state constitutions were passed throughout the former Confederacy.

Legacy and historiography

Besides the election of Southern Black people to state governments and the United States Congress, other achievements of the Reconstruction era include "the South's first state-funded public school systems, more equitable taxation legislation, laws against racial discrimination in public transport and accommodations and ambitious economic development programs (including aid to railroads and other enterprises)." Despite these achievements, the interpretation of Reconstruction has been a topic of controversy: nearly all historians hold that Reconstruction ended in failure, but for very different reasons. The first generation of Northern historians believed that the former Confederates were traitors and that Johnson was their ally who threatened to undo the Union's constitutional achievements.
By the 1880s, however, Northern historians argued that Johnson and his allies were not traitors but had blundered badly in rejecting the Fourteenth Amendment and setting the stage for Radical Reconstruction.

The Black leader Booker T. Washington, who grew up in West Virginia during Reconstruction, concluded later that "the Reconstruction experiment in racial democracy failed because it began at the wrong end, emphasizing political means and civil rights acts rather than economic means and self-determination". His solution was to concentrate on building the economic infrastructure of the Black community, in part through his leadership of the Tuskegee Institute in the South.

Dunning School: 1900 to 1920s

The Dunning School of scholars, who were trained at the history department of Columbia University under Professor William A. Dunning, analyzed Reconstruction as a failure after 1866, though for different reasons. They claimed that Congress took freedoms and rights from qualified Whites and gave them to unqualified Blacks who were being duped by corrupt "carpetbaggers and scalawags". As T. Harry Williams (a sharp critic of the Dunning School) noted, the Dunning scholars portrayed the era in stark terms:

Reconstruction was a battle between two extremes: the Democrats, as the group which included the vast majority of the whites, standing for decent government and racial supremacy, versus the Republicans, the Negroes, alien carpetbaggers, and renegade scalawags, standing for dishonest government and alien ideals. These historians wrote literally in terms of white and black.

Revisionists and Beardians, 1930s–1940s

In the 1930s, historical revisionism became popular among scholars. As disciples of Charles A. Beard, revisionists focused on economics, downplaying politics and constitutional issues. The central figure was a young scholar at the University of Wisconsin, Howard K. Beale, who in his PhD dissertation, finished in 1924, developed a complex new interpretation of Reconstruction. The Dunning School had portrayed the freedmen as mere pawns in the hands of the carpetbaggers. Beale argued that the carpetbaggers themselves were pawns in the hands of Northern industrialists, who were the real villains of Reconstruction. These industrialists had taken control of the nation during the Civil War and had set up high tariffs to protect their profits, as well as a lucrative national banking system and a railroad network fueled by government subsidies and secret payoffs. The return of Southern Whites to power would seriously threaten all their gains, and so the ex-Confederates had to be kept out of power. The tool used by the industrialists was the combination of the Northern Republican Party and sufficient Southern support, using carpetbaggers and Black voters. The rhetoric of civil rights for Blacks, and the dream of equality, was in this view designed to fool idealistic voters. Beale called it "claptrap", arguing: "Constitutional discussions of the rights of the Negro, the status of Southern states, the legal position of ex-rebels, and the powers of Congress and the president determined nothing. They were pure sham." President Andrew Johnson had tried, and failed, to stop the juggernaut of the industrialists. The Dunning School had praised Johnson for upholding the rights of the White men in the South and endorsing White supremacy. Beale was not a racist, and indeed was one of the most vigorous historians working for Black civil rights in the 1930s and 1940s.
In his view, Johnson was not a hero for his racism, but rather for his forlorn battle against the industrialists. Charles A. Beard and Mary Beard had already published The Rise of American Civilization (1927) three years before Beale, and had given very wide publicity to a similar theme. The Beard–Beale interpretation of Reconstruction became known as "revisionism", and replaced the Dunning School for most historians, until the 1950s. The Beardian interpretation of the causes of the Civil War downplayed slavery, abolitionism, and issues of morality. It ignored constitutional issues of states' rights and even ignored American nationalism as the force that finally led to victory in the war. Indeed, the ferocious combat itself was passed over as merely an ephemeral event. Much more important was the calculus of class conflict. As the Beards explained in The Rise of American Civilization (1927), the Civil War was really a: social cataclysm in which the capitalists, laborers, and farmers of the North and West drove from power in the national government the planting aristocracy of the South. The Beards were especially interested in the Reconstruction era, as the industrialists of the Northeast and the farmers of the West cashed in on their great victory over the Southern aristocracy. Historian Richard Hofstadter paraphrases the Beards as arguing that in victory: the Northern capitalists were able to impose their economic program, quickly passing a series of measures on tariffs, banking, homesteads, and immigration that guaranteed the success of their plans for economic development. Solicitude for the freedmen had little to do with Northern policies. The Fourteenth Amendment, which gave the Negro his citizenship, Beard found significant primarily as a result of a conspiracy of a few legislative draftsmen friendly to corporations to use the supposed elevation of the blacks as a cover for a fundamental law giving strong protection to business corporations against regulation by state government. Wisconsin historian William Hesseltine added the point that the Northeastern businessmen wanted to control the Southern economy directly, which they did through ownership of the railroads. The Beard–Beale interpretation of the monolithic Northern industrialists fell apart in the 1950s when it was closely examined by numerous historians, including Robert P. Sharkey, Irwin Unger, and Stanley Coben. The younger scholars conclusively demonstrated that there was no unified economic policy on the part of the dominant Republican Party. Some wanted high tariffs and some low. Some wanted greenbacks and others wanted gold. There was no conspiracy to use Reconstruction to impose any such unified economic policy on the nation. Northern businessmen were widely divergent on monetary or tariff policy, and seldom paid attention to Reconstruction issues. Furthermore, the rhetoric on behalf of the rights of the freedmen was not claptrap but deeply-held and very serious political philosophy. The Black scholar W. E. B. Du Bois, in his Black Reconstruction in America, 1860–1880, published in 1935, compared results across the states to show achievements by the Reconstruction legislatures and to refute claims about wholesale African American control of governments. He showed Black contributions, as in the establishment of universal public education, charitable and social institutions and universal suffrage as important results, and he noted their collaboration with Whites. 
He also pointed out that Whites benefited most from the financial deals made, and he put excesses in the perspective of the war's aftermath. He noted that despite complaints, several states kept their Reconstruction-era state constitutions into the early 20th century. Despite receiving favorable reviews, his work was largely ignored by White historians of his time. In the 1960s, neo-abolitionist historians emerged, led by John Hope Franklin, Kenneth Stampp, Leon Litwack, and Eric Foner. Influenced by the civil rights movement, they rejected the Dunning School and found a great deal to praise in Radical Reconstruction. Foner, the primary advocate of this view, argued that it was never truly completed, and that a "Second Reconstruction" was needed in the late 20th century to complete the goal of full equality for African Americans. The neo-abolitionists followed the revisionists in minimizing the corruption and waste created by Republican state governments, saying it was no worse than Boss Tweed's ring in New York City. Instead, they emphasized that suppression of the rights of African Americans was a worse scandal, and a grave corruption of America's republican ideals. They argued that the tragedy of Reconstruction was not that it failed because Blacks were incapable of governing, especially as they did not dominate any state government, but that it failed because Whites raised an insurgent movement to restore White supremacy. White-elite-dominated state legislatures passed disenfranchising state constitutions from 1890 to 1908 that effectively barred most Blacks and many poor Whites from voting. This disenfranchisement affected millions of people for decades into the 20th century, and closed African Americans and poor Whites out of the political process in the South. Re-establishment of White supremacy meant that within a decade African Americans were excluded from virtually all local, state, and federal governance in all states of the South. Lack of representation meant that they were treated as second-class citizens, with schools and services consistently underfunded in segregated societies, no representation on juries or in law enforcement, and bias in other legislation. It was not until the civil rights movement and the passage of the Civil Rights Act of 1964 and the Voting Rights Act of 1965 that segregation was outlawed and suffrage restored, under what is sometimes referred to as the "Second Reconstruction". In 1990, Eric Foner concluded that from the Black point of view "Reconstruction must be judged a failure." Foner stated that Reconstruction was "a noble if flawed experiment, the first attempt to introduce a genuine inter-racial democracy in the United States". According to him, the many factors contributing to the failure included: lack of a permanent federal agency specifically designed for the enforcement of civil rights; the Morrison R. Waite Supreme Court decisions that dismantled previous congressional civil rights legislation; and the economic reestablishment of conservative White planters in the South by 1877. Historian William McFeely explained that although the constitutional amendments and civil rights legislation on their own merit were remarkable achievements, no permanent government agency whose specific purpose was civil rights enforcement had been created.[iv] More recent work by Nina Silber, David W. Blight, Cecelia O'Leary, Laura Edwards, LeeAnn Whites, and Edward J.
Blum has encouraged greater attention to race, religion, and issues of gender, while at the same time pushing the effective end of Reconstruction to the end of the 19th century; monographs by Charles Reagan Wilson, Gaines Foster, W. Scott Poole, and Bruce Baker have offered new views of the Southern "Lost Cause".
Dating the end of the Reconstruction era
At the national level, textbooks typically date the era from 1865 to 1877; Eric Foner's national history textbook Give Me Liberty is an example. His monograph Reconstruction: America's Unfinished Revolution, 1863–1877 (1988), focusing on the situation in the South, covers 1863 to 1877. While 1877 is the usual date given for the end of Reconstruction, some historians, such as Orville Vernon Burton, extend the era to the 1890s to include the imposition of segregation.
Economic role of race
Economists and economic historians have different interpretations of the economic impact of race on the postbellum Southern economy. In 1995, Robert Whaples took a random survey of 178 members of the Economic History Association, who studied American history in all time periods. He asked whether they wholly or partly accepted, or rejected, 40 propositions in the scholarly literature about American economic history. The greatest difference between economics PhDs and history PhDs came in questions on competition and race. For example, the proposition originally put forward by Robert Higgs, "in the post-bellum South economic competition among Whites played an important part in protecting blacks from racial coercion", was accepted in whole or in part by 66% of the economists, but by only 22% of the historians. Whaples says this highlights: "A recurring difference dividing historians and economists. The economists have more faith in the power of the competitive market. For example, they see the competitive market as protecting disenfranchised blacks and are less likely to accept the idea that there was exploitation by merchant monopolists."
The "failure" issue
Reconstruction is widely considered a failure, though the reason for this is a matter of controversy.
- The Dunning School considered failure inevitable because it felt that taking the right to vote or hold office away from Southern Whites was a violation of republicanism.
- A second school sees the reason for failure as Northern Republicans' lack of effectiveness in guaranteeing political rights to Blacks.
- A third school blames the failure on not giving land to the freedmen so they could have their own economic base of power.
- A fourth school sees the major reason for the failure of Reconstruction as the states' inability to suppress the violence of Southern Whites when they sought to reverse Blacks' gains. Etcheson (2009) points to the "violence that crushed black aspirations and the abandonment by Northern whites of Southern Republicans". Etcheson wrote that it is hard to see Reconstruction "as concluding in anything but failure", adding: "W. E. B. DuBois captured that failure well when he wrote in Black Reconstruction in America (1935): 'The slave went free; stood a brief moment in the sun; then moved back again toward slavery.'"
- Other historians emphasize the failure to fully incorporate Southern Unionists into the Republican coalition. Derek W. Frisby points to "Reconstruction's failure to appreciate the challenges of Southern Unionism and incorporate these loyal Southerners into a strategy that would positively affect the character of the peace".
Historian Donald R.
Shaffer maintained that the gains during Reconstruction for African Americans were not entirely extinguished. The legalization of African American marriages and families and the independence of Black churches from White denominations were a source of strength during the Jim Crow era. Reconstruction was never forgotten within the Black community and it remained a source of inspiration. The system of sharecropping granted Blacks a considerable amount of freedom as compared to slavery. What remains certain is that Reconstruction failed, and that for Blacks its failure was a disaster whose magnitude cannot be obscured by the genuine accomplishments that did endure. However, in 2014, historian Mark Summers argued that the "failure" question should be considered from the viewpoint of the war's goals; in that case, he wrote: If we see Reconstruction's purpose as making sure that the main goals of the war would be fulfilled, of a Union held together forever, of a North and South able to work together, of slavery extirpated, and sectional rivalries confined, of the permanent banishment of the fear of vaunting appeals to state sovereignty, backed by armed force, then Reconstruction looks like what in that respect it was, a lasting and unappreciated success.
In popular culture
The journalist Joel Chandler Harris, who wrote under the name "Joe Harris" for the Atlanta Constitution (mostly after Reconstruction), tried to advance racial and sectional reconciliation in the late 19th century. He supported Henry W. Grady's vision of a New South during Grady's time as editor from 1880 to 1889. Harris wrote many editorials in which he encouraged Southerners to accept the changed conditions along with some Northern influences, but he asserted his belief that change should proceed under White supremacy. In popular literature, two early 20th-century novels by Thomas Dixon Jr. – The Leopard's Spots: A Romance of the White Man's Burden – 1865–1900 (1902), and The Clansman: A Historical Romance of the Ku Klux Klan (1905) – idealized White resistance to Northern and Black coercion, hailing vigilante action by the Ku Klux Klan. D. W. Griffith adapted Dixon's The Clansman for the screen in his anti-Republican movie The Birth of a Nation (1915); it stimulated the formation of the 20th-century version of the KKK. Many other authors romanticized the supposed benevolence of slavery and the elite world of the antebellum plantations, in memoirs and histories which were published in the late 19th and early 20th centuries; the United Daughters of the Confederacy promoted influential works which were written in these genres by women. Of much more lasting impact was Gone with the Wind, first as the best-selling 1936 novel, which won its author Margaret Mitchell the Pulitzer Prize, and then as an award-winning Hollywood blockbuster of the same title in 1939. In each case, the second half of the story focuses on Reconstruction in Atlanta. The book sold millions of copies nationwide; the film is regularly re-broadcast on television. In 2018, it remained at the top of the list of the highest-grossing films when adjusted for inflation. The New Georgia Encyclopedia argues: Politically, the film offers a conservative view of Georgia and the South. In her novel, despite her Southern prejudices, Mitchell showed clear awareness of the shortcomings of her characters and their region. The film is less analytical.
It portrays the story from a clearly Old South point of view: the South is presented as a great civilization, the practice of slavery is never questioned, and the plight of the freedmen after the Civil War is implicitly blamed on their emancipation. A series of scenes whose racism rivals that of D. W. Griffith's film The Birth of a Nation (1915) mainly portrays Reconstruction as a time when Southern whites were victimized by freed slaves, who themselves were exploited by Northern carpetbaggers.
Reconstruction state-by-state – significant dates
Georgia was first readmitted to the U.S. Congress on July 25, 1868, but it was expelled on March 3, 1869. Virginia had been represented in the U.S. Senate until March 3, 1865, by the Restored Government of Virginia.
|State||Seceded from the Union||Admitted to the Confederacy||Readmitted to the Union||Conservative government re-established in each state|
|South Carolina||December 20, 1860||February 8, 1861||June 25, 1868||April 11, 1877|
|Mississippi||January 9, 1861||February 8, 1861||February 23, 1870||January 4, 1876|
|Florida||January 10, 1861||February 8, 1861||June 25, 1868||January 2, 1877|
|Alabama||January 11, 1861||February 8, 1861||June 25, 1868||November 16, 1874|
|Georgia||January 19, 1861||February 8, 1861||July 15, 1870||November 1, 1871|
|Louisiana||January 26, 1861||February 8, 1861||June 25, 1868||January 2, 1877|
|Texas||February 1, 1861||March 2, 1861||March 30, 1870||January 14, 1873|
|Virginia||April 17, 1861||May 7, 1861||January 26, 1870||October 5, 1869|
|Arkansas||May 6, 1861||May 18, 1861||June 22, 1868||November 10, 1874|
|North Carolina||May 20, 1861||May 20, 1861||June 25, 1868||November 28, 1870|
|Tennessee||June 8, 1861||July 2, 1861||July 24, 1866||October 4, 1869|
- Reconstruction Era National Monument - A somewhat similar "Reconstruction" process took place in the border states of Missouri, Kentucky, and West Virginia, but they had never left the Union and were never directly controlled by Congress. - All Blacks would be counted in 1870, whether or not they were citizens. - Georgia had a Republican governor and legislature, but the Republican hegemony was tenuous at best, and Democrats continued to win presidential elections there. See 1834 March 28 article in This Day in Georgia History compiled by Ed Jackson and Charles Pou; cf. Rufus Bullock. - Although Grant and Attorney General Amos T. Akerman set up a strong legal system to protect African Americans, the Department of Justice did not set up a permanent Civil Rights Division until the Civil Rights Act of 1957. - "The First Vote" by William Waud, Harper's Weekly, Nov. 16, 1867 - Rodrigue, John C. (2001). Reconstruction in the Cane Fields: From Slavery to Free Labor in Louisiana's Sugar Parishes, 1862–1880. Louisiana State University Press. p. 168. ISBN 978-0-8071-5263-8. - Lynn, Samara; Thorbecke, Catherine (September 27, 2020). "What America owes: How reparations would look and who would pay". ABC News. Retrieved February 24, 2021. - Guelzo (2018), pp. 11–12; Foner (2019), p. 198. - Foner (1988), p. xxv. - Foner, Eric (2017). "'What Is Freedom?': Reconstruction, 1865–1877". Give Me Liberty! (5th ed.). W. W. Norton & Company. ISBN 978-0-393-60338-5. - Foner, Eric (Winter 2009). "If Lincoln hadn't died ..." American Heritage Magazine. 58 (6). Retrieved July 26, 2010. - Baker, Bruce E. (2007). What Reconstruction Meant: Historical Memory in the American South. - Blight, David W. (2001). Race and Reunion: The Civil War in American Memory. - Lemann, Nicholas. 2007. Redemption: The Last Battle of the Civil War. pp. 75–77.
- Alexander, Thomas B. (1961). "Persistent Whiggery in the Confederate South, 1860–1877". Journal of Southern History 27(3):305–29. JSTOR 2205211. - Trelease, Allen W. 1976. "Republican Reconstruction in North Carolina: A Roll-call Analysis of the State House of Representatives, 1866–1870". Journal of Southern History 42(3):319–44. JSTOR 2207155. - Paskoff, Paul F. 2008. "Measures of War: A Quantitative Examination of the Civil War's Destructiveness in the Confederacy". Civil War History 54(1):35–62. doi:10.1353/cwh.2008.0007. - McPherson (1992), p. 38. - Hesseltine, William B. 1936. A History of the South, 1607–1936. pp. 573–74. - Ezell, John Samuel. 1963. The South Since 1865. pp. 27–28. - Lash, Jeffrey N. 1993. "Civil-War Irony-Confederate Commanders And The Destruction Of Southern Railways". Prologue-Quarterly of the National Archives 25(1):35–47. - Goldin, Claudia D., and Frank D. Lewis. 1975. "The economic cost of the American Civil War: Estimates and implications". The Journal of Economic History 35(2):299–326. JSTOR 2119410. - Jones (2010), p. 72. - Hunter (1997), p. 21–73 - Downs, Jim. 2015. Sick from Freedom: African-American Illness and Suffering during the Civil War and Reconstruction.[clarification needed] - Ransom, Roger L. (February 1, 2010). "The Economics of the Civil War". Archived from the original on December 13, 2011. Retrieved March 7, 2010. Direct costs for the Confederacy are based on the value of the dollar in 1860. - Donald, Baker & Holt (2001), ch. 26. - "The Second Inaugural Address" – via The Atlantic. - Harris (1997), p. [page needed]. - Simpson (2009), p. [page needed]. - McPherson (1992), p. 6. - Alexander, Leslie M.; Rucker, Walter C. (2010). Encyclopedia of African American History. ABC-CLIO. p. 699. ISBN 978-1-85109-774-6. - Donald, Baker & Holt (2001),[page needed]. - Trefousse (1989), p. [page needed]. - Donald, Baker & Holt (2001), ch. 26–27. - Forrest Conklin, "'Wiping Out' Andy" Johnson's Moccasin Tracks: The Canvass of Northern States By Southern Radicals, 1866." Tennessee Historical Quarterly 52.2 (1993): 122–133. - Valelly, Richard M. (2004). The Two Reconstructions: The Struggle for Black Enfranchisement. University of Chicago Press. p. 29. ISBN 978-0-226-84530-2. - Trefouse, Hans (1975). The Radical Republicans. - Donald, Baker & Holt (2001), ch. 28–29. - Donald, Baker & Holt (2001), ch. 29. - Donald, Baker & Holt (2001), ch. 30. - Hyman, Harold (1959) To Try Men's Souls: Loyalty Tests in American History, p. 93. - Foner (1988), pp. 273–276. - Severance, Ben H., Tennessee's Radical Army: The State Guard and Its Role in Reconstruction, p. 59. - William Gienapp, Abraham Lincoln and Civil War America (2002), p. 155. - Patton, p. 126. sfnp error: no target: CITEREFPatton (help) - Johnson to Gov. William L. Sharkey, August 1865; quoted in Franklin (1961), p. 42. - Donald, Charles Sumner, p. 201.[full citation needed] - Ayers, The Promise of the New South p. 418.[full citation needed] - Anderson (1988), pp. 244–245. - Randall & Donald, p. 581.[full citation needed] sfnp error: no target: CITEREFRandallDonald (help) - Foner, Eric (1993). Freedom's lawmakers: a directory of Black officeholders during Reconstruction. - Ellen DuBois, Feminism and suffrage: The emergence of an independent women's movement in America (1978). - Glenn Feldman, The Disfranchisement Myth: Poor Whites and Suffrage Restriction in Alabama (2004), p. 136. - 25 U.S.C. Sec. 72. - "Act of Congress, R.S. Sec. 2080 derived from act July 5, 1862, ch. 135, Sec. 1, 12 Stat. 528". 
US House of Representatives. Archived from the original on March 17, 2012. Retrieved February 7, 2012 – via USCode.House.gov. - Perry, Dan W. (March 1936). "Oklahoma, A Foreordained Commonwealth". Chronicles of Oklahoma. Oklahoma Historical Society. 14 (1): 30. Retrieved February 8, 2012. - Cimbala, Miller, and Syrette (2002), An uncommon time: the Civil War and the northern home front, pp. 285, 305. - Wagner, Gallagher & McPherson (2002), pp. 735–736. - Williams (2006), "Doing Less" and "Doing More", pp. 54–59. - Guelzo (1999), pp. 290–291. - Trefousse, Hans L. (1991), Historical Dictionary of Reconstruction, Greenwood, p. viiii. - "Abraham Lincoln". BlueAndGrayTrail.com. Retrieved July 21, 2010. - Guelzo (1999), pp. 333–335. - Catton (1963), Terrible Swift Sword, pp. 365–367, 461–468. - Guelzo (1999), p. 390. - Hall, Clifton R. (1916). Andrew Johnson: military governor of Tennessee. Princeton University Press. p. 19. Retrieved July 24, 2010. - Guelzo (2004), p. 1. - Sick from Freedom, New York: Oxford University Press, 2012.[clarification needed] - Stauffer (2008), p. 279. - Peterson (1995) Lincoln in American Memory, pp. 38–41. - McCarthy (1901), Lincoln's plan of Reconstruction, p. 76. - Stauffer (2008), p. 280. - Harris, J. William (2006). The Making of the American South: A Short History 1500–1977. Malden, Massachusetts: Blackwell Publishing. p. 240. - Edwards, Laura F. (1997). Gendered Strife and Confusion: The Political Culture of Reconstruction. Chicago: University of Illinois Press. p. 53. ISBN 978-0-252-02297-5. - Hunter (1997), p. 34. - Mikkelson, David (May 27, 2011). "'Black Tax' Credit". Snopes. - Zebley, Kathleen (October 8, 2017). "Freedmen's Bureau". Tennessee Encyclopedia. Retrieved April 29, 2010. - Belz (1998), Abraham Lincoln, Constitutionalism, and Equal Rights in the Civil War Era, pp. 138, 141, 145. - Rawley (2003), Abraham Lincoln and a nation worth fighting for. p. 205. - McFeely (2002), pp. 198–207. - Smith, John David (2013). A Just and Lasting Peace: A Documentary History of Reconstruction. Penguin. p. 17. ISBN 9781101617465. - McKitrick, Eric L. (1988). Andrew Johnson and Reconstruction. Oxford University Press. p. 172. ISBN 9780195057072. - Billington, Ray Allen; Ridge, Martin (1981). American History After 1865. Rowman & Littlefield. p. 3. ISBN 9780822600275. - Lincove, David A. (2000). Reconstruction in the United States: An Annotated Bibliography. Greenwood. p. 80. ISBN 9780313291999. - McFeely (1974), p. 125. - Barney (1987), p. 245. - Donald, Baker & Holt (2001), ch. 31. - Oberholtzer (1917), pp. 128–129. - Donald (2001), p. 527.[full citation needed] sfnp error: no target: CITEREFDonald2001 (help) - Hunter (1997), p. 67. - Barney (1987), pp. 251, 284–286. - Schurz, Carl (December 1865). Report on the Condition of the South (Report). U.S. Senate Exec. Doc. No. 2, 39th Congress, 1st session. Archived from the original on October 14, 2007. - Blackmon, Douglas A. (2009). Slavery by Another Name: The Re-enslavement of Black Americans from the Civil War to World War II. New York: Anchor Books. p. 16. - Edwards, Laura F. (1997). Gendered Strife and Confusion: The Political Culture of Reconstruction. Chicago: University of Illinois Press. p. 202. ISBN 978-0-252-02297-5. - Farmer-Kaiser, Mary (2010). Freedwomen and the Freedmen's Bureau: Race, Gender, and Public Policy in the Age of Emancipation. New York: Fordham University Press. p. 160. - Jones (2010), p. 70. - Schouler, James (1913). History of the United States of America under the Constitution, Vol. 
7: The Reconstruction Period. Kraus Reprints. pp. 43–57. Retrieved July 3, 2010. - Rhodes (1920), v. 6: pp. 65–66. - "The Freedman's Bureau, 1866". America's Reconstruction: People and Politics After the Civil War. Digital History Project, University of Houston. image 11 of 40. Archived from the original on September 24, 2006. Retrieved October 11, 2006. - Rhodes (1920), v. 6: p. 68. - Badeau (1887) Grant in Peace, pp. 46, 57. - Teed, Paul E.; Ladd Teed, Melissa (2015). Reconstruction: A Reference Guide. ABC-CLIO. pp. 51, 174 ff. ISBN 978-1-61069-533-6.. Foner (1988) entitles his sixth chapter "The Making of Radical Reconstruction". Benedict argues the Radical Republicans were conservative on many other issues, in: Benedict, Michael Les (1974). "Preserving the Constitution: The Conservative Basis of Radical Reconstruction". Journal of American History. 61 (1): 65–90. doi:10.2307/1918254. JSTOR 1918254. - Kolchin, Peter (1967). "The Business Press and Reconstruction, 1865–1868". Journal of Southern History. 33 (2): 183–196. doi:10.2307/2204965. JSTOR 2204965. - Pope, James Gray (Spring 2014). "Snubbed landmark: Why United States v. Cruikshank (1876) belongs at the heart of the American constitutional canon" (PDF). Harvard Civil Rights–Civil Liberties Law Review. 49 (2): 385–447. - Greene, Jamal (November 2012). "Thirteenth Amendment optimism". Columbia Law Review. 112 (7): 1733–1768. JSTOR 41708163. Archived from the original on January 7, 2015. PDF version. - "1875". A Century of Lawmaking for a New Nation: U.S. Congressional Documents and Debates, 1774. Retrieved October 21, 2020. - 28 U.S.C. § 2254. - Foner (1988), ch. 6. - Journal of the Senate of the State of West Virginia for the Sixth Session, Commencing January 21, 1868, John Frew, Wheeling, 1868, p. 10 - Phillips, Christopher, The Rivers Ran Backward: The Civil War and the Remaking of the American Middle Border, Oxford Univ. Press, 2016, p. 296, ISBN 9780199720170 - Chin, Gabriel Jackson (September 14, 2004). "Gabriel J. Chin, "The 'Voting Rights Act of 1867': The Constitutionality of Federal Regulation of Suffrage During Reconstruction", 82 North Carolina Law Review 1581 (2004)". Papers.ssrn.com. SSRN 589301. Cite journal requires - Foner (1988), ch. 6–7. - Foner (1988), pp. 274–275. - Randolph Campbell (2003), Gone to Texas, p. 276. - Rhodes (1920), v. 6: p. 199. - Foner (1988), pp. 316–333. - Hume, Richard L.; Gough, Jerry B. (2008). Blacks, Carpetbaggers, and Scalawags: the Constitutional Conventions of Radical Reconstruction. LSU Press. - Jenkins, Jeffery A.; Heersink, Boris (June 4, 2016). Republican Party Politics and the American South: From Reconstruction to Redemption, 1865–1880 (PDF). 2016 Annual Meeting of the Southern Political Science Association, San Juan, Puerto Rico. p. 18. Archived from the original (PDF) on April 18, 2016. - Russ, William A., Jr. (1934). "The Negro and White Disfranchisement During Radical Reconstruction". Journal of Negro History. 19 (2): 171–192. doi:10.2307/2714531. JSTOR 2714531. S2CID 149894321. - Summers (2014), pp. 130–131, 159. - Foner (1988), pp. 323–325. - Summers (2014a), p. [page needed]. - Tyack, David; Lowe, Robert (1986). "The constitutional moment: Reconstruction and Black education in the South". American Journal of Education. 94 (2): 236–256. doi:10.1086/443844. JSTOR 1084950. S2CID 143849662. - Cooper, William J., Jr.; Terrill, Thomas E. (2009). The American South: A History. p. 436. ISBN 978-0-7425-6450-3. - Zuczek (2006), Vol. 2 p. 635. - Foner (1988), p. 324. 
- Perman (1985), pp. 36–37. - Gillette (1982), Retreat from Reconstruction, 1869–1879, p. 99. - Zuczek (2006), Vol. 1 p. 323; Vol. 2 pp. 645, 698. - Summers (2014), pp. 160–161. - Smith (2001), pp. 455–457. - Calhoun (2017), pp. 41–42. - Simpson, Brooks D. (1999). "Ulysses S. Grant and the Freedmen's Bureau". In Paul A. Cimbala & Randall M. Miller (eds.). The Freedmen's Bureau and Reconstruction: Reconsiderations. New York: Fordham University Press. - Smith (2001), pp. 437–453, 458–460. - Montgomery, David (1967). Beyond Equality: Labor and the Radical Republicans, 1862–1872. New York: Alfred Knopf. pp. 130–133. ISBN 9780252008696. Retrieved October 9, 2020. - Gleeson, David (2016) Failing to 'unite with the abolitionists': the Irish Nationalist Press and U.S. emancipation. Slavery & Abolition, 37 (3). pp. 622–637. ISSN 0144-039X - Knight, Matthew (2017). "The Irish Republic: Reconstructing Liberty, Right Principles, and the Fenian Brotherhood". Éire-Ireland (Irish-American Cultural Institute). 52 (3 & 4): 252–271. doi:10.1353/eir.2017.0029. S2CID 159525524. Retrieved October 9, 2020. - Yanoso, Nicole Anderson (2017). The Irish and the American Presidency. New York: Routledge. pp. 75–80. ISBN 9781351480635. - Simon (2002), p. 245. - Peters & Woolley (2018b). - Smith (2001), p. 461. - Calhoun (2017), p. 55. - Foner (2014), pp. 243–244. - McFeely (1981), p. 284. - White (2016), p. 471. - Kahan (2018), p. 61. - Simon (1967), Papers of Ulysses S. Grant, Vol. 19, p. xiii. - Osborne & Bombaro (2015), pp. 6, 12, 54. sfnp error: no target: CITEREFOsborneBombaro2015 (help) - Chernow (2017), p. 629. - Chernow (2017), p. 628. - Simon (2002). - Brands (2012), pp. 435, 465; Chernow (2017), pp. 686–687; Simon (2002), p. 247. - Brands (2012), p. 465. - Simon (2002), p. 246. - Simon (2002), pp. 247–248. - Smith (2001), pp. 543–545. - Brands (2012), p. 474. - Kaczorowski (1995). - Kahan (2018), pp. 64–65; Calhoun (2017), pp. 317–319. - Smith (2001), pp. 545–546; White (2016), p. 521. - Simon (2002), p. 248. - "Report of the Joint Select Committee to Inquire into the Condition of Affairs in the Late Insurrectionary States February 19, 1872". Retrieved January 13, 2021. - Kahan (2018), p. 66. - Smith (2001), p. 547. - Calhoun (2017), p. 324. - Smith (2001), pp. 547–548. - Foner (2019), pp. 120–122. - Kahan (2018), p. 122. - Wang (1997), p. 102; Kaczorowski (1995), p. 182. - Chernow (2017), p. 746. - Kahan (2018), pp. 67–68; Chernow (2017), pp. 746. - Chernow (2017), p. 795. - Calhoun (2017), p. 479. - David Quigley, "Constitutional Revision and the City: The Enforcement Acts and Urban America, 1870–1894", Journal of Policy History, January 2008, Vol. 20, Issue 1, pp. 64–75. - Blair (2005), p. 400. - McPherson (1992), p. 19. - "Date of Secession Compared to 1860 Black Population". America's Civil War. Sewanee: The University of the South. Archived from the original on August 16, 2014. Retrieved April 9, 2014. - Foner (1988), ch. 7. - Foner (1993), introduction. - Steven Hahn, A Nation under Our Feet[full citation needed] - "Table I. Population of the United States (by States and Territories) in the Aggregate and as White, Colored, Free Colored, Slave, Chinese, and Indian, at Each Census" (PDF). Population by States and Territories – 1790–1870. United States Census Bureau. 1872. Retrieved October 20, 2007. The complete 1870 census documents are available from Census.gov. - Foner, Eric (January 31, 2018). "South Carolina's Forgotten Black Political Revolution". Slate. Retrieved February 3, 2020. 
- Foner (1988), pp. 354–355. - Stowell (1998), pp. 83–84. - Walker, Clarence Earl (1982). A Rock in a Weary Land: The African Methodist Episcopal Church During the Civil War and Reconstruction. - Sweet, William W. (1914). "The Methodist Episcopal Church and Reconstruction". Journal of the Illinois State Historical Society. 7 (3): 157. JSTOR 40194198. - Grant, Donald Lee (1993). The Way It Was in the South: The Black Experience in Georgia. University of Georgia Press. p. 264. ISBN 978-0-8203-2329-9. - Foner (1988), p. 93. - Morrow (1954), p. 202. - Ralph E. Morrow, Northern Methodism and Reconstruction (1956) - Stowell (1998), pp. 30–31. - Robert D. Clark, The Life of Matthew Simpson (1956) pp. 245–267 - Norwood, Fredrick A., ed. (1982). Sourcebook of American Methodism. p. 323. - Sweet, William W. (1914). "The Methodist Episcopal Church and Reconstruction". Journal of the Illinois State Historical Society. 7 (3): 161. JSTOR 40194198. - Victor B. Howard, Religion and the Radical Republican Movement, 1860–1870 (1990) pp. 212–13 - Morrow (1954), p. 205. - Fallin, Wilson, Jr. (2007). Uplifting the People: Three Centuries of Black Baptists in Alabama. pp. 52–53. - Anderson (1988), p. 4. - Anderson (1988), pp. 6–15. - William Preston Vaughn, Schools for All: The Blacks and Public Education in the South, 1865–1877 (University Press of Kentucky, 2015). - Foner, pp. 365–368. - Franklin (1961), p. 139. - Lynch (1913), p. [page needed]. - B. D. Mayberry, A Century of Agriculture in the 1890 Land Grant Institutions and Tuskegee University, 1890–1990 (1992). - Logan, Trevon D. (2020). "Do Black Politicians Matter? Evidence from Reconstruction". The Journal of Economic History. 80 (1): 1–37. doi:10.1017/S0022050719000755. ISSN 0022-0507. - Foner, p. 387. - Franklin (1961), pp. 141–148. - Stover (1955). - Franklin (1961), pp. 147–148. - Foner, p. 375. - Foner, p. 376. - Foner, pp. 415–416. - Schell, Herbert S. (1930). "Hugh McCulloch and the Treasury Department, 1865–1869". Mississippi Valley Historical Review. 17 (3): 404–421. doi:10.2307/1893078. JSTOR 1893078. - For an econometric approach, see: Ohanian, Lee E. (2018). The Macroeconomic Effects of War Finance in the United States: Taxes, Inflation, and Deficit Finance. Routledge. - Margaret G. Myers, A financial history of the United States (Columbia University Press, 1970), pp 174–196. - Studenski, Paul; Kroos, Herman E. (1963). Financial History of the United States (2nd ed.). - Unger, Irwin (1964). The Greenback Era: A Social and Political History of American Finance 1865–1879. Princeton University Press. - Sharkey, Robert P. (1967). Money, Class, and Party: An Economic Study of Civil War and Reconstruction. Johns Hopkins Press. - Franklin (1961), pp. 168–173. - Steedman, Marek D. (Spring 2009). "Resistance, Rebirth, and Redemption: The Rhetoric of White Supremacy in Post-Civil War Louisiana". Historical Reflections. 35 (1): 97–113. - Fleming, Walter L. (1919). The Sequel of Appomattox: A Chronicle of the Reunion of the States. Chronicles of America series, Vol. 32. New Haven: Yale University Press. p. 21. ISBN 9780554271941. • Fleming, Walter L. (1918).
The Sequel of Appomattox, A Chronicle of the Reunion of the States. Archived from the original on February 13, 2006. - Perman (1985), p. 6. - Williams (1946). - Fleming (1906–1907), Vol. II, p. 328. - Fleming (1906–1907), Vol. II, pp. 328–329. - Oberholtzer (1917), p. 485. - McFeely (2002), pp. 420–422. - J. W. Schuckers, (1874), The Life and Public Services of Salmon Portland Chase, p. 585; letter of May 30, 1868 to August Belmont. - McPherson (1875), p. [page needed]. - Vaughn, Stephen L., ed. (2007). Encyclopedia of American Journalism. p. 441. - Abbott, Richard H. (2004). For Free Press and Equal Rights: Republican Newspapers in the Reconstruction South. - Earl F. Woodward, "The Brooks and Baxter War in Arkansas, 1872–1874", Arkansas Historical Quarterly (1971) 30#4 pp. 315–336 JSTOR 40038083 - Foner, pp. 537–541. - Foner, pp. 374–375. - Lynch (1915), p. [full citation needed]. - Perman (1985), ch. 3. - Foner, ch. 9. - Foner, p. 443. - Foner, pp. 545–547. - Nicholas Lemann, Redemption: The Last Battle of the Civil War, New York: Farrar, Straus & Giroux, Pbk. 2007, pp. 15–21. - US Senate Journal, January 13, 1875, pp. 106–107. - Alexander, Danielle (January–February 2004). "Forty Acres and a Mule: The Ruined Hope of Reconstruction". Humanities. 25 (1). Archived from the original on September 16, 2008. Retrieved April 14, 2008. - Foner, pp. 555–556. - Rable, George C. (1984). But There Was No Peace: The Role of Violence in the Politics of Reconstruction. Athens: University of Georgia Press. p. 132. - Foner, ch. 11. - Nicholas Lemann, Redemption: The Last Battle of the Civil War, New York: Farrar, Straus & Giroux, paperback, 2007, p. 174. - Chacón, Mario L.; Jensen, Jeffrey L. (2020). "Democratization, De Facto Power, and Taxation: Evidence from Military Occupation during Reconstruction". World Politics. 72: 1–46. doi:10.1017/S0043887119000157. ISSN 0043-8871. S2CID 211320983. - Foner, p. 604. - "HarpWeek | Hayes vs. Tilden: The Electoral College Controversy of 1876–1877". elections.harpweek.com. Retrieved May 14, 2021. - Woodward (1966), pp. 3–15. - Nell Irvin Painter, Exodusters: Black Migration to Kansas After Reconstruction (1976) - James T. Moore, "Black Militancy in Readjuster Virginia, 1879–1883", Journal of Southern History, Vol. 41, No. 2 (May 1975), pp. 167–186 JSTOR 2206012. - "Reconstruction". History.com. Archived from the original on January 24, 2021. Retrieved January 24, 2021. - Fletcher M. Green, "Walter Lynwood Fleming: Historian of Reconstruction", The Journal of Southern History, Vol. 2, No. 4 (November 1936), pp. 497–521. - Louis R. Harlan, Booker T. Washington in Perspective (1988), p. 164. - A. A. Taylor, "Historians of the Reconstruction", The Journal of Negro History, Vol. 23, No. 1 (January 1938), pp. 16–34. - Williams (1946), p. 473. - Beale, Howard K. (1958). The Critical Year; A study of Andrew Johnson and reconstruction. New York: F. Ungar. p. 147. - Tulloch, Hugh (1999). The Debate on the American Civil War Era. Manchester University Press. p. 226.
ISBN 978-0-7190-4938-5. - Charles, Allan D. (1983). "Howard K Beale". In Wilson, Clyde N. (ed.). Twentieth-century American Historians. Gale Research. pp. 32–38. - Charles A. Beard & Mary R. Beard (1927). The Rise of American Civilization. 2. New York: Macmillan. p. 54. - Hofstadter, Richard (2012) . Progressive Historians. Knopf Doubleday. p. 303. ISBN 978-0-307-80960-5. - Hesseltine, William B. (1935). "Economic Factors in the Abandonment of Reconstruction". Mississippi Valley Historical Review. 22 (2): 191–210. doi:10.2307/1898466. JSTOR 1898466. - Coben, Stanley (1959). "Northeastern Business and Radical Reconstruction: A Re-examination". The Mississippi Valley Historical Review. 46 (1): 67–90. doi:10.2307/1892388. JSTOR 1892388. - Pressly, Thomas J. (1961). "Andrew Johnson and Reconstruction (review)". Civil War History. 7: 91–92. doi:10.1353/cwh.1961.0063. - Montgomery, David (1961). "Radical Republicanism in Pennsylvania, 1866–1873". The Pennsylvania Magazine of History and Biography. 85 (4): 439–457. JSTOR 20089450. - Stampp & Litwack (1969), pp. 85–106. - Foner (1982), p. [page needed]. sfnp error: no target: CITEREFFoner1982 (help) - Montgomery (1967), pp. vii–ix. - Du Bois, W. E. B. (1999) . Black Reconstruction in America, 1860–1880. Simon & Schuster. ISBN 9780684856575 – via Google Books. - Williams (1946), p. 469. - Foner, p. xxii, Which source?. sfnp error: no target: CITEREFFoner (help) - Feldman, Glenn (2004). The Disfranchisement Myth: Poor Whites and Suffrage Restriction in Alabama. Athens: University of Georgia Press. pp. 135–136. - Pildes, Richard H. (2000). "Democracy, Anti-democracy, and the Canon". Constitutional Commentary. 17: 27. Retrieved March 15, 2008. - Foner, Eric. A Short History of Reconstruction (1990), p. 255. Foner adds: "What remains certain is that Reconstruction failed, and that for blacks its failure was a disaster whose magnitude cannot be obscured by the accomplishments that endured." p. 256. - McFeely (2002), pp. 372–373, 424, 425. - Brown (2008), p. [page needed]. - Foner, Eric (2017). Give me liberty! : an American history. volume 2, From 1865 (Brief 5th ed.). New York: W.W. Norton & Company. C. ISBN 9780393603408. OCLC 1019904631. - Orville Vernon Burton (2007). The Age of Lincoln (1st ed.). New York: Hill and Wang. p. 312. ISBN 9780809095131. - Whaples, Robert (March 1995). "Where Is There Consensus Among American Economic Historians? The Results of a Survey on Forty Propositions". The Journal of Economic History. 55 (1): 139–154. doi:10.1017/S0022050700040602. JSTOR 2123771. - Burton, Vernon (2006). "Civil War and Reconstruction". In Barney, William L. (ed.). A Companion to 19th-century America. pp. 54–56. - Etcheson, Nicole (June 2009). "Reconstruction and the Making of a Free-Labor South". Reviews in American History. 37 (2): 236–242. doi:10.1353/rah.0.0101. S2CID 146573684. - Frisby, Derek W. (2010). "A Victory Spoiled: West Tennessee Unionists During Reconstruction". In Cimballa, Paul (ed.). The Great Task Remaining Before Us: Reconstruction as America's Continuing Civil War. p. 9. - Zuczek (2006), Vol. 1 pp. 20, 22. - Foner (1988), p. 604 reprinted in: Couvares, Francis G.; et al., eds. (2000). Interpretations of American History Vol. I Through Reconstruction (7th ed.). p. 409. ISBN 978-0-684-86773-1. - Summers (2014), p. 4. - Mixon, Wayne (1977). "Joel Chandler Harris, the Yeoman Tradition, and the New South Movement". The Georgia Historical Quarterly. 61 (4): 308–317. JSTOR 40580412. - Bloomfield, Maxwell (1964). 
"Dixon's The Leopard's Spots: A Study in Popular Racism". American Quarterly. 16 (3): 387–401. doi:10.2307/2710931. JSTOR 2710931. - Gardner, Sarah E. (2006). Blood and Irony: Southern White Women's Narratives of the Civil War, 1861–1937. University of North Carolina Press. pp. 128–130. ISBN 9780807857670. - Ruppersburg, Hugh; Dobbs, Chris (2017). "Gone With the Wind (Film)". New Georgia Encyclopedia. - Matthews (1864), p. 8 - Matthews (1864), p. 104. - Matthews (1864), p. 120. - Matthews (1864), p. 118. - Journal of the Convention of the People of North Carolina, Held on the 20th Day of May, A. D. 1861. Raleigh: Jno. W. Syme. 1862. p. 18. LCCN 02014915. OCLC 6786362. OL 13488372M – via Internet Archive. - Matthews (1864), p. 119. - "Tennessee Admitted as a Member of the Confederacy". Louisville Daily Courier. 33 (6). July 6, 1861. p. 1. For much more detail see Reconstruction: Bibliography Scholarly secondary sources - Anderson, James D. (1988). The Education of Blacks in the South, 1860–1935. University of North Carolina Press. - Barney, William L. (1987). Passage of the Republic: An Interdisciplinary History of Nineteenth Century America. D. C. Heath. ISBN 0-669-04758-9. - Behrend, Justin (2015). Reconstructing Democracy: Grassroots Black Politics in the Deep South after the Civil War. Athens, Georgia: University of Georgia Press. - Blair, William (2005). "The use of military force to protect the gains of reconstruction". Civil War History. 51 (4): 388–402. doi:10.1353/cwh.2005.0055. - Blum, Edward J. (2005). Reforging the White Republic: Race, Religion, and American Nationalism, 1865–1898. - Bradley, Mark L. (2009). Bluecoats and Tar Heels: Soldiers and Civilians in Reconstruction North Carolina. University Press of Kentucky. ISBN 978-0-8131-2507-7. - Brands, H. W. (2012). The Man Who Saved the Union: Ulysses S. Grant in War and Peace. New York: Doubleday. ISBN 978-0-385-53241-9. - Brown, Thomas J., ed. (2008). Reconstructions: New Perspectives on the Postbellum United States. - Calhoun, Charles W. (2017). The Presidency of Ulysses S. Grant. Lawrence: University Press of Kansas. ISBN 978-0-7006-2484-3. scholarly review and response by Calhoun at doi:10.14296/RiH/2014/2270 - Chernow, Ron (2017). Grant. New York: Penguin Press. ISBN 978-1-59420-487-6. - Cimbala, Paul Alan; Miller, Randall M.; Simpson, Brooks D. (2002). An Uncommon Time: The Civil War and the Northern Home Front. Fordham University Press. ISBN 978-0-8232-2195-0. - Cruden, Robert. The Negro in Reconstruction.[full citation needed] - Donald, David Herbert; Baker, Jean H.; Holt, Michael F. (2001). The Civil War and Reconstruction. New York: Norton. ISBN 978-0393974270. OCLC 247969097. - Downs, Gregory P. (2015). After Appomattox: Military Occupation and the Ends of War. Cambridge, MA: Harvard University Press. - Egerton, Douglas (2014). The Wars of Reconstruction: The Brief, Violent History of America's Most Progressive Era. Bloomsbury Press. ISBN 978-1-60819-566-4. - Foner, Eric; Mahoney, Olivia (June 1997). America's Reconstruction: People and Politics After the Civil War. ISBN 0-8071-2234-3. - Foner, Eric (1988). Reconstruction: America's Unfinished Revolution, 1863–1877. New York: Harper & Row. ISBN 0-06-015851-4. Pulitzer-prize winning history, and most detailed synthesis of original and previous scholarship. - Foner, Eric (2005). Forever Free: The Story of Emancipation and Reconstruction. - Foner, Eric (2019). The Second Founding How The Civil War And Reconstruction Remade The Constitution. New York: W.W. 
Norton & Company, Inc. ISBN 978-0-393-35852-0. - Franklin, John Hope (1961). Reconstruction after the Civil War. ISBN 0-226-26079-8. - Guelzo, Allen C. (1999). Abraham Lincoln: Redeemer President. W.B. Eerdmans. ISBN 9780802838728. - Guelzo, Allen C. (2004). Lincoln's Emancipation Proclamation: The End of Slavery in America. New York: Simon & Schuster Paperbacks. ISBN 978-1-4165-4795-2. - Guelzo, Allen C. (2018). Reconstruction A Concise History. Oxford University Press. ISBN 9780190865696. - Harris, William C. (1997). With Charity for All: Lincoln and the Restoration of the Union. Portrays Lincoln as opponent of Radicals. - Holzer, Harold; Medford, Edna Greene; Williams, Frank J. (2006). The Emancipation Proclamation: Three Views (Social, Political, Iconographic). Louisiana State University Press. ISBN 978-0-8071-3144-2. - Hubbs, G. Ward (2015). Searching for Freedom after the Civil War: Klansman, Carpetbagger, Scalawg, and Freedman. Tuscaloosa: University of Alabama Press. - Hunter, Tera W. (1997). To 'Joy My Freedom: Southern Black Women's Lives and Labors after the Civil War. Cambridge, MA: Harvard University Press. - Jenkins, Wilbert L. (2002). Climbing up to Glory: A Short History of African Americans During the Civil War and Reconstruction. - Jones, Jacqueline (2010). Labor of Love, Labor of Sorrow: Black Women, Work, and the Family, from Slavery to the Present. New York: Basic Books. - Kaczorowski, Robert J. (1995). "Federal Enforcement of Civil Rights During the First Reconstruction". Fordham Urban Law Journal. 23 (1): 155–186. ISSN 2163-5978. - Kahan, Paul (2018). The Presidency of Ulysses S. Grant: Preserving the Civil War's Legacy. Yardley, PA: Westholme Publishing, LLC. ISBN 978-1-59416-273-2. - McCarthy, Charles Hallan (1901). Lincoln's Plan of Reconstruction. New York: McClure, Philips, & Company. - McFeely, William S. (1974). Woodward, C. Vann (ed.). Responses of the Presidents to Charges of Misconduct. New York: Delacorte Press. ISBN 978-0-440-05923-3. - McFeely, William S. (1981). Grant: A Biography. Norton. ISBN 0-393-01372-3. - McFeely, William S. (2002). Grant: A Biography.[full citation needed] - McPherson, James M. (1992). Abraham Lincoln and the Second American Revolution. Oxford University Press. ISBN 978-0-19-507606-6. - McPherson, James M.; Hogue, James (2009). Ordeal By Fire: The Civil War and Reconstruction. - Milton, George Fort (1930). The Age of Hate: Andrew Johnson and the Radicals; from Dunning School.CS1 maint: postscript (link) - Morrow, Ralph E. (1954). "Northern Methodism in the South during Reconstruction". Mississippi Valley Historical Review. 41 (2): 197–218. doi:10.2307/1895802. JSTOR 1895802. - Oberholtzer, Ellis Paxson (1917). A History of the United States Since the Civil War: 1865-68. Vol. 1. - Patrick, Rembert (1967). The Reconstruction of the Nation. New York: Oxford University Press. - Perman, Michael (1985). The Road to Redemption: Southern Politics, 1869–1879. Chapel Hill, NC: University of North Carolina Press. ISBN 978-0807841419. - Perman, Michael (2003). Emancipation and Reconstruction. - Peterson, Merrill D. (1994). Lincoln in American Memory. New York: Oxford University Press. ISBN 978-0-19-802304-3. - Randall, J. G.; Donald, David (2016). The Civil War and Reconstruction [Second Edition]. Pickle Partners Publishing. ISBN 978-1787200272. - Rhodes, James F. (1920). History of the United States from the Compromise of 1850 to the McKinley–Bryan Campaign of 1896. 
Highly detailed narrative by Pulitzer Prize winner; argues was a political disaster because it violated the rights of White Southerners. - Richter, William L. (2009). A to Z of the Civil War and Reconstruction. Scarecrow Press. ISBN 978-0-8108-6336-1. - Simon, John Y. (2002). "Ulysses S. Grant". In Graff, Henry (ed.). The Presidents: A Reference History (7th ed.). pp. 245–260. ISBN 0-684-80551-0. - Simpson, Brooks D. (2009). The Reconstruction Presidents.[full citation needed] - Smith, Jean Edward (2001). Grant. New York: Simon & Schuster. ISBN 0-684-84927-5. - Stampp, Kenneth M. (1965). The Era of Reconstruction, 1865–1877. New York: Vintage Books; short survey; rejects Dunning School analysis.CS1 maint: postscript (link) - Stauffer (2008). Giants.[full citation needed] - Stowell, Daniel W. (1998). Rebuilding Zion: The Religious Reconstruction of the South, 1863–1877. Oxford University Press. ISBN 978-0-19-802621-1. - Summers, Mark Wahlgren (2009). A Dangerous Stir: Fear, Paranoia, and the Making of Reconstruction. excerpt and text search - Summers, Mark Wahlgren (2014). The Ordeal of the Reunion: A New History of Reconstruction. University of North Carolina Press. ISBN 978-1-4696-1757-2. text search; online - Summers, Mark Wahlgren (2014a). Railroads, Reconstruction, and the Gospel of Prosperity: Aid Under the Radical Republicans, 1865–1877. Princeton University Press. ISBN 978-0-691-61282-9. - Thompson, C. Mildred (2010) . Reconstruction In Georgia: Economic, Social, Political 1865–1872 (reprint ed.). New York: The Columbia University Press; [etc.] - Trefousse, Hans L. (1989). Andrew Johnson: A Biography.[full citation needed] - Wagner, Margaret E.; Gallagher, Gary W.; McPherson, James M. (2002). The Library of Congress Civil War Desk Reference. New York: Simon & Schuster Paperbacks. ISBN 978-1-4391-4884-6. - Wang, Xi (1997). The Trial of Democracy: Black Suffrage and Northern Republicans, 1860–1910. Athens: The University of Georgia Press. ISBN 978-0-8203-4206-1. - White, Ronald C. (2016). American Ulysses: A Life of Ulysses S. Grant. Random House Publishing. ISBN 978-1-58836-992-5. - Williams, T. Harry (November 1946). "An Analysis of Some Reconstruction Attitudes". Journal of Southern History. 12 (4): 469–486. doi:10.2307/2197687. JSTOR 2197687. - Woodward, C. Vann (1966). Reunion and Reaction: The Compromise of 1877 and the End of Reconstruction. Oxford University Press. ISBN 978-0-19-506423-0. - Zuczek, Richard, ed. (2006). Encyclopedia of the Reconstruction Era. (2 vols.) - Foner, Eric (2014). "Introduction to the 2014 Anniversary Edition". Reconstruction: America's Unfinished Revolution, 1863–18 (Updated ed.). ISBN 978-0062383235. - Ford, Lacy K., ed. A Companion to the Civil War and Reconstruction. Blackwell (2005) 518 pp. - Frantz, Edward O., ed. A Companion to the Reconstruction Presidents 1865–1881 (2014). 30 essays by scholars. - Perman, Michael and Amy Murrell Taylor, eds. Major Problems in the Civil War and Reconstruction: Documents and Essays (2010) - Simpson, Brooks D. (2016). "Mission Impossible: Reconstruction Policy Reconsidered". The Journal of the Civil War Era. 6: 85–102. doi:10.1353/cwe.2016.0003. S2CID 155789816. - Smith, Stacey L. (November 3, 2016). "Beyond North and South: Putting the West in the Civil War and Reconstruction". The Journal of the Civil War Era. 6 (4): 566–591. doi:10.1353/cwe.2016.0073. S2CID 164313047. - Stalcup, Brenda, ed. (1995). Reconstruction: Opposing Viewpoints. Greenhaven Press. Uses primary documents to present opposing viewpoints. 
The troposphere is the lowest portion of Earth's atmosphere, and is also where all weather takes place. It contains approximately 75% of the atmosphere's mass and 99% of the total mass of water vapor and aerosols. The average depths of the troposphere are 20 km (12 mi) in the tropics, 17 km (11 mi) in the mid latitudes, and 7 km (4.3 mi) in the polar regions in winter. The lowest part of the troposphere, where friction with the Earth's surface influences air flow, is the planetary boundary layer. This layer is typically a few hundred meters to 2 km (1.2 mi) deep, depending on the landform and time of day. Atop the troposphere is the tropopause, which is the border between the troposphere and stratosphere. The tropopause is an inversion layer, where the air temperature ceases to decrease with height and remains constant through its thickness. The word troposphere derives from the Greek tropos ("turn, turn toward") and "-sphere", reflecting the fact that rotational turbulent mixing plays an important role in the troposphere's structure and behaviour. Most of the phenomena we associate with day-to-day weather occur in the troposphere.

Pressure and temperature structure

The chemical composition of the troposphere is essentially uniform, with the notable exception of water vapor. The source of water vapor is the surface, through the process of evaporation. The temperature of the troposphere decreases with height, and saturation vapor pressure decreases strongly as temperature drops, so the amount of water vapor that can exist in the atmosphere decreases strongly with height. Thus the proportion of water vapor is normally greatest near the surface and decreases with height.

The pressure of the atmosphere is maximum at sea level and decreases with altitude. This is because the atmosphere is very nearly in hydrostatic equilibrium, so that the pressure is equal to the weight of air above a given point. The change in pressure with height can therefore be related to the density through the hydrostatic equation dP/dz = −ρg, where ρ is the air density and g is the gravitational acceleration. Since temperature in principle also depends on altitude, one needs a second equation to determine the pressure as a function of height, as discussed in the next section.

The temperature of the troposphere generally decreases as altitude increases. The rate at which the temperature decreases with height, Γ = −dT/dz, is called the environmental lapse rate (ELR). The ELR is nothing more than the difference in temperature between the surface and the tropopause divided by the height. The ELR assumes that the air is perfectly still, i.e. that there is no mixing of the layers of air from vertical convection, nor winds that would create turbulence and hence mixing of the layers of air. The reason for this temperature difference is that the ground absorbs most of the sun's energy, which then heats the lower levels of the atmosphere with which it is in contact. Meanwhile, the radiation of heat at the top of the atmosphere results in the cooling of that part of the atmosphere.

|Altitude region (m)||Temperature gradient (°C/km)||Temperature gradient (°F/1,000 ft)|
|0 - 11,000||-6.5||-3.57|
|11,000 - 20,000||0.0||0.0|
|20,000 - 32,000||1.0||0.55|
|32,000 - 47,000||2.8||1.54|
|47,000 - 51,000||0.0||0.0|
|51,000 - 71,000||-2.8||-1.54|
|71,000 - 85,000||-2.0||-1.09|

The ELR assumes the atmosphere is still, but as air is heated it becomes buoyant and rises.
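To make the hydrostatic relation and the environmental lapse rate concrete, here is a minimal Python sketch, not taken from the article: it combines dP/dz = −ρg with the ideal-gas law under the assumption of a constant 6.5 °C/km lapse rate and standard sea-level values, which gives the closed-form profile P(z) = P0 (T(z)/T0)^(g/(R_d·L)). The constants and function names are illustrative assumptions.

```python
# Sketch: pressure vs. height from hydrostatic balance + ideal gas law,
# assuming a constant environmental lapse rate L (illustrative values only).

G = 9.80665      # gravitational acceleration, m/s^2
R_D = 287.05     # specific gas constant of dry air, J/(kg*K)
L = 0.0065       # environmental lapse rate, K/m (6.5 degC per km)
T0 = 288.15      # assumed sea-level temperature, K
P0 = 101_325.0   # assumed sea-level pressure, Pa

def temperature(z_m: float) -> float:
    """Temperature at height z for a constant lapse rate."""
    return T0 - L * z_m

def pressure(z_m: float) -> float:
    """Pressure at height z, the closed form of dP/dz = -rho*g with T = T0 - L*z."""
    return P0 * (temperature(z_m) / T0) ** (G / (R_D * L))

if __name__ == "__main__":
    for z in (0, 1_000, 5_000, 11_000):
        print(f"z = {z:>6} m   T = {temperature(z):6.1f} K   P = {pressure(z) / 100:7.1f} hPa")
```

Under these assumptions the sketch reproduces the familiar drop to roughly 226 hPa near 11 km, the nominal mid-latitude tropopause height quoted above.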
The dry adiabatic lapse rate accounts for the effect of the expansion of dry air as it rises in the atmosphere, and the wet (saturated) adiabatic lapse rate includes the effect of the condensation of water vapor on the lapse rate.

When a parcel of air rises, it expands, because the pressure is lower at higher altitudes. As the air parcel expands, it pushes the surrounding air outward, transferring energy in the form of work from that parcel to the atmosphere. Because energy transfer to a parcel of air by way of heat is very slow, the parcel is assumed not to exchange energy by way of heat with the environment. Such a process is called an adiabatic process (no energy transfer by way of heat). Since the rising parcel of air does work on the surrounding atmosphere and no energy is transferred into it as heat to make up for the loss, the parcel loses energy, which manifests itself as a decrease in the temperature of the air parcel. The reverse, of course, is true for a parcel of air that is sinking and being compressed.

Since the compression and expansion of an air parcel can be considered reversible and no energy is transferred into or out of the parcel, such a process is considered isentropic, meaning that there is no change in entropy as the air parcel rises and falls: dS = 0. Since the heat exchanged is related to the entropy change by dQ = T dS, the equation governing the temperature as a function of height for a thoroughly mixed atmosphere is dS/dz = 0, where S is the entropy. The above equation states that the entropy of the atmosphere does not change with height. The rate at which temperature decreases with height under such conditions is called the adiabatic lapse rate; for dry air it is approximately g/c_p, about 9.8 °C per kilometer.

If the air contains water vapor, then cooling of the air can cause the water to condense, and the behavior is no longer that of an ideal gas. If the air is at the saturated vapor pressure, then the rate at which temperature drops with height is called the saturated adiabatic lapse rate. More generally, the actual rate at which the temperature drops with altitude is called the environmental lapse rate. In the troposphere, the average environmental lapse rate is a drop of about 6.5 °C for every 1 km (1,000 meters) of increased height.

The environmental lapse rate (the actual rate at which temperature drops with height, Γ) is not usually equal to the adiabatic lapse rate (Γ_a). If the upper air is warmer than predicted by the adiabatic lapse rate (Γ < Γ_a), then when a parcel of air rises and expands, it arrives at its new height at a lower temperature than its surroundings. In this case, the air parcel is denser than its surroundings, so it sinks back to its original height, and the air is stable against being lifted. If, on the contrary, the upper air is cooler than predicted by the adiabatic lapse rate, then when the air parcel rises to its new height it will have a higher temperature and a lower density than its surroundings, and will continue to accelerate upward.

The troposphere is heated from below by latent heat, longwave radiation, and sensible heat. Surplus heating and vertical expansion of the troposphere occur in the tropics. At middle latitudes, tropospheric temperatures decrease from an average of 15 °C at sea level to about -55 °C at the tropopause. At the poles, tropospheric temperature only decreases from an average of 0 °C at sea level to about -45 °C at the tropopause.
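Returning to the stability criterion above, here is a minimal sketch of the comparison between the environmental lapse rate and the dry adiabatic rate g/c_p. It applies to dry air only; the threshold value, constants and function name are illustrative assumptions rather than part of the article.

```python
# Sketch of the parcel-stability criterion for dry air: an environmental lapse
# rate smaller than the dry adiabatic rate means a lifted parcel arrives colder
# and denser than its surroundings and sinks back (stable); a larger rate means
# it arrives warmer and keeps rising (unstable).  Illustrative values only.

G = 9.80665          # gravitational acceleration, m/s^2
CP_DRY_AIR = 1004.0  # specific heat of dry air at constant pressure, J/(kg*K)
DRY_ADIABATIC = G / CP_DRY_AIR * 1000.0   # ~9.8 K per km

def stability(environmental_lapse_rate_k_per_km: float) -> str:
    """Classify dry-air stability from the environmental lapse rate (K/km)."""
    if environmental_lapse_rate_k_per_km < DRY_ADIABATIC:
        return "stable (parcel arrives colder than its surroundings)"
    if environmental_lapse_rate_k_per_km > DRY_ADIABATIC:
        return "unstable (parcel arrives warmer and keeps rising)"
    return "neutral"

if __name__ == "__main__":
    for elr in (6.5, 9.8, 11.0):   # K per km
        print(f"ELR = {elr:4.1f} K/km -> {stability(elr)}")
```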
At the equator, tropospheric temperatures decrease from an average of 20 °C at sea level to about -70 to -75 °C at the tropopause. The troposphere is thinner at the poles and thicker at the equator; the average thickness of the troposphere in the tropics is roughly 7 kilometers greater than its average thickness at the poles.

The tropopause is the boundary region between the troposphere and the stratosphere. Measuring the temperature change with height through the troposphere and the stratosphere identifies the location of the tropopause. In the troposphere, temperature decreases with altitude. In the stratosphere, however, the temperature remains constant for a while and then increases with altitude. The region of the atmosphere where the lapse rate changes from positive (in the troposphere) to negative (in the stratosphere) is defined as the tropopause. Thus, the tropopause is an inversion layer, and there is little mixing between the two layers of the atmosphere.

Atmospheric flow

The flow of the atmosphere generally moves in a west to east direction. This, however, can often become interrupted, creating a more north to south or south to north flow. These scenarios are often described in meteorology as zonal or meridional. These terms, however, tend to be used in reference to localised areas of atmosphere (at a synoptic scale). A fuller explanation of the flow of atmosphere around the Earth as a whole can be found in the three-cell model.

A zonal flow regime is the meteorological term meaning that the general flow pattern is west to east along the Earth's latitude lines, with weak shortwaves embedded in the flow. The use of the word "zone" refers to the flow being along the Earth's latitudinal "zones". This pattern can buckle and thus become a meridional flow. When the zonal flow buckles, the atmosphere can flow in a more longitudinal (or meridional) direction, and thus the term "meridional flow" arises. Meridional flow patterns feature strong, amplified troughs of low pressure and ridges of high pressure, with more north-south flow in the general pattern than west-to-east flow.

The three-cell model of the atmosphere attempts to describe the actual flow of the Earth's atmosphere as a whole. It divides the Earth into the tropical (Hadley cell), mid-latitude (Ferrel cell), and polar (polar cell) regions, to describe energy flow and global atmospheric circulation (mass flow). Its fundamental principle is that of balance: the energy that the Earth absorbs from the sun each year is equal to that which it loses to space by radiation. This overall Earth energy balance, however, does not apply at each latitude, due to the varying strength of the sun in each "cell" as a result of the tilt of the Earth's axis in relation to its orbit. The result is a circulation of the atmosphere that transports warm air poleward from the tropics and cold air equatorward from the poles. The effect of the three cells is the tendency to even out the heat and moisture in the Earth's atmosphere around the planet.

Synoptic scale observations and concepts

Forcing

Forcing is a term used by meteorologists to describe the situation where a change or an event in one part of the atmosphere causes a strengthening change in another part of the atmosphere. It is usually used to describe connections between upper, middle or lower levels (such as upper-level divergence causing lower-level convergence in cyclone formation), but can also be used to describe such connections over lateral distances rather than height alone.
In some respects, teleconnections could be considered a type of forcing.

Divergence and convergence

An area of convergence is one in which the total mass of air is increasing with time, resulting in an increase in pressure at locations below the convergence level (recall that atmospheric pressure is just the total weight of air above a given point). Divergence is the opposite of convergence - an area where the total mass of air is decreasing with time, resulting in falling pressure in regions below the area of divergence. Where divergence is occurring in the upper atmosphere, there will be air coming in to try to balance the net loss of mass (this is called the principle of mass conservation), and there is a resulting upward motion (positive vertical velocity). Another way to state this is to say that regions of upper-air divergence are conducive to lower-level convergence, cyclone formation, and positive vertical velocity. Therefore, identifying regions of upper-air divergence is an important step in forecasting the formation of a surface low pressure area.

- "ISS022-E-062672 caption". NASA. Retrieved 21 September 2012.
- McGraw-Hill Concise Encyclopedia of Science & Technology (1984). "Troposphere". "It contains about four-fifths of the mass of the whole atmosphere."
- Danielson, Levin, and Abrams. Meteorology. McGraw Hill, 2003.
- Landau and Lifshitz. Fluid Mechanics. Pergamon, 1979.
- Landau and Lifshitz. Statistical Physics Part 1. Pergamon, 1980.
- Kittel and Kroemer. Thermal Physics. Freeman, 1980; chapter 6, problem 11.
- Paul E. Lydolph (1985). The Climate of the Earth. Rowman and Littlefield Publishers Inc. p. 12.
- "American Meteorological Society Glossary - Zonal Flow". Allen Press Inc. June 2000. Retrieved 2006-10-03.
- "American Meteorological Society Glossary - Meridional Flow". Allen Press Inc. June 2000. Retrieved 2006-10-03.
- "Meteorology - MSN Encarta, 'Energy Flow and Global Circulation'". Encarta.Msn.com. Archived from the original on 2009-10-31. Retrieved 2006-10-13.
A quantity is subject to exponential decay if it decreases at a rate proportional to its current value. Symbolically, this process can be expressed by the following differential equation, where N is the quantity and λ (lambda) is a positive rate called the exponential decay constant:

dN/dt = −λN

The solution to this equation (see derivation below) is:

N(t) = N0 e^(−λt)

Here N(t) is the quantity at time t, and N0 = N(0) is the initial quantity, i.e. the quantity at time t = 0.

Measuring rates of decay

If the decaying quantity, N(t), is the number of discrete elements in a certain set, it is possible to compute the average length of time that an element remains in the set. This is called the mean lifetime (or simply the lifetime or the exponential time constant), τ, and it can be shown that it relates to the decay rate, λ, in the following way:

τ = 1/λ

The mean lifetime can be looked at as a "scaling time", because we can write the exponential decay equation in terms of the mean lifetime, τ, instead of the decay constant, λ:

N(t) = N0 e^(−t/τ)

We can see that τ is the time at which the population of the assembly is reduced to 1/e = 0.367879441 times its initial value. E.g., if the initial population of the assembly, N(0), is 1000, then at time τ, the population, N(τ), is 368. A very similar equation will be seen below, which arises when the base of the exponential is chosen to be 2, rather than e. In that case the scaling time is the "half-life".

A more intuitive characteristic of exponential decay for many people is the time required for the decaying quantity to fall to one half of its initial value. This time is called the half-life, and is often denoted by the symbol t1/2. The half-life can be written in terms of the decay constant, or the mean lifetime, as:

t1/2 = ln(2)/λ = τ ln(2)

When this expression is inserted for τ in the exponential equation above, and ln 2 is absorbed into the base, this equation becomes:

N(t) = N0 (1/2)^(t/t1/2)

Thus, the amount of material left is 1/2 raised to the (whole or fractional) number of half-lives that have passed. For example, after 3 half-lives there will be (1/2)^3 = 1/8 of the original material left. Therefore, the mean lifetime is equal to the half-life divided by the natural log of 2, or:

τ = t1/2 / ln(2)

E.g. polonium-210 has a half-life of 138 days, and a mean lifetime of 200 days.

Solution of the differential equation

The equation that describes exponential decay is

dN/dt = −λN

or, by rearranging,

dN/N = −λ dt

Integrating, we have

ln N = −λt + C

where C is the constant of integration, and hence

N(t) = e^C e^(−λt) = N0 e^(−λt)

where the final substitution, N0 = e^C, is obtained by evaluating the equation at t = 0, as N0 is defined as being the quantity at t = 0.

This is the form of the equation that is most commonly used to describe exponential decay. Any one of decay constant, mean lifetime, or half-life is sufficient to characterise the decay. The notation λ for the decay constant is a remnant of the usual notation for an eigenvalue. In this case, λ is the eigenvalue of the negative of the differential operator with N(t) as the corresponding eigenfunction. The units of the decay constant are s⁻¹.

Derivation of the mean lifetime

Given an assembly of elements, the number of which decreases ultimately to zero, the mean lifetime, τ (also called simply the lifetime), is the expected value of the amount of time before an object is removed from the assembly.
Specifically, if the individual lifetime of an element of the assembly is the time elapsed between some reference time and the removal of that element from the assembly, the mean lifetime is the arithmetic mean of the individual lifetimes.

Starting from the population formula N(t) = N0 e^(−λt), we first let c be the normalizing factor to convert to a probability density function:

1 = ∫₀^∞ c·N0 e^(−λt) dt = c·N0/λ

or, on rearranging,

c = λ/N0

We see that exponential decay is a scalar multiple of the exponential distribution (i.e. the individual lifetime of each object is exponentially distributed), which has a well-known expected value. We can compute it here using integration by parts; the result is the mean lifetime τ = 1/λ.

Decay by two or more processes

A quantity may decay via two or more different processes simultaneously. In general, these processes (often called "decay modes", "decay channels", "decay routes" etc.) have different probabilities of occurring, and thus occur at different rates with different half-lives, in parallel. The total decay rate of the quantity N is given by the sum of the decay routes; thus, in the case of two processes:

−dN/dt = (λ1 + λ2) N = λc N

The solution to this equation is given in the previous section, where the sum λ1 + λ2 is treated as a new total decay constant λc. The partial mean life associated with an individual process is by definition the multiplicative inverse of the corresponding partial decay constant: τ1 = 1/λ1 and τ2 = 1/λ2. A combined τc can be given in terms of the individual λs:

1/τc = λc = λ1 + λ2, i.e. 1/τc = 1/τ1 + 1/τ2

Since half-lives differ from mean life by a constant factor, the same equation holds in terms of the two corresponding half-lives:

1/T1/2 = 1/t1 + 1/t2

where T1/2 is the combined or total half-life for the process, and t1 and t2 are the so-named partial half-lives of the corresponding processes. The terms "partial half-life" and "partial mean life" denote quantities derived from a decay constant as if the given decay mode were the only decay mode for the quantity. The term "partial half-life" is misleading, because it cannot be measured as a time interval for which a certain quantity is halved. In terms of separate decay constants, the total half-life can be shown to be

T1/2 = ln(2)/λc = ln(2)/(λ1 + λ2)

For a decay by three simultaneous exponential processes the total half-life can be computed as above:

T1/2 = ln(2)/(λ1 + λ2 + λ3), or equivalently 1/T1/2 = 1/t1 + 1/t2 + 1/t3

(A short numerical sketch of these relations appears after the examples below.)

Applications and examples

Exponential decay occurs in a wide variety of situations. Most of these fall into the domain of the natural sciences. Many decay processes that are often treated as exponential are really only exponential so long as the sample is large and the law of large numbers holds. For small samples, a more general analysis is necessary, accounting for a Poisson process.

- Beer froth: Arnd Leike, of the Ludwig Maximilian University of Munich, won an Ig Nobel Prize for demonstrating that beer froth obeys the law of exponential decay.
- Chemical reactions: The rates of certain types of chemical reactions depend on the concentration of one or another reactant. Reactions whose rate depends only on the concentration of one reactant (known as first-order reactions) consequently follow exponential decay. For instance, many enzyme-catalyzed reactions behave this way.
- Electrostatics: The electric charge (or, equivalently, the potential) stored on a capacitor (capacitance C) decays exponentially, if the capacitor experiences a constant external load (resistance R). The exponential time-constant τ for the process is R C, and the half-life is therefore R C ln 2. (Furthermore, the particular case of a capacitor discharging through several parallel resistors makes an interesting example of multiple decay processes, with each resistor representing a separate process.
In fact, the expression for the equivalent resistance of two resistors in parallel mirrors the equation for the half-life with two decay processes.) - Fluid Dynamics: A fluid emptying from a tube with an opening at the bottom will empty at a rate which depends on the pressure at the opening (which in turn depends on the height of the fluid remaining). Thus the height of the column of fluid remaining will follow an exponential decay. - Geophysics: Atmospheric pressure decreases approximately exponentially with increasing height above sea level, at a rate of about 12% per 1000m. - Heat transfer: If an object at one temperature is exposed to a medium of another temperature, the temperature difference between the object and the medium follows exponential decay (in the limit of slow processes; equivalent to "good" heat conduction inside the object, so that its temperature remains relatively uniform through its volume). See also Newton's law of cooling. - Luminescence: After excitation, the emission intensity – which is proportional to the number of excited atoms or molecules – of a luminescent material decays exponentially. Depending on the number of mechanisms involved, the decay can be mono- or multi-exponential. - Pharmacology and toxicology: It is found that many administered substances are distributed and metabolized (see clearance) according to exponential decay patterns. The biological half-lives "alpha half-life" and "beta half-life" of a substance measure how quickly a substance is distributed and eliminated. - Physical optics: The intensity of electromagnetic radiation such as light or X-rays or gamma rays in an absorbent medium, follows an exponential decrease with distance into the absorbing medium. This is known as the Beer-Lambert law. - Radioactivity: In a sample of a radionuclide that undergoes radioactive decay to a different state, the number of atoms in the original state follows exponential decay as long as the remaining number of atoms is large. The decay product is termed a radiogenic nuclide. - Thermoelectricity: The decline in resistance of a Negative Temperature Coefficient Thermistor as temperature is increased. - Vibrations: Some vibrations may decay exponentially; this characteristic is often found in damped mechanical oscillators, and used in creating ADSR envelopes in synthesizers. An overdamped system will simply return to equilibrium via an exponential decay. - Finance: a retirement fund will decay exponentially being subject to discrete payout amounts, usually monthly, and an input subject to a continuous interest rate. A differential equation dA/dt = input – output can be written and solved to find the time to reach any amount A, remaining in the fund. - In simple glottochronology, the (debatable) assumption of a constant decay rate in languages allows one to estimate the age of single languages. (To compute the time of split between two languages requires additional assumptions, independent of exponential decay). - The core routing protocol on the Internet, BGP, has to maintain a routing table in order to remember the paths a packet can be deviated to. When one of these paths repeatedly changes its state from available to not available (and vice versa), the BGP router controlling that path has to repeatedly add and remove the path record from its routing table (flaps the path), thus spending local resources such as CPU and RAM and, even more, broadcasting useless information to peer routers. 
To prevent this undesired behavior, an algorithm named route flap damping assigns each route a weight that gets bigger each time the route changes its state and decays exponentially with time. When the weight reaches a certain limit, the route is suppressed and no longer used until the weight has decayed back below a threshold.

- Exponential formula
- Exponential growth
- Radioactive decay, for the mathematics of chains of exponential processes with differing constants
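As a small numerical check of the λ–τ–half-life relations and the combined half-life formula above, here is a minimal Python sketch. The polonium-210 figure comes from the text; the helper names and the two-channel sample values are illustrative assumptions.

```python
import math

# Relations from the text: tau = 1/lambda, t_half = ln(2)/lambda = tau*ln(2),
# and for several simultaneous decay channels 1/T_half = 1/t1 + 1/t2 + ...

def decay_constant_from_half_life(t_half: float) -> float:
    return math.log(2) / t_half

def mean_lifetime_from_half_life(t_half: float) -> float:
    return t_half / math.log(2)

def remaining_fraction(t: float, t_half: float) -> float:
    """Fraction N(t)/N0 left after time t, using N(t) = N0 * (1/2)**(t / t_half)."""
    return 0.5 ** (t / t_half)

def combined_half_life(*partial_half_lives: float) -> float:
    """Total half-life when several decay modes act in parallel."""
    return 1.0 / sum(1.0 / t for t in partial_half_lives)

if __name__ == "__main__":
    # Polonium-210, with the 138-day half-life quoted above:
    t_half = 138.0
    print(f"lambda        ~ {decay_constant_from_half_life(t_half):.5f} per day")
    print(f"mean lifetime ~ {mean_lifetime_from_half_life(t_half):.0f} days")
    print(f"left after 3 half-lives: {remaining_fraction(3 * t_half, t_half):.3f}")

    # Two hypothetical parallel decay channels with partial half-lives 10 and 30:
    print(f"combined half-life of 10 and 30: {combined_half_life(10.0, 30.0):.1f}")
```

Running it reproduces the mean lifetime of roughly 199 days quoted (as "200 days") in the text, and a combined half-life of 7.5 for the hypothetical 10/30 pair.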
A solar cell, or photovoltaic cell, is an electrical device that converts the energy of light directly into electricity by the photovoltaic effect, which is a physical and chemical phenomenon. It is a form of photoelectric cell, defined as a device whose electrical characteristics, such as current, voltage, or resistance, vary when exposed to light. Individual solar cell devices can be combined to form modules, otherwise known as solar panels. In basic terms a single junction silicon solar cell can produce a maximum open-circuit voltage of approximately 0.5 to 0.6 volts.

Solar cells are described as being photovoltaic, irrespective of whether the source is sunlight or an artificial light. They are also used as photodetectors (for example infrared detectors), detecting light or other electromagnetic radiation near the visible range, or measuring light intensity.

The operation of a photovoltaic (PV) cell requires three basic attributes:
- The absorption of light, generating either electron-hole pairs or excitons.
- The separation of charge carriers of opposite types.
- The separate extraction of those carriers to an external circuit.

In contrast, a solar thermal collector supplies heat by absorbing sunlight, for the purpose of either direct heating or indirect electrical power generation from heat. A "photoelectrolytic cell" (photoelectrochemical cell), on the other hand, refers either to a type of photovoltaic cell (like that developed by Edmond Becquerel and modern dye-sensitized solar cells), or to a device that splits water directly into hydrogen and oxygen using only solar illumination.

Assemblies of solar cells are used to make solar modules that generate electrical power from sunlight, as distinguished from a "solar thermal module" or "solar hot water panel". A solar array generates solar power using solar energy.

Cells, modules, panels and systems

Multiple solar cells in an integrated group, all oriented in one plane, constitute a solar photovoltaic panel or module. Photovoltaic modules often have a sheet of glass on the sun-facing side, allowing light to pass while protecting the semiconductor wafers. Solar cells are usually connected in series or in series–parallel circuits in modules; connecting cells in series creates an additive voltage, while connecting cells in parallel yields a higher current. However, problems such as shadow effects can shut down the weaker (less illuminated) parallel string (a number of series connected cells), causing substantial power loss and possible damage because of the reverse bias applied to the shadowed cells by their illuminated partners. Strings of series cells are usually handled independently and not connected in parallel, though as of 2014, individual power boxes are often supplied for each module, and are connected in parallel. Although modules can be interconnected to create an array with the desired peak DC voltage and loading current capacity, using independent MPPTs (maximum power point trackers) is preferable. Otherwise, shunt diodes can reduce shadowing power loss in arrays with series/parallel connected cells.
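As a rough illustration of how series and parallel wiring scale a module's output, here is a minimal sketch. The ~0.6 V single-cell open-circuit voltage is taken from the text above; the cell current, the 60-cell layout, and the idealization that series voltages and parallel currents simply add are illustrative assumptions.

```python
# Sketch: idealized electrical scaling of a solar module from its cells.
# Series connections add voltage; parallel strings add current (ideal case,
# ignoring mismatch and shading effects discussed above).

CELL_VOC = 0.6   # volts, open-circuit voltage of one silicon cell (from the text)
CELL_ISC = 9.0   # amperes, short-circuit current of one cell (assumed value)

def module_voc(cells_in_series: int) -> float:
    """Open-circuit voltage of a string of identical cells in series."""
    return cells_in_series * CELL_VOC

def module_isc(strings_in_parallel: int) -> float:
    """Short-circuit current of identical strings connected in parallel."""
    return strings_in_parallel * CELL_ISC

if __name__ == "__main__":
    series, parallel = 60, 1   # a common 60-cell module layout, single string
    print(f"open-circuit voltage  ~ {module_voc(series):.1f} V")
    print(f"short-circuit current ~ {module_isc(parallel):.1f} A")
```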
Typical solar module prices in US dollars per watt for Australia, China, France, Germany, Italy, Japan, the United Kingdom and the United States are compared in the IEA report Technology Roadmap: Solar Photovoltaic Energy (2014 edition); the DOE report Photovoltaic System Pricing Trends gives lower prices for the U.S.

The photovoltaic effect was experimentally demonstrated first by French physicist Edmond Becquerel. In 1839, at age 19, he built the world's first photovoltaic cell in his father's laboratory. Willoughby Smith first described the "Effect of Light on Selenium during the passage of an Electric Current" in a 20 February 1873 issue of Nature. In 1883 Charles Fritts built the first solid state photovoltaic cell by coating the semiconductor selenium with a thin layer of gold to form the junctions; the device was only around 1% efficient. Other milestones include:

- 1888 – Russian physicist Aleksandr Stoletov built the first cell based on the outer photoelectric effect discovered by Heinrich Hertz in 1887.
- 1905 – Albert Einstein proposed a new quantum theory of light and explained the photoelectric effect in a landmark paper, for which he received the Nobel Prize in Physics in 1921.
- 1941 – Vadim Lashkaryov discovered p-n-junctions in Cu2O and Ag2S protocells.
- 1946 – Russell Ohl patented the modern junction semiconductor solar cell, while working on the series of advances that would lead to the transistor.
- 1954 – the first practical photovoltaic cell was publicly demonstrated at Bell Laboratories. The inventors were Calvin Souther Fuller, Daryl Chapin and Gerald Pearson.
- 1958 – solar cells gained prominence with their incorporation onto the Vanguard I satellite.

Solar cells were first used in a prominent application when they were proposed and flown on the Vanguard satellite in 1958, as an alternative power source to the primary battery power source. By adding cells to the outside of the body, the mission time could be extended with no major changes to the spacecraft or its power systems. In 1959 the United States launched Explorer 6, featuring large wing-shaped solar arrays, which became a common feature in satellites. These arrays consisted of 9600 Hoffman solar cells.

By the 1960s, solar cells were (and still are) the main power source for most Earth orbiting satellites and a number of probes into the solar system, since they offered the best power-to-weight ratio. However, this success was possible only because, in the space application, power system costs could be high: space users had few other power options and were willing to pay for the best possible cells. The space power market drove the development of higher efficiencies in solar cells up until the National Science Foundation "Research Applied to National Needs" program began to push development of solar cells for terrestrial applications. In the early 1990s the technology used for space solar cells diverged from the silicon technology used for terrestrial panels, with the spacecraft application shifting to gallium arsenide-based III-V semiconductor materials, which then evolved into the modern III-V multijunction photovoltaic cell used on spacecraft.

Improvements were gradual over the 1960s. This was also the reason that costs remained high, because space users were willing to pay for the best possible cells, leaving no reason to invest in lower-cost, less-efficient solutions. The price was determined largely by the semiconductor industry; their move to integrated circuits in the 1960s led to the availability of larger boules at lower relative prices.
As their price fell, the price of the resulting cells did as well. These effects lowered 1971 cell costs to some $100 per watt. In late 1969 Elliot Berman joined Exxon's task force which was looking for projects 30 years in the future and in April 1973 he founded Solar Power Corporation, a wholly owned subsidiary of Exxon at that time. The group had concluded that electrical power would be much more expensive by 2000, and felt that this increase in price would make alternative energy sources more attractive. He conducted a market study and concluded that a price per watt of about $20/watt would create significant demand. The team eliminated the steps of polishing the wafers and coating them with an anti-reflective layer, relying on the rough-sawn wafer surface. The team also replaced the expensive materials and hand wiring used in space applications with a printed circuit board on the back, acrylic plastic on the front, and silicone glue between the two, "potting" the cells. Solar cells could be made using cast-off material from the electronics market. By 1973 they announced a product, and SPC convinced Tideland Signal to use its panels to power navigational buoys, initially for the U.S. Coast Guard. Research and industrial production Research into solar power for terrestrial applications became prominent with the U.S. National Science Foundation's Advanced Solar Energy Research and Development Division within the "Research Applied to National Needs" program, which ran from 1969 to 1977, and funded research on developing solar power for ground electrical power systems. A 1973 conference, the "Cherry Hill Conference", set forth the technology goals required to achieve this goal and outlined an ambitious project for achieving them, kicking off an applied research program that would be ongoing for several decades. The program was eventually taken over by the Energy Research and Development Administration (ERDA), which was later merged into the U.S. Department of Energy. Following the 1973 oil crisis, oil companies used their higher profits to start (or buy) solar firms, and were for decades the largest producers. Exxon, ARCO, Shell, Amoco (later purchased by BP) and Mobil all had major solar divisions during the 1970s and 1980s. Technology companies also participated, including General Electric, Motorola, IBM, Tyco and RCA. Declining costs and exponential growth Adjusting for inflation, it cost $96 per watt for a solar module in the mid-1970s. Process improvements and a very large boost in production have brought that figure down 99%, to 68¢ per watt in 2016, according to data from Bloomberg New Energy Finance. Swanson's law is an observation similar to Moore's Law that states that solar cell prices fall 20% for every doubling of industry capacity. It was featured in an article in the British weekly newspaper The Economist in late 2012. Further improvements reduced production cost to under $1 per watt, with wholesale costs well under $2. Balance of system costs were then higher than those of the panels. Large commercial arrays could be built, as of 2010, at below $3.40 a watt, fully commissioned. As the semiconductor industry moved to ever-larger boules, older equipment became inexpensive. Cell sizes grew as equipment became available on the surplus market; ARCO Solar's original panels used cells 2 to 4 inches (50 to 100 mm) in diameter. Panels in the 1990s and early 2000s generally used 125 mm wafers; since 2008, almost all new panels use 156 mm cells. 
The widespread introduction of flat screen televisions in the late 1990s and early 2000s led to the wide availability of large, high-quality glass sheets to cover the panels. During the 1990s, polysilicon ("poly") cells became increasingly popular. These cells offer less efficiency than their monosilicon ("mono") counterparts, but they are grown in large vats that reduce cost. By the mid-2000s, poly was dominant in the low-cost panel market, but more recently the mono returned to widespread use. Manufacturers of wafer-based cells responded to high silicon prices in 2004–2008 with rapid reductions in silicon consumption. In 2008, according to Jef Poortmans, director of IMEC's organic and solar department, current cells use 8–9 grams (0.28–0.32 oz) of silicon per watt of power generation, with wafer thicknesses in the neighborhood of 200 microns. Crystalline silicon panels dominate worldwide markets and are mostly manufactured in China and Taiwan. By late 2011, a drop in European demand dropped prices for crystalline solar modules to about $1.09 per watt down sharply from 2010. Prices continued to fall in 2012, reaching $0.62/watt by 4Q2012. Solar PV is growing fastest in Asia, with China and Japan currently accounting for half of worldwide deployment. Global installed PV capacity reached at least 301 gigawatts in 2016, and grew to supply 1.3% of global power by 2016. In fact, the harnessed energy of silicon solar cells at the cost of a dollar has surpassed its oil counterpart since 2004. It was anticipated that electricity from PV will be competitive with wholesale electricity costs all across Europe and the energy payback time of crystalline silicon modules can be reduced to below 0.5 years by 2020. Subsidies and grid parity Solar-specific feed-in tariffs vary by country and within countries. Such tariffs encourage the development of solar power projects. Widespread grid parity, the point at which photovoltaic electricity is equal to or cheaper than grid power without subsidies, likely requires advances on all three fronts. Proponents of solar hope to achieve grid parity first in areas with abundant sun and high electricity costs such as in California and Japan. In 2007 BP claimed grid parity for Hawaii and other islands that otherwise use diesel fuel to produce electricity. George W. Bush set 2015 as the date for grid parity in the US. The Photovoltaic Association reported in 2012 that Australia had reached grid parity (ignoring feed in tariffs). The price of solar panels fell steadily for 40 years, interrupted in 2004 when high subsidies in Germany drastically increased demand there and greatly increased the price of purified silicon (which is used in computer chips as well as solar panels). The recession of 2008 and the onset of Chinese manufacturing caused prices to resume their decline. In the four years after January 2008 prices for solar modules in Germany dropped from €3 to €1 per peak watt. During that same time production capacity surged with an annual growth of more than 50%. China increased market share from 8% in 2008 to over 55% in the last quarter of 2010. In December 2012 the price of Chinese solar panels had dropped to $0.60/Wp (crystalline modules). (The abbreviation Wp stands for watt peak capacity, or the maximum capacity under optimal conditions.) As of the end of 2016, it was reported that spot prices for assembled solar panels (not cells) had fallen to a record-low of US$0.36/Wp. 
The second largest supplier, Canadian Solar Inc., had reported costs of US$0.37/Wp in the third quarter of 2016, having dropped $0.02 from the previous quarter, and hence was probably still at least breaking even. Many producers expected costs would drop to the vicinity of $0.30 by the end of 2017. It was also reported that new solar installations were cheaper than coal-based thermal power plants in some regions of the world, and this was expected to be the case in most of the world within a decade. The solar cell works in several steps: - Photons in sunlight hit the solar panel and are absorbed by semiconducting materials, such as silicon. - Electrons are excited from their current molecular/atomic orbital. Once excited an electron can either dissipate the energy as heat and return to its orbital or travel through the cell until it reaches an electrode. Current flows through the material to cancel the potential and this electricity is captured. The chemical bonds of the material are vital for this process to work, and usually silicon is used in two layers, one layer being doped with boron, the other phosphorus. These layers have different chemical electric charges and subsequently both drive and direct the current of electrons. - An array of solar cells converts solar energy into a usable amount of direct current (DC) electricity. - An inverter can convert the power to alternating current (AC). The most commonly known solar cell is configured as a large-area p–n junction made from silicon. Other possible solar cell types are organic solar cells, dye sensitized solar cells, perovskite solar cells, quantum dot solar cells etc. The illuminated side of a solar cell generally has a transparent conducting film for allowing light to enter into active material and to collect the generated charge carriers. Typically, films with high transmittance and high electrical conductance such as indium tin oxide, conducting polymers or conducting nanowire networks are used for the purpose. Solar cell efficiency may be broken down into reflectance efficiency, thermodynamic efficiency, charge carrier separation efficiency and conductive efficiency. The overall efficiency is the product of these individual metrics. A solar cell has a voltage dependent efficiency curve, temperature coefficients, and allowable shadow angles. Due to the difficulty in measuring these parameters directly, other parameters are substituted: thermodynamic efficiency, quantum efficiency, integrated quantum efficiency, VOC ratio, and fill factor. Reflectance losses are a portion of quantum efficiency under "external quantum efficiency". Recombination losses make up another portion of quantum efficiency, VOC ratio, and fill factor. Resistive losses are predominantly categorized under fill factor, but also make up minor portions of quantum efficiency, VOC ratio. The fill factor is the ratio of the actual maximum obtainable power to the product of the open circuit voltage and short circuit current. This is a key parameter in evaluating performance. In 2009, typical commercial solar cells had a fill factor > 0.70. Grade B cells were usually between 0.4 and 0.7. Cells with a high fill factor have a low equivalent series resistance and a high equivalent shunt resistance, so less of the current produced by the cell is dissipated in internal losses. Single p–n junction crystalline silicon devices are now approaching the theoretical limiting power efficiency of 33.16%, noted as the Shockley–Queisser limit in 1961. 
In the extreme, with an infinite number of layers, the corresponding limit is 86% using concentrated sunlight. In 2014, three companies broke the record of 25.6% for a silicon solar cell. Panasonic's was the most efficient. The company moved the front contacts to the rear of the panel, eliminating shaded areas. In addition they applied thin silicon films to the (high quality silicon) wafer's front and back to eliminate defects at or near the wafer surface. In 2015, a 4-junction GaInP/GaAs//GaInAsP/GaInAs solar cell achieved a new laboratory record efficiency of 46.1 percent (concentration ratio of sunlight = 312) in a French-German collaboration between the Fraunhofer Institute for Solar Energy Systems (Fraunhofer ISE), CEA-LETI and SOITEC. In September 2015, Fraunhofer ISE announced the achievement of an efficiency above 20% for epitaxial wafer cells. The work on optimizing the atmospheric-pressure chemical vapor deposition (APCVD) in-line production chain was done in collaboration with NexWafe GmbH, a company spun off from Fraunhofer ISE to commercialize production. For triple-junction thin-film solar cells, the world record is 13.6%, set in June 2015. In 2017, a team of researchers at National Renewable Energy Laboratory (NREL), EPFL and CSEM (Switzerland) reported record one-sun efficiencies of 32.8% for dual-junction GaInP/GaAs solar cell devices. In addition, the dual-junction device was mechanically stacked with a Si solar cell, to achieve a record one-sun efficiency of 35.9% for triple-junction solar cells. Solar cells are typically named after the semiconducting material they are made of. These materials must have certain characteristics in order to absorb sunlight. Some cells are designed to handle sunlight that reaches the Earth's surface, while others are optimized for use in space. Solar cells can be made of only one single layer of light-absorbing material (single-junction) or use multiple physical configurations (multi-junctions) to take advantage of various absorption and charge separation mechanisms. Solar cells can be classified into first, second and third generation cells. The first generation cells—also called conventional, traditional or wafer-based cells—are made of crystalline silicon, the commercially predominant PV technology, that includes materials such as polysilicon and monocrystalline silicon. Second generation cells are thin film solar cells, that include amorphous silicon, CdTe and CIGS cells and are commercially significant in utility-scale photovoltaic power stations, building integrated photovoltaics or in small stand-alone power system. The third generation of solar cells includes a number of thin-film technologies often described as emerging photovoltaics—most of them have not yet been commercially applied and are still in the research or development phase. Many use organic materials, often organometallic compounds as well as inorganic substances. Despite the fact that their efficiencies had been low and the stability of the absorber material was often too short for commercial applications, there is a lot of research invested into these technologies as they promise to achieve the goal of producing low-cost, high-efficiency solar cells. By far, the most prevalent bulk material for solar cells is crystalline silicon (c-Si), also known as "solar grade silicon". Bulk silicon is separated into multiple categories according to crystallinity and crystal size in the resulting ingot, ribbon or wafer. 
These cells are entirely based around the concept of a p-n junction. Solar cells made of c-Si are made from wafers between 160 and 240 micrometers thick. Monocrystalline silicon (mono-Si) solar cells are more efficient and more expensive than most other types of cells. The corners of the cells look clipped, like an octagon, because the wafer material is cut from cylindrical ingots, that are typically grown by the Czochralski process. Solar panels using mono-Si cells display a distinctive pattern of small white diamonds. Epitaxial silicon development Epitaxial wafers of crystalline silicon can be grown on a monocrystalline silicon "seed" wafer by chemical vapor deposition (CVD), and then detached as self-supporting wafers of some standard thickness (e.g., 250 µm) that can be manipulated by hand, and directly substituted for wafer cells cut from monocrystalline silicon ingots. Solar cells made with this "kerfless" technique can have efficiencies approaching those of wafer-cut cells, but at appreciably lower cost if the CVD can be done at atmospheric pressure in a high-throughput inline process. The surface of epitaxial wafers may be textured to enhance light absorption. Polycrystalline silicon, or multicrystalline silicon (multi-Si) cells are made from cast square ingots—large blocks of molten silicon carefully cooled and solidified. They consist of small crystals giving the material its typical metal flake effect. Polysilicon cells are the most common type used in photovoltaics and are less expensive, but also less efficient, than those made from monocrystalline silicon. Ribbon silicon is a type of polycrystalline silicon—it is formed by drawing flat thin films from molten silicon and results in a polycrystalline structure. These cells are cheaper to make than multi-Si, due to a great reduction in silicon waste, as this approach does not require sawing from ingots. However, they are also less efficient. Mono-like-multi silicon (MLM) This form was developed in the 2000s and introduced commercially around 2009. Also called cast-mono, this design uses polycrystalline casting chambers with small "seeds" of mono material. The result is a bulk mono-like material that is polycrystalline around the outsides. When sliced for processing, the inner sections are high-efficiency mono-like cells (but square instead of "clipped"), while the outer edges are sold as conventional poly. This production method results in mono-like cells at poly-like prices. Thin-film technologies reduce the amount of active material in a cell. Most designs sandwich active material between two panes of glass. Since silicon solar panels only use one pane of glass, thin film panels are approximately twice as heavy as crystalline silicon panels, although they have a smaller ecological impact (determined from life cycle analysis). Cadmium telluride is the only thin film material so far to rival crystalline silicon in cost/watt. However cadmium is highly toxic and tellurium (anion: "telluride") supplies are limited. The cadmium present in the cells would be toxic if released. However, release is impossible during normal operation of the cells and is unlikely during fires in residential roofs. A square meter of CdTe contains approximately the same amount of Cd as a single C cell nickel-cadmium battery, in a more stable and less soluble form. Copper indium gallium selenide Copper indium gallium selenide (CIGS) is a direct band gap material. 
It has the highest efficiency (~20%) among all commercially significant thin film materials (see CIGS solar cell). Traditional methods of fabrication involve vacuum processes including co-evaporation and sputtering. Recent developments at IBM and Nanosolar attempt to lower the cost by using non-vacuum solution processes. Silicon thin film Silicon thin-film cells are mainly deposited by chemical vapor deposition (typically plasma-enhanced, PE-CVD) from silane gas and hydrogen gas. Depending on the deposition parameters, this can yield amorphous silicon (a-Si or a-Si:H), protocrystalline silicon or nanocrystalline silicon (nc-Si or nc-Si:H), also called microcrystalline silicon. Amorphous silicon is the most well-developed thin film technology to-date. An amorphous silicon (a-Si) solar cell is made of non-crystalline or microcrystalline silicon. Amorphous silicon has a higher bandgap (1.7 eV) than crystalline silicon (c-Si) (1.1 eV), which means it absorbs the visible part of the solar spectrum more strongly than the higher power density infrared portion of the spectrum. The production of a-Si thin film solar cells uses glass as a substrate and deposits a very thin layer of silicon by plasma-enhanced chemical vapor deposition (PECVD). Protocrystalline silicon with a low volume fraction of nanocrystalline silicon is optimal for high open circuit voltage. Nc-Si has about the same bandgap as c-Si and nc-Si and a-Si can advantageously be combined in thin layers, creating a layered cell called a tandem cell. The top cell in a-Si absorbs the visible light and leaves the infrared part of the spectrum for the bottom cell in nc-Si. Gallium arsenide thin film The semiconductor material Gallium arsenide (GaAs) is also used for single-crystalline thin film solar cells. Although GaAs cells are very expensive, they hold the world's record in efficiency for a single-junction solar cell at 28.8%. GaAs is more commonly used in multijunction photovoltaic cells for concentrated photovoltaics (CPV, HCPV) and for solar panels on spacecrafts, as the industry favours efficiency over cost for space-based solar power. Based on the previous literature and some theoretical analysis, there are several reasons why GaAs has such high power conversion efficiency. First, GaAs bandgap is 1.43ev which is almost ideal for solar cells. Second, because Gallium is a by-product of the smelting of other metals, GaAs cells are relatively insensitive to heat and it can keep high efficiency when temperature is quite high. Third, GaAs has the wide range of design options. Using GaAs as active layer in solar cell, engineers can have multiple choices of other layers which can better generate electrons and holes in GaAs. Multi-junction cells consist of multiple thin films, each essentially a solar cell grown on top of another, typically using metalorganic vapour phase epitaxy. Each layer has a different band gap energy to allow it to absorb electromagnetic radiation over a different portion of the spectrum. Multi-junction cells were originally developed for special applications such as satellites and space exploration, but are now used increasingly in terrestrial concentrator photovoltaics (CPV), an emerging technology that uses lenses and curved mirrors to concentrate sunlight onto small, highly efficient multi-junction solar cells. 
By concentrating sunlight up to a thousand times, High concentrated photovoltaics (HCPV) has the potential to outcompete conventional solar PV in the future.:21,26 Tandem solar cells based on monolithic, series connected, gallium indium phosphide (GaInP), gallium arsenide (GaAs), and germanium (Ge) p–n junctions, are increasing sales, despite cost pressures. Between December 2006 and December 2007, the cost of 4N gallium metal rose from about $350 per kg to $680 per kg. Additionally, germanium metal prices have risen substantially to $1000–1200 per kg this year. Those materials include gallium (4N, 6N and 7N Ga), arsenic (4N, 6N and 7N) and germanium, pyrolitic boron nitride (pBN) crucibles for growing crystals, and boron oxide, these products are critical to the entire substrate manufacturing industry. A triple-junction cell, for example, may consist of the semiconductors: GaAs, Ge, and GaInP 2. Triple-junction GaAs solar cells were used as the power source of the Dutch four-time World Solar Challenge winners Nuna in 2003, 2005 and 2007 and by the Dutch solar cars Solutra (2005), Twente One (2007) and 21Revolution (2009). GaAs based multi-junction devices are the most efficient solar cells to date. On 15 October 2012, triple junction metamorphic cells reached a record high of 44%. In 2016, a new approach was described for producing hybrid photovoltaic wafers combining the high efficiency of III-V multi-junction solar cells with the economies and wealth of experience associated with silicon. The technical complications involved in growing the III-V material on silicon at the required high temperatures, a subject of study for some 30 years, are avoided by epitaxial growth of silicon on GaAs at low temperature by plasma-enhanced chemical vapor deposition (PECVD). Research in solar cells Perovskite solar cells Perovskite solar cells are solar cells that include a perovskite-structured material as the active layer. Most commonly, this is a solution-processed hybrid organic-inorganic tin or lead halide based material. Efficiencies have increased from below 5% at their first usage in 2009 to over 20% in 2014, making them a very rapidly advancing technology and a hot topic in the solar cell field. Perovskite solar cells are also forecast to be extremely cheap to scale up, making them a very attractive option for commercialisation. So far most types of perovskite solar cells have not reached sufficient operational stability to be commercialised, although many research groups are investigating ways to solve this. Bifacial solar cells With a transparent rear side, bifacial solar cells can absorb light from both the front and rear sides. Hence, they can produce more electricity than conventional monofacial solar cells. The first patent of bifacial solar cells was filed by Japanese researcher Hiroshi Mori, in 1966. Later, it is said that Russia was the first to deploy bifacial solar cells in their space program in the 1970s. In 1976, the Institute for Solar Energy of the Technical University of Madrid, began a research program for the development of bifacial solar cells led by Prof. Antonio Luque. Based on 1977 US and Spanish patents by Luque, a practical bifacial cell was proposed with a front face as anode and a rear face as cathode; in previously reported proposals and attempts both faces were anodic and interconnection between cells was complicated and expensive. 
In 1980, Andrés Cuevas, a PhD student in Luque's team, demonstrated experimentally a 50% increase in output power of bifacial solar cells, relative to identically oriented and tilted monofacial ones, when a white background was provided. In 1981 the company Isofoton was founded in Málaga to produce the developed bifacial cells, thus becoming the first industrialization of this PV cell technology. With an initial production capacity of 300 kW/yr. of bifacial solar cells, early landmarks of Isofoton's production were the 20kWp power plant in San Agustín de Guadalix, built in 1986 for Iberdrola, and an off grid installation by 1988 also of 20kWp in the village of Noto Gouye Diama (Senegal) funded by the Spanish international aid and cooperation programs. Due to the reduced manufacturing cost, companies have again started to produce commercial bifacial modules since 2010. By 2017, there were at least eight certified PV manufacturers providing bifacial modules in North America. It has been predicted by the International Technology Roadmap for Photovoltaics (ITRPV) that the global market share of bifacial technology will expand from less than 5% in 2016 to 30% in 2027. Due to the significant interest in the bifacial technology, a recent study has investigated the performance and optimization of bifacial solar modules worldwide. The results indicate that, across the globe, ground-mounted bifacial modules can only offer ~10% gain in annual electricity yields compared to the monofacial counterparts for a ground albedo coefficient of 25% (typical for concrete and vegetation groundcovers). However, the gain can be increased to ~30% by elevating the module 1 m above the ground and enhancing the ground albedo coefficient to 50%. Sun et al. also derived a set of empirical equations that can optimize bifacial solar modules analytically. An online simulation tool is available to model the performance of bifacial modules in any arbitrary location across the entire world. It can also optimize bifacial modules as a function of tilt angle, azimuth angle, and elevation above the ground. Intermediate band photovoltaics in solar cell research provides methods for exceeding the Shockley–Queisser limit on the efficiency of a cell. It introduces an intermediate band (IB) energy level in between the valence and conduction bands. Theoretically, introducing an IB allows two photons with energy less than the bandgap to excite an electron from the valence band to the conduction band. This increases the induced photocurrent and thereby efficiency. Luque and Marti first derived a theoretical limit for an IB device with one midgap energy level using detailed balance. They assumed no carriers were collected at the IB and that the device was under full concentration. They found the maximum efficiency to be 63.2%, for a bandgap of 1.95eV with the IB 0.71eV from either the valence or conduction band. Under one sun illumination the limiting efficiency is 47%. Upconversion and downconversion Photon upconversion is the process of using two low-energy (e.g., infrared) photons to produce one higher energy photon; downconversion is the process of using one high energy photon (e.g.,, ultraviolet) to produce two lower energy photons. Either of these techniques could be used to produce higher efficiency solar cells by allowing solar photons to be more efficiently used. The difficulty, however, is that the conversion efficiency of existing phosphors exhibiting up- or down-conversion is low, and is typically narrow band. 
One upconversion technique is to incorporate lanthanide-doped materials (Er3+ or a combination of rare-earth ions), taking advantage of their luminescence to convert infrared radiation to visible light. The upconversion process occurs when two infrared photons are absorbed by rare-earth ions to generate one (high-energy) absorbable photon. For example, the energy transfer upconversion (ETU) process consists of successive transfer processes between excited ions in the near infrared. The upconverter material could be placed below the solar cell to absorb the infrared light that passes through the silicon. Useful ions are most commonly found in the trivalent state; Er3+ ions have been the most used. Er3+ ions absorb solar radiation around 1.54 µm, and two Er3+ ions that have absorbed this radiation can interact with each other through an upconversion process: the excited ion emits light above the Si bandgap, which is absorbed by the solar cell and creates an additional electron–hole pair that can generate current. So far, however, the efficiency increase has been small. In addition, fluoroindate glasses have low phonon energy and have been proposed as a suitable matrix when doped with Ho3+ ions.

Dye-sensitized solar cells

Dye-sensitized solar cells (DSSCs) are made of low-cost materials and do not need elaborate manufacturing equipment, so they can be made in a DIY fashion. Produced in bulk, they should be significantly less expensive than older solid-state cell designs. DSSCs can be engineered into flexible sheets, and although their conversion efficiency is less than that of the best thin-film cells, their price/performance ratio may be high enough to allow them to compete with fossil-fuel electrical generation. Typically a ruthenium metalorganic dye (Ru-centered) is used as a monolayer of light-absorbing material. The dye-sensitized solar cell depends on a mesoporous layer of nanoparticulate titanium dioxide to greatly amplify the surface area (200–300 m²/g TiO2, as compared to approximately 10 m²/g for a flat single crystal). The photogenerated electrons from the light-absorbing dye are passed on to the n-type TiO2, and the holes are absorbed by an electrolyte on the other side of the dye. The circuit is completed by a redox couple in the electrolyte, which can be liquid or solid. This type of cell allows more flexible use of materials and is typically manufactured by screen printing or with ultrasonic nozzles, with the potential for lower processing costs than those used for bulk solar cells. However, the dyes in these cells suffer from degradation under heat and UV light, and the cell casing is difficult to seal due to the solvents used in assembly. The first commercial shipment of DSSC solar modules occurred in July 2009 from G24i Innovations.

Quantum dot solar cells

Quantum dot solar cells (QDSCs) are based on the Gratzel cell, or dye-sensitized solar cell, architecture, but employ low-band-gap semiconductor nanoparticles fabricated with crystallite sizes small enough to form quantum dots (such as CdS, CdSe, Sb2S3, PbS, etc.) instead of organic or organometallic dyes as light absorbers. Due to the toxicity associated with Cd- and Pb-based compounds, a series of "green" QD sensitizing materials is also in development (such as CuInS2, CuInSe2 and CuInSeS). Size quantization allows the band gap of a QD to be tuned simply by changing the particle size. QDs also have high extinction coefficients and have shown the possibility of multiple exciton generation.
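The size-tunability of quantum-dot band gaps mentioned above is commonly estimated with the Brus effective-mass approximation. The following minimal sketch assumes textbook parameter values for CdSe (bulk gap 1.74 eV, electron and hole effective masses 0.13 and 0.45 m0, dielectric constant about 9.4); it illustrates the scaling only, is not drawn from the cited works, and the approximation is known to break down for very small dots.

```python
import math

# Minimal sketch of size-dependent quantum-dot band gaps via the Brus
# (effective-mass) approximation:
#   E(R) = E_bulk + (hbar^2 pi^2 / 2R^2)(1/me + 1/mh) - 1.8 e^2 / (4 pi eps0 eps_r R)
# Parameter values below are commonly quoted textbook figures for CdSe;
# real dots deviate at small radii where the approximation fails.

HBAR = 1.054571817e-34      # reduced Planck constant, J*s
M0 = 9.1093837015e-31       # free electron mass, kg
E_CHARGE = 1.602176634e-19  # elementary charge, C
EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m

def brus_gap_ev(radius_nm, e_bulk_ev=1.74, me=0.13, mh=0.45, eps_r=9.4):
    """Approximate band gap (eV) of a spherical quantum dot of given radius."""
    r = radius_nm * 1e-9
    confinement = (HBAR**2 * math.pi**2) / (2 * r**2) \
        * (1 / (me * M0) + 1 / (mh * M0))
    coulomb = 1.8 * E_CHARGE**2 / (4 * math.pi * EPS0 * eps_r * r)
    return e_bulk_ev + (confinement - coulomb) / E_CHARGE

for radius in (1.5, 2.0, 3.0, 5.0):
    print(f"R = {radius:.1f} nm -> gap ~ {brus_gap_ev(radius):.2f} eV")
```

Under these assumed parameters the model gives roughly 3.2 eV at R = 1.5 nm falling toward the 1.74 eV bulk value by R = 5 nm, which is the tunability the text describes.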
In a QDSC, a mesoporous layer of titanium dioxide nanoparticles forms the backbone of the cell, much like in a DSSC. This TiO2 layer can then be made photoactive by coating it with semiconductor quantum dots using chemical bath deposition, electrophoretic deposition or successive ionic layer adsorption and reaction. The electrical circuit is then completed through the use of a liquid or solid redox couple. The efficiency of QDSCs has increased to over 5% for both liquid-junction and solid-state cells, with a reported peak efficiency of 11.91%. In an effort to decrease production costs, the Prashant Kamat research group demonstrated a solar paint, made with TiO2 and CdSe, that can be applied in a one-step process to any conductive surface, with efficiencies over 1%. However, the absorption of quantum dots (QDs) in QDSCs is weak at room temperature. Plasmonic nanoparticles (e.g., nanostars) can be utilized to address this weak absorption; adding an external infrared pumping source to excite intraband and interband transitions of the QDs is another solution.

Organic/polymer solar cells

Organic solar cells and polymer solar cells are built from thin films (typically 100 nm) of organic semiconductors, including polymers such as polyphenylene vinylene and small-molecule compounds like copper phthalocyanine (a blue or green organic pigment) and carbon fullerenes and fullerene derivatives such as PCBM. They can be processed from liquid solution, offering the possibility of a simple roll-to-roll printing process and potentially leading to inexpensive, large-scale production. In addition, these cells could be beneficial for applications where mechanical flexibility and disposability are important. Current cell efficiencies are, however, very low, and practical devices are essentially non-existent. Energy conversion efficiencies achieved to date using conductive polymers are very low compared to inorganic materials, although Konarka Power Plastic reached an efficiency of 8.3%, and organic tandem cells reached 11.1% in 2012. The active region of an organic device consists of two materials, one electron donor and one electron acceptor. Unlike in most other solar cell types, when a photon is converted into an electron-hole pair, typically in the donor material, the charges tend to remain bound in the form of an exciton, separating only when the exciton diffuses to the donor-acceptor interface. The short exciton diffusion lengths of most polymer systems tend to limit the efficiency of such devices; nanostructured interfaces, sometimes in the form of bulk heterojunctions, can improve performance. In 2011, MIT and Michigan State researchers developed solar cells with a power efficiency close to 2% and a transparency to the human eye greater than 65%, achieved by selectively absorbing the ultraviolet and near-infrared parts of the spectrum with small-molecule compounds. Researchers at UCLA more recently developed an analogous polymer solar cell, following the same approach, that is 70% transparent and has a 4% power conversion efficiency. These lightweight, flexible cells can be produced in bulk at low cost and could be used to create power-generating windows. In 2013, researchers announced polymer cells with some 3% efficiency. They used block copolymers, self-assembling organic materials that arrange themselves into distinct layers; the research focused on P3HT-b-PFTBT, which separates into bands some 16 nanometers wide.
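The exciton bottleneck described above can be made concrete with a toy one-dimensional model. Assuming uniform exciton generation in a donor layer of thickness d, a perfectly quenching donor-acceptor interface on one side and a blocking contact on the other, steady-state diffusion gives a collection efficiency of (L/d)·tanh(d/L) for diffusion length L. This is a standard textbook idealization rather than a simulation of any cited device, but it shows why diffusion lengths of order 10 nm push designs toward nanostructured, bulk-heterojunction morphologies.

```python
import math

# Toy 1D exciton-harvesting model for a planar donor layer of thickness d:
# excitons are generated uniformly, quenched (collected) at the
# donor-acceptor interface at x = 0, and blocked at x = d. Steady-state
# diffusion then yields a collection efficiency of (L/d) * tanh(d/L) for
# diffusion length L. Illustrative only; real devices add optical
# interference, non-ideal interfaces, and blended morphologies.

def exciton_collection(d_nm, L_nm):
    """Fraction of excitons reaching the interface in the toy 1D model."""
    ratio = L_nm / d_nm
    return ratio * math.tanh(1.0 / ratio)

for d in (10, 20, 50, 100):  # donor layer thickness in nm
    eff = exciton_collection(d, L_nm=10)
    print(f"d = {d:3d} nm, L = 10 nm -> collection ~ {eff:.2f}")
```

For d = 100 nm and L = 10 nm the model predicts only about 10% harvesting, which is the motivation for the bulk-heterojunction approach mentioned above.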
Adaptive cells

Adaptive cells change their absorption/reflection characteristics in response to environmental conditions: an adaptive material responds to the intensity and angle of incident light. At the part of the cell where the light is most intense, the cell surface changes from reflective to adaptive, allowing the light to penetrate the cell. The other parts of the cell remain reflective, increasing the retention of the absorbed light within the cell. In 2014, a system was developed that combined an adaptive surface with a glass substrate that redirects the absorbed light to a light absorber at the edges of the sheet. The system also includes an array of fixed lenses/mirrors to concentrate light onto the adaptive surface. As the day continues, the concentrated light moves along the surface of the cell; that surface switches from reflective to adaptive when the light is most concentrated, and back to reflective after the light moves along.

Surface texturing

In recent years, researchers have been trying to reduce the price of solar cells while maximizing efficiency. The thin-film solar cell is a cost-effective second-generation design with much-reduced thickness, at the expense of light absorption efficiency, and efforts have been made to maximize light absorption at reduced thickness. Surface texturing is one of the techniques used to reduce optical losses and maximize the light absorbed; texturing techniques on silicon photovoltaics are currently drawing much attention. Surface texturing can be done in multiple ways. Etching a single-crystalline silicon substrate with anisotropic etchants can produce randomly distributed, square-based pyramids on the surface, and recent studies show that c-Si wafers can be etched down to form nano-scale inverted pyramids. Multicrystalline silicon solar cells, due to poorer crystallographic quality, are less effective than single-crystal solar cells, but mc-Si solar cells are still widely used because they present fewer manufacturing difficulties. It is reported that multicrystalline solar cells can be surface-textured, through isotropic etching or photolithography techniques, to yield a solar energy conversion efficiency comparable to that of monocrystalline silicon cells. Unlike rays incident on a flat surface, light rays incident on a textured surface do not simply reflect back into the air: some are bounced back onto the surface again by its geometry, giving them another chance to be absorbed. This process significantly improves the light-to-electricity conversion efficiency, owing to increased light absorption. This texture effect, as well as its interaction with other interfaces in the PV module, is a challenging optical simulation task; a particularly efficient method for modeling and optimization is the OPTOS formalism. In 2012, researchers at MIT reported that c-Si films textured with nanoscale inverted pyramids could achieve light absorption comparable to that of planar c-Si 30 times thicker. In combination with an anti-reflective coating, the surface texturing technique can effectively trap light rays within a thin-film silicon solar cell; consequently, the thickness required for solar cells decreases as the absorption of light rays increases.
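A rough sense of why texturing permits such large thickness reductions comes from the classical Lambertian light-trapping limit, in which random scattering raises the mean optical path length by a factor of about 4n². The sketch below compares single-pass absorption, 1 − exp(−αd), with the crude trapped-light estimate 1 − exp(−4n²αd), using approximate values for crystalline silicon near 1000 nm (α ≈ 64 cm⁻¹, n ≈ 3.6). It is a back-of-the-envelope illustration of the scaling, not a reconstruction of the MIT result quoted above.

```python
import math

# Crude comparison of single-pass vs. Lambertian light-trapping absorption
# in a silicon film, using the Yablonovitch 4*n^2 path-length enhancement:
#   A_planar   = 1 - exp(-alpha * d)
#   A_textured ~ 1 - exp(-4 * n^2 * alpha * d)
# alpha and n are rough values for c-Si near 1000 nm; this illustrates the
# scaling only and is not a rigorous optical simulation.

ALPHA_PER_CM = 64.0  # absorption coefficient of c-Si near 1000 nm (approx.)
N_SI = 3.6           # refractive index of c-Si near 1000 nm (approx.)

def absorb_planar(d_um):
    return 1 - math.exp(-ALPHA_PER_CM * d_um * 1e-4)

def absorb_trapped(d_um):
    return 1 - math.exp(-4 * N_SI**2 * ALPHA_PER_CM * d_um * 1e-4)

for d in (2.0, 10.0, 60.0):  # film thickness in micrometers
    print(f"d = {d:5.1f} um: planar {absorb_planar(d):5.1%}, "
          f"textured (4n^2 limit) {absorb_trapped(d):5.1%}")
```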
Encapsulation

Solar cells are commonly encapsulated in a transparent polymeric resin to protect the delicate cell regions from coming into contact with moisture, dirt, ice, and other conditions expected either during operation or when used outdoors. The encapsulants are commonly made from polyvinyl acetate or glass. Most encapsulants are uniform in structure and composition, which increases light collection owing to light trapping from total internal reflection of light within the resin. Research has been conducted into structuring the encapsulant to provide further collection of light. Such encapsulants have included roughened glass surfaces, diffractive elements, prism arrays, air prisms, v-grooves, diffuse elements, and multi-directional waveguide arrays. Prism arrays show an overall 5% increase in total solar energy conversion. Arrays of vertically aligned broadband waveguides provide a 10% increase at normal incidence, as well as wide-angle collection enhancement of up to 4%, with optimized structures yielding up to a 20% increase in short-circuit current. Active coatings that convert infrared light into visible light have shown a 30% increase, and nanoparticle coatings inducing plasmonic light scattering increase wide-angle conversion efficiency by up to 3%. Optical structures have also been created in encapsulation materials to effectively "cloak" the metallic front contacts.

Manufacture

Solar cells share some of the same processing and manufacturing techniques as other semiconductor devices. However, the stringent requirements for cleanliness and quality control of semiconductor fabrication are more relaxed for solar cells, lowering costs. Polycrystalline silicon wafers are made by wire-sawing block-cast silicon ingots into 180- to 350-micrometer wafers. The wafers are usually lightly p-type doped. A surface diffusion of n-type dopants is performed on the front side of the wafer, forming a p–n junction a few hundred nanometers below the surface. Anti-reflection coatings are then typically applied to increase the amount of light coupled into the solar cell. Silicon nitride has gradually replaced titanium dioxide as the preferred material because of its excellent surface passivation qualities, which prevent carrier recombination at the cell surface; a layer several hundred nanometers thick is applied using PECVD. Some solar cells have textured front surfaces that, like anti-reflection coatings, increase the amount of light reaching the wafer. Such surfaces were first applied to single-crystal silicon, followed somewhat later by multicrystalline silicon. A full-area metal contact is made on the back surface, and a grid-like metal contact made up of fine "fingers" and larger "bus bars" is screen-printed onto the front surface using a silver paste. This is an evolution of the so-called "wet" process for applying electrodes, first described in a US patent filed in 1981 by Bayer AG. The rear contact is formed by screen-printing a metal paste, typically aluminium; usually this contact covers the entire rear, though some designs employ a grid pattern. The paste is then fired at several hundred degrees Celsius to form metal electrodes in ohmic contact with the silicon. Some companies use an additional electro-plating step to increase efficiency. After the metal contacts are made, the solar cells are interconnected by flat wires or metal ribbons and assembled into modules or "solar panels". Solar panels have a sheet of tempered glass on the front and a polymer encapsulation on the back.
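One step in this sequence lends itself to a quick sanity check: the anti-reflection coating. Elementary thin-film optics gives an ideal single-layer coating index of √(n_air·n_Si) and a quarter-wave thickness of λ/(4n_c). The sketch below uses rough, dispersion-free indices near 600 nm; real production layers are silicon nitride films that also serve passivation duty and may differ from this simple optimum.

```python
import math

# Back-of-the-envelope single-layer anti-reflection coating (ARC) design
# from standard thin-film optics: an ideal coating has index
# n_c = sqrt(n_air * n_Si) and quarter-wave thickness t = wavelength / (4 n_c).
# The indices are rough, dispersion-free values near 600 nm.

N_AIR, N_SI = 1.0, 3.9   # approximate refractive indices near 600 nm
WAVELENGTH_NM = 600.0    # design wavelength

n_c = math.sqrt(N_AIR * N_SI)
thickness_nm = WAVELENGTH_NM / (4 * n_c)
r_bare = ((N_SI - N_AIR) / (N_SI + N_AIR)) ** 2  # normal-incidence reflectance

print(f"Bare Si reflectance at normal incidence: {r_bare:.1%}")
print(f"Ideal single-layer ARC index: {n_c:.2f} (silicon nitride, n ~ 2.0, is close)")
print(f"Quarter-wave ARC thickness at {WAVELENGTH_NM:.0f} nm: {thickness_nm:.0f} nm")
```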
Manufacturers and certification

Solar cells are manufactured in volume in Japan, Germany, China, Taiwan, Malaysia and the United States, whereas Europe, China, the U.S., and Japan have dominated (with 94% or more as of 2013) in installed systems. Other nations are acquiring significant solar cell production capacity. Global PV cell/module production increased by 10% in 2012, despite a 9% decline in solar energy investments, according to the annual "PV Status Report" released by the European Commission's Joint Research Centre, and between 2009 and 2013 cell production quadrupled. Due to heavy government investment, China has become the dominant force in solar cell manufacturing: Chinese companies produced solar cells/modules with a capacity of ~23 GW in 2013 (60% of global production).

See also

- Anomalous photovoltaic effect
- Autonomous building
- Black silicon
- Energy development
- Electromotive force (Solar cell)
- Flexible substrate
- Green technology
- Inkjet solar cell
- List of photovoltaics companies
- List of types of solar cells
- Maximum power point tracking
- Metallurgical grade silicon
- P–n junction
- Plasmonic solar cell
- Printed electronics
- Quantum efficiency
- Renewable energy
- Roll-to-roll processing
- Shockley-Queisser limit
- Solar Energy Materials and Solar Cells (journal)
- Solar module quality assurance
- Solar roof
- Solar shingles
- Solar tracker
- Solar panel
- Theory of solar cells

References

- Solar Cells. chemistryexplained.com
- "Solar cells -- performance and use". solarbotics.net.
- "Technology Roadmap: Solar Photovoltaic Energy" (PDF). IEA. 2014. Archived (PDF) from the original on 7 October 2014. Retrieved 7 October 2014.
- "Photovoltaic System Pricing Trends – Historical, Recent, and Near-Term Projections, 2014 Edition" (PDF). NREL. 22 September 2014. p. 4. Archived (PDF) from the original on 29 March 2015.
- Gevorkian, Peter (2007). Sustainable energy systems engineering: the complete green building design resource. McGraw Hill Professional. ISBN 978-0-07-147359-0.
- "The Nobel Prize in Physics 1921: Albert Einstein", Nobel Prize official page
- Lashkaryov, V. E. (1941). Investigation of a barrier layer by the thermoprobe method. Archived 28 September 2015 at the Wayback Machine. Izv. Akad. Nauk SSSR, Ser. Fiz. 5, 442–446; English translation: Ukr. J. Phys. 53, 53–56 (2008)
- "Light sensitive device". U.S. Patent 2,402,662. Issue date: June 1946
- "April 25, 1954: Bell Labs Demonstrates the First Practical Silicon Solar Cell". APS News. American Physical Society. 18 (4). April 2009.
- Tsokos, K. A. (28 January 2010). Physics for the IB Diploma Full Colour. Cambridge University Press. ISBN 978-0-521-13821-5.
- Perlin 1999, p. 50.
- Perlin 1999, p. 53.
- Williams, Neville (2005). Chasing the Sun: Solar Adventures Around the World. New Society Publishers. p. 84. ISBN 9781550923124.
- Jones, Geoffrey; Bouamane, Loubna (2012). "Power from Sunshine": A Business History of Solar Energy (PDF). Harvard Business School. pp. 22–23.
- Perlin 1999, p. 54.
- The National Science Foundation: A Brief History, Chapter IV, NSF 88-16, 15 July 1994 (retrieved 20 June 2015)
- Herwig, Lloyd O. (1999). "Cherry Hill revisited: Background events and photovoltaic technology status". AIP Conference Proceedings. National Center for Photovoltaics (NCPV) 15th Program Review Meeting. AIP Conference Proceedings. 462. p. 785. Bibcode:1999AIPC..462..785H. doi:10.1063/1.58015.
- Deyo, J. N.; Brandhorst, H. W., Jr.; Forestieri, A. F. Status of the ERDA/NASA photovoltaic tests and applications project. 12th IEEE Photovoltaic Specialists Conf., 15–18 Nov. 1976
- Reed Business Information (18 October 1979). The multinational connections-who does what where. Reed Business Information. ISSN 0262-4079.
- Buhayar, Noah (28 January 2016). Warren Buffett controls Nevada's legacy utility. Elon Musk is behind the solar company that's upending the market. Let the fun begin. Bloomberg Businessweek
- "Sunny Uplands: Alternative energy will no longer be alternative". The Economist. 21 November 2012. Retrieved 28 December 2012.
- $1/W Photovoltaic Systems. DOE whitepaper, August 2010
- Solar Stocks: Does the Punishment Fit the Crime?. 24/7 Wall St. (6 October 2011). Retrieved 3 January 2012.
- Parkinson, Giles. "Plunging Cost Of Solar PV (Graphs)". Clean Technica. Retrieved 18 May 2013.
- "Snapshot of Global PV 1992–2014" (PDF). International Energy Agency — Photovoltaic Power Systems Programme. 30 March 2015. Archived from the original on 30 March 2015.
- "Solar energy - Renewable energy - Statistical Review of World Energy - Energy economics - BP". bp.com.
- Yu, Peng; Wu, Jiang; Liu, Shenting; Xiong, Jie; Jagadish, Chennupati; Wang, Zhiming M. (2016-12-01). "Design and fabrication of silicon nanowires towards efficient solar cells". Nano Today. 11 (6): 704–737. doi:10.1016/j.nantod.2016.10.001.
- Mann, Sander A.; de Wild-Scholten, Mariska J.; Fthenakis, Vasilis M.; van Sark, Wilfried G.J.H.M.; Sinke, Wim C. (2014-11-01). "The energy payback time of advanced crystalline silicon PV modules in 2020: a prospective study". Progress in Photovoltaics: Research and Applications. 22 (11): 1180–1194. doi:10.1002/pip.2363. ISSN 1099-159X.
- "BP Global – Reports and publications – Going for grid parity". Bp.com. Archived from the original on 8 June 2011. Retrieved 4 August 2012.
- BP Global – Reports and publications – Gaining on the grid. Bp.com. August 2007.
- The Path to Grid Parity. bp.com
- Peacock, Matt (20 June 2012). Solar industry celebrates grid parity. ABC News.
- Baldwin, Sam (20 April 2011). Energy Efficiency & Renewable Energy: Challenges and Opportunities. Clean Energy SuperCluster Expo, Colorado State University. U.S. Department of Energy.
- ENF Ltd. (8 January 2013). "Small Chinese Solar Manufacturers Decimated in 2012 | Solar PV Business News | ENF Company Directory". Enfsolar.com. Retrieved 1 June 2013.
- "What is a solar panel and how does it work?". Energuide.be. Sibelga. Retrieved 3 January 2017.
- Martin, Chris (30 December 2016). "Solar Panels Now So Cheap Manufacturers Probably Selling at Loss". Bloomberg View. Bloomberg LP. Retrieved 3 January 2017.
- Shankleman, Jessica; Martin, Chris (3 January 2017). "Solar Could Beat Coal to Become the Cheapest Power on Earth". Bloomberg View. Bloomberg LP. Retrieved 3 January 2017.
- Kumar, Ankush (2017-01-03). "Predicting efficiency of solar cells based on transparent conducting electrodes". Journal of Applied Physics. 121 (1): 014502. Bibcode:2017JAP...121a4502K. doi:10.1063/1.4973117. ISSN 0021-8979.
- "Solar Cell Efficiency | PVEducation". www.pveducation.org. Retrieved 2018-01-31.
- "T.Bazouni: What is the Fill Factor of a Solar Panel". Archived from the original on 15 April 2009. Retrieved 17 February 2009.
- Rühle, Sven (2016-02-08). "Tabulated Values of the Shockley-Queisser Limit for Single Junction Solar Cells". Solar Energy. 130: 139–147. Bibcode:2016SoEn..130..139R. doi:10.1016/j.solener.2016.02.015.
- Vos, A. D. (1980). "Detailed balance limit of the efficiency of tandem solar cells". Journal of Physics D: Applied Physics. 13 (5): 839. Bibcode:1980JPhD...13..839D. doi:10.1088/0022-3727/13/5/018.
- Bullis, Kevin (13 June 2014). Record-Breaking Solar Cell Points the Way to Cheaper Power. MIT Technology Review
- Dimroth, Frank; Tibbits, Thomas N.D.; Niemeyer, Markus; Predan, Felix; Beutel, Paul; Karcher, Christian; Oliva, Eduard; Siefer, Gerald; Lackner, David; et al. (2016). "Four-Junction Wafer Bonded Concentrator Solar Cells". IEEE Journal of Photovoltaics. 6 (1): 343–349. doi:10.1109/jphotov.2015.2501729.
- Janz, Stefan; Reber, Stefan (14 September 2015). "20% Efficient Solar Cell on EpiWafer". Fraunhofer ISE. Retrieved 15 October 2015.
- Drießen, Marion; Amiri, Diana; Milenkovic, Nena; Steinhauser, Bernd; Lindekugel, Stefan; Benick, Jan; Reber, Stefan; Janz, Stefan (2016). "Solar Cells with 20% Efficiency and Lifetime Evaluation of Epitaxial Wafers". Energy Procedia. 92: 785–790. doi:10.1016/j.egypro.2016.07.069. ISSN 1876-6102.
- Zyg, Lisa (4 June 2015). "Solar cell sets world record with a stabilized efficiency of 13.6%". Phys.org.
- 30.2 Percent Efficiency – New Record for Silicon-based Multi-junction Solar Cell — Fraunhofer ISE. Ise.fraunhofer.de (2016-11-09). Retrieved 2016-11-15.
- Essig, Stephanie; Allebé, Christophe; Remo, Timothy; Geisz, John F.; Steiner, Myles A.; Horowitz, Kelsey; Barraud, Loris; Ward, J. Scott; Schnabel, Manuel (September 2017). "Raising the one-sun conversion efficiency of III–V/Si solar cells to 32.8% for two junctions and 35.9% for three junctions". Nature Energy. 2 (9): 17144. Bibcode:2017NatEn...217144E. doi:10.1038/nenergy.2017.144. ISSN 2058-7546.
- Gaucher, Alexandre; Cattoni, Andrea; Dupuis, Christophe; Chen, Wanghua; Cariou, Romain; Foldyna, Martin; Lalouat, Loïc; Drouard, Emmanuel; Seassal, Christian; Roca i Cabarrocas, Pere; Collin, Stéphane (2016). "Ultrathin Epitaxial Silicon Solar Cells with Inverted Nanopyramid Arrays for Efficient Light Trapping". Nano Letters. 16 (9): 5358–64. Bibcode:2016NanoL..16.5358G. doi:10.1021/acs.nanolett.6b01240. PMID 27525513.
- Chen, Wanghua; Cariou, Romain; Foldyna, Martin; Depauw, Valerie; Trompoukis, Christos; Drouard, Emmanuel; Lalouat, Loic; Harouri, Abdelmounaim; Liu, Jia; Fave, Alain; Orobtchouk, Régis; Mandorlo, Fabien; Seassal, Christian; Massiot, Inès; Dmitriev, Alexandre; Lee, Ki-Dong; Cabarrocas, Pere Roca i (2016). "Nanophotonics-based low-temperature PECVD epitaxial crystalline silicon solar cells". Journal of Physics D: Applied Physics. 49 (12): 125603. Bibcode:2016JPhD...49l5603C. doi:10.1088/0022-3727/49/12/125603. ISSN 0022-3727.
- Kobayashi, Eiji; Watabe, Yoshimi; Hao, Ruiying; Ravi, T. S. (2015). "High efficiency heterojunction solar cells on n-type kerfless mono crystalline silicon wafers by epitaxial growth". Applied Physics Letters. 106 (22): 223504. Bibcode:2015ApPhL.106v3504K. doi:10.1063/1.4922196. ISSN 0003-6951.
- Kim, D.S.; et al. (18 May 2003). String ribbon silicon solar cells with 17.8% efficiency (PDF). Proceedings of 3rd World Conference on Photovoltaic Energy Conversion, 2003. 2. pp. 1293–1296. ISBN 978-4-9901816-0-4.
- Wayne McMillan, "The Cast Mono Dilemma". Archived 5 November 2013 at the Wayback Machine. BT Imaging
- Pearce, J.; Lau, A. (2002). "Net Energy Analysis for Sustainable Energy Production from Silicon Based Solar Cells" (PDF). Solar Energy. p. 181. doi:10.1115/SED2002-1051. ISBN 978-0-7918-1689-9. Archived from the original (PDF) on 2010-06-22.
- Edoff, Marika (March 2012). "Thin Film Solar Cells: Research in an Industrial Perspective". AMBIO. 41 (2): 112–118. doi:10.1007/s13280-012-0265-6. ISSN 0044-7447. PMC 3357764. PMID 22434436.
- Fthenakis, Vasilis M. (2004). "Life cycle impact analysis of cadmium in CdTe PV production" (PDF). Renewable and Sustainable Energy Reviews. 8 (4): 303–334. doi:10.1016/j.rser.2003.12.001.
- "IBM and Tokyo Ohka Kogyo Turn Up Watts on Solar Energy Production", IBM
- Collins, R. W.; Ferlauto, A. S.; Ferreira, G. M.; Chen, C.; Koh, J.; Koval, R. J.; Lee, Y.; Pearce, J. M.; Wronski, C. R. (2003). "Evolution of microstructure and phase in amorphous, protocrystalline, and microcrystalline silicon studied by real time spectroscopic ellipsometry". Solar Energy Materials and Solar Cells. 78 (1–4): 143. doi:10.1016/S0927-0248(02)00436-1.
- Pearce, J. M.; Podraza, N.; Collins, R. W.; Al-Jassim, M. M.; Jones, K. M.; Deng, J.; Wronski, C. R. (2007). "Optimization of open circuit voltage in amorphous silicon solar cells with mixed-phase (amorphous+nanocrystalline) p-type contacts of low nanocrystalline content" (PDF). Journal of Applied Physics. 101 (11): 114301–114301–7. Bibcode:2007JAP...101k4301P. doi:10.1063/1.2714507. Archived from the original (PDF) on 13 June 2009.
- Yablonovitch, Eli; Miller, Owen D.; Kurtz, S. R. (2012). "The opto-electronic physics that broke the efficiency limit in solar cells". 2012 38th IEEE Photovoltaic Specialists Conference. p. 001556. doi:10.1109/PVSC.2012.6317891. ISBN 978-1-4673-0066-7.
- "Photovoltaics Report" (PDF). Fraunhofer ISE. 28 July 2014. Archived (PDF) from the original on 31 August 2014. Retrieved 31 August 2014.
- Oku, Takeo; Kumada, Kazuma; Suzuki, Atsushi; Kikuchi, Kenji (June 2012). "Effects of germanium addition to copper phthalocyanine/fullerene-based solar cells". Central European Journal of Engineering. 2 (2): 248–252. Bibcode:2012CEJE....2..248O. doi:10.2478/s13531-011-0069-7.
- Triple-Junction Terrestrial Concentrator Solar Cells (PDF). Retrieved 3 January 2012.
- Clarke, Chris (19 April 2011). San Jose Solar Company Breaks Efficiency Record for PV. Optics.org. Retrieved 19 January 2011.
- Cariou, Romain; Chen, Wanghua; Maurice, Jean-Luc; Yu, Jingwen; Patriarche, Gilles; Mauguin, Olivia; Largeau, Ludovic; Decobert, Jean; Roca i Cabarrocas, Pere (2016). "Low temperature plasma enhanced CVD epitaxial growth of silicon on GaAs: a new paradigm for III-V/Si integration". Scientific Reports. 6: 25674. Bibcode:2016NatSR...625674C. doi:10.1038/srep25674. ISSN 2045-2322. PMC 4863370. PMID 27166163.
- "NREL efficiency chart". Archived from the original on 22 January 2016.
- Kosasih, Felix Utama; Ducati, Caterina (May 2018). "Characterising degradation of perovskite solar cells through in-situ and operando electron microscopy". Nano Energy. 47: 243–256. doi:10.1016/j.nanoen.2018.02.055.
- "Radiation energy transducing device". Mori Hiroshi, Hayakawa Denki Kogyo KK. 1961-10-03.
- ES 453575 (A1). A. Luque: "Procedimiento para obtener células solares bifaciales". Filing date 05.05.1977
- US 4169738 (A). A. Luque: "Double-sided solar cell with self-refrigerating concentrator". Filing date 21.11.1977
- Luque, A.; Cuevas, A.; Eguren, J. (1978). "Solar-Cell Behavior under Variable Surface Recombination Velocity and Proposal of a Novel Structure". Solid-State Electronics. 21 (5): 793–794. Bibcode:1978SSEle..21..793L. doi:10.1016/0038-1101(78)90014-X.
- Cuevas, A.; Luque, A.; Eguren, J.; Alamo, J. del (1982). "50 Per cent more output power from an albedo-collecting flat panel using bifacial solar cells". Solar Energy. 29 (5): 419–420. Bibcode:1982SoEn...29..419C. doi:10.1016/0038-092x(82)90078-0.
- "International Technology Roadmap for Photovoltaic (ITRPV) - Home". www.itrpv.net. Retrieved 2018-02-20.
- Sun, Xingshu; Khan, Mohammad Ryyan; Deline, Chris; Alam, Muhammad Ashraful (2018). "Optimization and performance of bifacial solar modules: A global perspective". Applied Energy. 212: 1601–1610. doi:10.1016/j.apenergy.2017.12.041.
- Khan, M. Ryyan; Hanna, Amir; Sun, Xingshu; Alam, Muhammad A. (2017). "Vertical bifacial solar farms: Physics, design, and global optimization". Applied Energy. 206: 240–248. doi:10.1016/j.apenergy.2017.08.042.
- Zhao, Binglin; Sun, Xingshu; Khan, Mohammad Ryyan; Alam, Muhammad Ashraful (2018-02-19). "Purdue Bifacial Module Calculator". doi:10.4231/d3542jb3c.
- Luque, Antonio; Martí, Antonio (1997). "Increasing the Efficiency of Ideal Solar Cells by Photon Induced Transitions at Intermediate Levels". Physical Review Letters. 78 (26): 5014–5017.
- Okada, Yoshitaka; Sogabe, Tomah; Shoji, Yasushi (2014). "Chapter 13: Intermediate Band Solar Cells". In Nozik, Arthur J.; Conibeer, Gavin; Beard, Matthew C. (eds.). Advanced Concepts in Photovoltaics. RSC Energy and Environment Series, No. 11. Cambridge, UK: Royal Society of Chemistry. pp. 425–454.
- Researchers use liquid inks to create better solar cells. Phys.org, 17 September 2014, Shaun Mason
- Hernández-Rodríguez, M.A.; Imanieh, M.H.; Martín, L.L.; Martín, I.R. (September 2013). "Experimental enhancement of the photocurrent in a solar cell using upconversion process in fluoroindate glasses exciting at 1480nm". Solar Energy Materials and Solar Cells. 116: 171–175. doi:10.1016/j.solmat.2013.04.023.
- Dye Sensitized Solar Cells. G24i.com (2 April 2014). Retrieved 20 April 2014.
- Sharma, Darshan; Jha, Ranjana; Kumar, Shiv (2016-10-01). "Quantum dot sensitized solar cell: Recent advances and future perspectives in photoanode". Solar Energy Materials and Solar Cells. 155: 294–322. doi:10.1016/j.solmat.2016.05.062. ISSN 0927-0248.
- Semonin, O. E.; Luther, J. M.; Choi, S.; Chen, H.-Y.; Gao, J.; Nozik, A. J.; Beard, M. C. (2011). "Peak External Photocurrent Quantum Efficiency Exceeding 100% via MEG in a Quantum Dot Solar Cell". Science. 334 (6062): 1530–3. Bibcode:2011Sci...334.1530S. doi:10.1126/science.1209845. PMID 22174246.
- Kamat, Prashant V. (2012). "Boosting the Efficiency of Quantum Dot Sensitized Solar Cells through Modulation of Interfacial Charge Transfer". Accounts of Chemical Research. 45 (11): 1906–15. doi:10.1021/ar200315d. PMID 22493938.
- Santra, Pralay K.; Kamat, Prashant V. (2012). "Mn-Doped Quantum Dot Sensitized Solar Cells: A Strategy to Boost Efficiency over 5%". Journal of the American Chemical Society. 134 (5): 2508–11. doi:10.1021/ja211224s. PMID 22280479.
- Moon, Soo-Jin; Itzhaik, Yafit; Yum, Jun-Ho; Zakeeruddin, Shaik M.; Hodes, Gary; Grätzel, Michael (2010). "Sb2S3-Based Mesoscopic Solar Cell using an Organic Hole Conductor". The Journal of Physical Chemistry Letters. 1 (10): 1524. doi:10.1021/jz100308q.
- Du, Jun; Du, Zhonglin; Hu, Jin-Song; Pan, Zhenxiao; Shen, Qing; Sun, Jiankun; Long, Donghui; Dong, Hui; Sun, Litao; Zhong, Xinhua; Wan, Li-Jun (2016). "Zn–Cu–In–Se Quantum Dot Solar Cells with a Certified Power Conversion Efficiency of 11.6%". Journal of the American Chemical Society. 138 (12): 4201–4209. doi:10.1021/jacs.6b00615.
- Solar Cell Research || The Prashant Kamat lab at the University of Notre Dame. Nd.edu (22 February 2007). Retrieved 17 May 2012.
- Genovese, Matthew P.; Lightcap, Ian V.; Kamat, Prashant V. (2012). "Sun-Believable Solar Paint. A Transformative One-Step Approach for Designing Nanocrystalline Solar Cells". ACS Nano. 6 (1): 865–72. doi:10.1021/nn204381g. PMID 22147684.
- Yu, Peng; Wu, Jiang; Gao, Lei; Liu, Huiyun; Wang, Zhiming (2017-03-01). "InGaAs and GaAs quantum dot solar cells grown by droplet epitaxy". Solar Energy Materials and Solar Cells. 161: 377–381. doi:10.1016/j.solmat.2016.12.024.
- Wu, Jiang; Yu, Peng; Susha, Andrei S.; Sablon, Kimberly A.; Chen, Haiyuan; Zhou, Zhihua; Li, Handong; Ji, Haining; Niu, Xiaobin (2015-04-01). "Broadband efficiency enhancement in quantum dot solar cells coupled with multispiked plasmonic nanostars". Nano Energy. 13: 827–835. doi:10.1016/j.nanoen.2015.02.012.
- Konarka Power Plastic reaches 8.3% efficiency. pv-tech.org. Retrieved 7 May 2011.
- Mayer, A.; Scully, S.; Hardin, B.; Rowell, M.; McGehee, M. (2007). "Polymer-based solar cells". Materials Today. 10 (11): 28. doi:10.1016/S1369-7021(07)70276-6.
- Lunt, R. R.; Bulovic, V. (2011). "Transparent, near-infrared organic photovoltaic solar cells for window and energy-scavenging applications". Applied Physics Letters. 98 (11): 113305. Bibcode:2011ApPhL..98k3305L. doi:10.1063/1.3567516.
- Rudolf, John Collins (20 April 2011). "Transparent Photovoltaic Cells Turn Windows Into Solar Panels". green.blogs.nytimes.com.
- "UCLA Scientists Develop Transparent Solar Cell". Enviro-News.com. 24 July 2012. Archived from the original on 27 July 2012.
- Lunt, R. R.; Osedach, T. P.; Brown, P. R.; Rowehl, J. A.; Bulović, V. (2011). "Practical Roadmap and Limits to Nanostructured Photovoltaics". Advanced Materials. 23 (48): 5712–27. doi:10.1002/adma.201103404. PMID 22057647.
- Lunt, R. R. (2012). "Theoretical limits for visibly transparent photovoltaics". Applied Physics Letters. 101 (4): 043902. Bibcode:2012ApPhL.101d3902L. doi:10.1063/1.4738896.
- Guo, C.; Lin, Y. H.; Witman, M. D.; Smith, K. A.; Wang, C.; Hexemer, A.; Strzalka, J.; Gomez, E. D.; Verduzco, R. (2013). "Conjugated Block Copolymer Photovoltaics with near 3% Efficiency through Microphase Separation". Nano Letters. 13 (6): 2957–63. Bibcode:2013NanoL..13.2957G. doi:10.1021/nl401420s. PMID 23687903.
- "Organic polymers create new class of solar energy devices". Kurzweil Accelerating Institute. 31 May 2013. Retrieved 1 June 2013.
- Bullis, Kevin (30 July 2014). Adaptive Material Could Cut the Cost of Solar in Half. MIT Technology Review
- Campbell, Patrick; Green, Martin A. (Feb 1987). "Light Trapping Properties of Pyramidally textured surfaces". Journal of Applied Physics. 62 (1): 243–249. Bibcode:1987JAP....62..243C. doi:10.1063/1.339189.
- Zhao, Jianhua; Wang, Aihua; Green, Martin A. (May 1998). "19.8% efficient "honeycomb" textured multicrystalline and 24.4% monocrystalline silicon solar cells". Applied Physics Letters. 73 (14): 1991–1993. Bibcode:1998ApPhL..73.1991Z. doi:10.1063/1.122345.
- Hauser, H.; Michl, B.; Kubler, V.; Schwarzkopf, S.; Muller, C.; Hermle, M.; Blasi, B. (2011). "Nanoimprint Lithography for Honeycomb Texturing of Multicrystalline Silicon". Energy Procedia. 8: 648–653. doi:10.1016/j.egypro.2011.06.196.
- Tucher, Nico; Eisenlohr, Johannes; Gebrewold, Habtamu; Kiefel, Peter; Höhn, Oliver; Hauser, Hubert; Goldschmidt, Jan Christoph; Bläsi, Benedikt (2016-07-11). "Optical simulation of photovoltaic modules with multiple textured interfaces using the matrix-based formalism OPTOS". Optics Express. 24 (14): A1083–A1093. Bibcode:2016OExpr..24A1083T. doi:10.1364/OE.24.0A1083. PMID 27410896.
- Mavrokefalos, Anastassios; Han, Sang Eon; Yerci, Selcuk; Branham, M.S.; Chen, Gang (June 2012). "Efficient Light Trapping in Inverted Nanopyramid Thin Crystalline Silicon Membranes for Solar Cell Applications". Nano Letters. 12 (6): 2792–2796. Bibcode:2012NanoL..12.2792M. doi:10.1021/nl2045777. PMID 22612694.
- Jaus, J.; Pantsar, H.; Eckert, J.; Duell, M.; Herfurth, H.; Doble, D. (2010). "Light management for reduction of bus bar and gridline shadowing in photovoltaic modules". 2010 35th IEEE Photovoltaic Specialists Conference. p. 000979. doi:10.1109/PVSC.2010.5614568. ISBN 978-1-4244-5890-5.
- Mingareev, I.; Berlich, R.; Eichelkraut, T. J.; Herfurth, H.; Heinemann, S.; Richardson, M. C. (2011-06-06). "Diffractive optical elements utilized for efficiency enhancement of photovoltaic modules". Optics Express. 19 (12): 11397–404. Bibcode:2011OExpr..1911397M. doi:10.1364/OE.19.011397. PMID 21716370.
- Uematsu, T; Yazawa, Y; Miyamura, Y; Muramatsu, S; Ohtsuka, H; Tsutsui, K; Warabisako, T (2001-03-01). "Static concentrator photovoltaic module with prism array". Solar Energy Materials and Solar Cells. PVSEC 11 - PART III. 67 (1–4): 415–423. doi:10.1016/S0927-0248(00)00310-X.
- Chen, Fu-hao; Pathreeker, Shreyas; Kaur, Jaspreet; Hosein, Ian D. (2016-10-31). "Increasing light capture in silicon solar cells with encapsulants incorporating air prisms to reduce metallic contact losses". Optics Express. 24 (22): A1419. Bibcode:2016OExpr..24A1419C. doi:10.1364/oe.24.0a1419. PMID 27828526.
- Korech, Omer; Gordon, Jeffrey M.; Katz, Eugene A.; Feuermann, Daniel; Eisenberg, Naftali (2007-10-01). "Dielectric microconcentrators for efficiency enhancement in concentrator solar cells". Optics Letters. 32 (19): 2789. Bibcode:2007OptL...32.2789K. doi:10.1364/OL.32.002789.
- Hosein, Ian D.; Lin, Hao; Ponte, Matthew R.; Basker, Dinesh K.; Saravanamuttu, Kalaichelvi (2013-11-03). Enhancing Solar Energy Light Capture with Multi-Directional Waveguide Lattices. Renewable Energy and the Environment. pp. RM2D.2. doi:10.1364/OSE.2013.RM2D.2. ISBN 978-1-55752-986-2.
- Biria, Saeid; Chen, Fu Hao; Pathreeker, Shreyas; Hosein, Ian D. (2017-12-22). "Polymer Encapsulants Incorporating Light-Guiding Architectures to Increase Optical Energy Conversion in Solar Cells". Advanced Materials. 30 (8): 1705382. doi:10.1002/adma.201705382. ISSN 0935-9648. PMID 29271510.
- Biria, Saeid; Chen, Fu-Hao; Hosein, Ian D. (2019). "Enhanced Wide-Angle Energy Conversion Using Structure-Tunable Waveguide Arrays as Encapsulation Materials for Silicon Solar Cells". Physica Status Solidi A. 0 (2): 1800716. doi:10.1002/pssa.201800716. ISSN 1862-6319.
- Huang, Zhiyuan; Li, Xin; Mahboub, Melika; Hanson, Kerry M.; Nichols, Valerie M.; Le, Hoang; Tang, Ming L.; Bardeen, Christopher J. (2015-08-12). "Hybrid Molecule–Nanocrystal Photon Upconversion Across the Visible and Near-Infrared". Nano Letters. 15 (8): 5552–5557. Bibcode:2015NanoL..15.5552H. doi:10.1021/acs.nanolett.5b02130. PMID 26161875.
- Schumann, Martin F.; Langenhorst, Malte; Smeets, Michael; Ding, Kaining; Paetzold, Ulrich W.; Wegener, Martin (2017-07-04). "All-Angle Invisibility Cloaking of Contact Fingers on Solar Cells by Refractive Free-Form Surfaces". Advanced Optical Materials. 5 (17): 1700164. doi:10.1002/adom.201700164. ISSN 2195-1071.
- Langenhorst, Malte; Schumann, Martin F.; Paetel, Stefan; Schmager, Raphael; Lemmer, Uli; Richards, Bryce S.; Wegener, Martin; Paetzold, Ulrich W. (2018-08-01). "Freeform surface invisibility cloaking of interconnection lines in thin-film photovoltaic modules". Solar Energy Materials and Solar Cells. 182: 294–301. doi:10.1016/j.solmat.2018.03.034. ISSN 0927-0248.
- Fitzky, Hans G.; Ebneth, Harold (24 May 1983). U.S. Patent 4,385,102, "Large-area photovoltaic cell"
- PV News November 2012. Greentech Media. Retrieved 3 June 2012.
- Jäger-Waldau, Arnulf (September 2013). PV Status Report 2013. European Commission, Joint Research Centre, Institute for Energy and Transport.
- PV production grows despite a crisis-driven decline in investment. European Commission, Brussels, 30 September 2013
- PV Status Report 2013 | Renewable Energy Mapping and Monitoring in Europe and Africa (REMEA). Iet.jrc.ec.europa.eu (11 April 2014). Retrieved 20 April 2014.
- "Solar Rises in Malaysia During Trade Wars Over Panels". New York Times. 12 December 2014.
- Plunging Cost Of Solar PV (Graphs). CleanTechnica (7 March 2013). Retrieved 20 April 2014.
- Falling silicon prices shakes up solar manufacturing industry. Down To Earth (19 September 2011). Retrieved 20 April 2014.
- Perlin, John (1999). From space to Earth: the story of solar electricity. Earthscan. p. 50. ISBN 978-0-937948-14-9.
- PV Lighthouse Calculators and Resources for photovoltaic scientists and engineers
- Photovoltaics CDROM online
- Solar cell manufacturing techniques
- Renewable Energy: Solar at Curlie
- Solar Energy Laboratory at University of Southampton
- NASA's Photovoltaic Info
- Green, M. A.; Emery, K.; Hishikawa, Y.; Warta, W. (2010). "Solar cell efficiency tables (version 36)". Progress in Photovoltaics: Research and Applications. 18 (5): 346. doi:10.1002/pip.1021.
- "Electric Energy From Sun Produced by Light Cell". Popular Mechanics, July 1931, article on various 1930s research on solar cells
Charles Darwin proposed, and provided evidence for, the scientific theory that all species have evolved over time from one or a few common ancestors through the process of natural selection. Darwin's theory became widely accepted by the scientific community in the 1930s and now forms the basis of modern evolutionary theory. In modified form, it remains a cornerstone of biology, as it provides a unifying explanation for the diversity of life. Darwin, an English naturalist, was born on 12 February 1809 and died on 19 April 1882.

Born: 12 February 1809, Mount House, Shrewsbury, Shropshire, England
Died: 19 April 1882 (aged 73), Down House, Kent, England
Institutions: Royal Geographical Society
Alma mater: University of Edinburgh; University of Cambridge
Known for: The Origin of Species
Notable prizes: Royal Medal (1853), Wollaston Medal (1859), Copley Medal (1864)
Religion: Church of England, though from a Unitarian family background; agnostic after 1851

Charles Darwin developed his interest in natural history at university, first while studying medicine at Edinburgh, then theology at Cambridge. His five-year voyage on the Beagle established him as a geologist whose observations and theories supported Charles Lyell's uniformitarian ideas, and publication of his journal of the voyage made him famous as a popular author. Puzzled by the geographical distribution of wildlife and the fossils he collected on the voyage, Darwin investigated the transmutation of species and conceived his theory of natural selection in 1838. Having seen others attacked as heretics for such ideas, he confided only in his closest friends and continued his extensive research to meet anticipated objections. In 1858, Alfred Russel Wallace sent him an essay describing a similar theory, prompting the two to publish their theories early in a joint publication. Darwin's 1859 book On the Origin of Species established evolution by common descent as the dominant scientific explanation of diversification in nature. He examined human evolution and sexual selection in The Descent of Man, and Selection in Relation to Sex, followed by The Expression of the Emotions in Man and Animals. His research on plants was published in a series of books, and in his final book he examined earthworms and their effect on soil. In recognition of Darwin's pre-eminence, he was buried in Westminster Abbey, close to John Herschel and Isaac Newton.

Biography and early life of Charles Darwin.

Charles Darwin was born in Shrewsbury, Shropshire, England on 12 February 1809 at his family home, the Mount. He was the fifth of six children of the wealthy society doctor and financier Robert Darwin and Susannah Darwin (née Wedgwood). He was the grandson of Erasmus Darwin on his father's side and of Josiah Wedgwood on his mother's side. Both families were largely Unitarian, though the Wedgwoods were adopting Anglicanism. Robert Darwin, himself quietly a freethinker, made a nod toward convention by having baby Charles baptised in the Anglican Church. Nonetheless, Charles and his siblings attended the Unitarian chapel with their mother, and in 1817 Charles joined the day school run by its preacher. In July of that year, when Charles was eight years old, his mother died. From September 1818, he attended the nearby Anglican Shrewsbury School as a boarder.
Charles Darwin spent the summer of 1825 helping his father treat the poor of Shropshire as an apprentice doctor. In the autumn, he went to the University of Edinburgh to study medicine, but he was revolted by the brutality of surgery and neglected his medical studies. He learned taxidermy from John Edmonstone, a freed black slave who told him exciting tales of the South American rainforest. Later, in The Descent of Man, he used this experience as evidence that "Negroes and Europeans" were closely related despite superficial differences in appearance. In Darwin's second year, he joined the Plinian Society, a student group interested in natural history. He became a keen pupil of Robert Edmund Grant, a proponent of Jean-Baptiste Lamarck's theory of evolution by acquired characteristics, which Charles's grandfather Erasmus had also advocated. On the shores of the Firth of Forth, Darwin joined in Grant's investigations of the life cycle of marine animals. These studies found evidence for homology, the radical theory that all animals have similar organs which differ only in complexity, thus showing common descent. In March 1827, Darwin made a presentation to the Plinian Society of his own discovery that the black spores often found in oyster shells were the eggs of a skate leech. He also sat in on Robert Jameson's natural history course, learning about stratigraphic geology, receiving training in classifying plants, and assisting with work on the extensive collections of the University Museum, one of the largest museums in Europe at the time. In 1827, his father, unhappy at his younger son's lack of progress, shrewdly enrolled him in a Bachelor of Arts course at Christ's College, Cambridge, to qualify as a clergyman, expecting him to get a good income as an Anglican parson. However, Darwin preferred riding and shooting to studying. Along with his cousin William Darwin Fox, he became engrossed in the craze at the time for the competitive collecting of beetles. Fox introduced him to the Reverend John Stevens Henslow, professor of botany, for expert advice on beetles. Darwin subsequently joined Henslow's natural history course and became his favourite pupil, known to the dons as "the man who walks with Henslow". When exams drew near, Darwin focused on his studies and received private instruction from Henslow. Darwin was particularly enthusiastic about the writings of William Paley, including the argument for divine design in nature. It has been argued that Darwin's enthusiasm for Paley's religious adaptationism paradoxically played a role even later, when Darwin formulated his theory of natural selection. In his finals in January 1831, he performed well in theology and, having scraped through in classics, mathematics and physics, came tenth out of a pass list of 178. Residential requirements kept Darwin at Cambridge until June. Following Henslow's example and advice, he was in no rush to take Holy Orders. Inspired by Alexander von Humboldt's Personal Narrative, he planned to visit the Madeira Islands with some classmates after graduation to study natural history in the tropics. To prepare himself, Darwin joined the geology course of the Reverend Adam Sedgwick and, in the summer, went with him to assist in mapping strata in Wales.
After a fortnight with student friends at Barmouth, he returned home to find a letter from Henslow recommending Darwin as a suitable (if unfinished) naturalist for the unpaid position of gentleman's companion to Robert FitzRoy, the captain of HMS Beagle, which was to leave in four weeks on an expedition to chart the coastline of South America. His father objected to the planned two-year voyage, regarding it as a waste of time, but was persuaded by his brother-in-law, Josiah Wedgwood, to agree to his son's participation. Charles Darwin and the Journey of the Beagle. The Beagle survey took five years, two-thirds of which Darwin spent on land. He carefully noted a rich variety of geological features, fossils and living organisms, and methodically collected an enormous number of specimens, many of them new to science. At intervals during the voyage he sent specimens to Cambridge together with letters about his findings, and these established his reputation as a naturalist. His extensive detailed notes showed his gift for theorising and formed the basis for his later work. The journal he originally wrote for his family, published as The Voyage of the Beagle, summarises his findings and provides social, political and anthropological insights into the wide range of people he met, both native and colonial. While on board the ship, Darwin suffered badly from seasickness. In October 1833 he caught a fever in Argentina, and in July 1834, while returning from the Andes down to Valparaíso, he fell ill and spent a month in bed. Before they set out, FitzRoy gave Darwin the first volume of Charles Lyell's Principles of Geology, which explained landforms as the outcome of gradual processes over huge periods of time. On their first stop ashore at St Jago, Darwin found that a white band high in the volcanic rock cliffs consisted of baked coral fragments and shells. This matched Lyell's concept of land slowly rising or falling, giving Darwin a new insight into the geological history of the island which inspired him to think of writing a book on geology. He went on to make many more discoveries, some of them particularly dramatic. He saw stepped plains of shingle and seashells in Patagonia as raised beaches, and after experiencing an earthquake in Chile saw mussel-beds stranded above high tide showing that the land had just been raised. High in the Andes he saw several fossil trees that had grown on a sand beach, with seashells nearby. He theorised that coral atolls form on sinking volcanic mountains, and confirmed this when the Beagle surveyed the Cocos (Keeling) Islands. In South America, Darwin found and excavated rare fossils of gigantic extinct mammals in strata with modern seashells, indicating recent extinction and no change in climate or signs of catastrophe. Though he correctly identified one as a Megatherium and fragments of armour reminded him of the local armadillo, he assumed his finds were related to African or European species and it was a revelation to him after the voyage when Richard Owen showed that they were closely related to living creatures exclusively found in the Americas. Lyell's second volume, which argued against evolutionism and explained species distribution by "centres of creation", was sent out to Darwin. He puzzled over all he saw, and his ideas went beyond Lyell. In Argentina, he found that two types of rhea had separate but overlapping territories. On the Galápagos Islands, he collected mockingbirds and noted that they were different depending on which island they came from. 
He also heard that local Spaniards could tell from its appearance which island a tortoise came from, but thought the creatures had been imported by buccaneers. In Australia, the marsupial rat-kangaroo and the platypus seemed so unusual that Darwin thought it was almost as though two distinct Creators had been at work. In Cape Town he and FitzRoy met John Herschel, who had recently written to Lyell about that "mystery of mysteries", the origin of species. When organising his notes on the return journey, Darwin wrote that if his growing suspicions about the mockingbirds and tortoises were correct, "such facts undermine the stability of Species", then cautiously added "would" before "undermine". He later wrote that such facts "seemed to me to throw some light on the origin of species". Three natives who had been taken from Tierra del Fuego on the Beagle's previous voyage were returned there to become missionaries. They had become "civilised" in England over the previous two years, yet their relatives appeared to Darwin to be "miserable, degraded savages". A year on, the mission had been abandoned, and only Jemmy Button spoke with the crew, saying he preferred his harsh previous way of life and did not want to return to England. Because of this experience, Darwin came to think that humans were not as far removed from animals as his friends then believed, and saw the differences as relating to cultural advances towards civilisation rather than as racial. He detested the slavery he saw elsewhere in South America, and was saddened by the effects of European settlement on Aborigines in Australia and Maori in New Zealand. Captain FitzRoy was committed to writing the official Narrative of the Beagle voyages, and near the end of the voyage he read Darwin's diary and asked him to rewrite this Journal to provide the third volume, on natural history.

Inception of Charles Darwin's evolutionary theory.

While Darwin was still on the voyage, Henslow fostered his former pupil's reputation by giving selected naturalists access to the fossil specimens and a pamphlet of Darwin's geological letters. When the Beagle returned on 2 October 1836, Darwin was a celebrity in scientific circles. After visiting his home in Shrewsbury and seeing relatives, Darwin hurried to Cambridge to see Henslow, who advised him on finding naturalists available to describe and catalogue the collections, and agreed to take on the botanical specimens. Darwin's father organised investments, enabling his son to be a self-funded gentleman scientist, and an excited Darwin went round the London institutions, being fêted and seeking experts to describe the collections. Zoologists had a huge backlog of work, and there was a danger of specimens simply being left in storage. An eager Charles Lyell met Darwin for the first time on 29 October and soon introduced him to the up-and-coming anatomist Richard Owen, who had the facilities of the Royal College of Surgeons at his disposal to work on the fossil bones collected by Darwin. Owen's surprising results included gigantic sloths, a hippopotamus-like skull from the extinct rodent Toxodon, and armour fragments from a huge extinct armadillo (Glyptodon), as Darwin had initially surmised. The fossil creatures were unrelated to African animals, but closely related to living species in South America. In mid-December, Darwin moved to Cambridge to organise work on his collections and rewrite his Journal.
He wrote his first paper, showing that the South American landmass was slowly rising, and with Lyell's enthusiastic backing read it to the Geological Society of London on 4 January 1837. On the same day, he presented his mammal and bird specimens to the Zoological Society. The ornithologist John Gould soon revealed that the Galapagos birds that Darwin had thought a mixture of blackbirds, "gross-beaks" and finches were, in fact, twelve separate species of finches. On 17 February 1837, Darwin was elected to the Council of the Geological Society, and in his presidential address Lyell presented Owen's findings on Darwin's fossils, stressing the geographical continuity of species as supporting his uniformitarian ideas. On 6 March 1837, Darwin moved to London to be close to this work, and joined the social whirl around scientists and savants such as Charles Babbage, who thought that God preordained life by natural laws rather than by ad hoc miraculous creations. Darwin lived near his freethinking brother Erasmus, who was part of this Whig circle and whose close friend, the writer Harriet Martineau, promoted the ideas of Thomas Malthus underlying the Whig "Poor Law reforms" aimed at discouraging the poor from breeding beyond the available food supply. John Herschel's question on the origin of species was widely discussed. Medical men, including Dr. Gully, even joined Grant in endorsing the transmutation of species, but to Darwin's scientist friends such radical heresy attacked the divine basis of a social order already under threat from recession and riots. Gould now revealed that the Galapagos mockingbirds from different islands were separate species, not just varieties, and that the "wrens" were yet another species of finch. Darwin had not kept track of which islands his finch specimens were from, but found the information in the notes of others on the Beagle, including FitzRoy, who had more carefully recorded their own collections. The zoologist Thomas Bell showed that the Galápagos tortoises were native to the islands. By mid-March, Darwin was convinced that creatures arriving in the islands had become altered in some way to form new species on the different islands, and investigated transmutation while noting his speculations in his "Red Notebook", which he had begun on the Beagle. In mid-July, he began his secret "B" notebook on transmutation, and on page 36 wrote "I think" above his first sketch of an evolutionary tree.

Overwork, illness, and marriage of Charles Darwin.

As well as launching into this intensive study of transmutation, Darwin became mired in more work. While still rewriting his Journal, he took on the editing and publishing of the expert reports on his collections, and with Henslow's help obtained a Treasury grant of £1,000 to sponsor the multi-volume Zoology of the Voyage of H.M.S. Beagle. He agreed to unrealistic dates for this and for a book on South American geology supporting Lyell's ideas. Darwin finished writing his Journal around 20 June 1837, just as Queen Victoria came to the throne, but then had its proofs to correct. Charles Darwin's health suffered from the pressure. On 20 September 1837, he had "palpitations of the heart". On his doctor's advice that a month of recuperation was needed, he went to Shrewsbury, then on to visit his Wedgwood relatives at Maer Hall, but found them too eager for tales of his travels to give him much rest. His charming, intelligent and rather messy cousin Emma Wedgwood, nine months older than Darwin, was nursing his invalid aunt.
His uncle Jos pointed out an area of ground where cinders had disappeared under loam and suggested that this might have been the work of earthworms. This inspired a talk which Darwin gave to the Geological Society on 1 November, the first demonstration of the role of earthworms in soil formation. William Whewell pushed Darwin to take on the duties of Secretary of the Geological Society. After first declining this extra work, he accepted the post in March 1838.

Despite the grind of writing and editing, Darwin made remarkable progress on transmutation. While keeping his developing ideas secret, he took every opportunity to question expert naturalists and, unconventionally, people with practical experience such as farmers and pigeon fanciers. Over time his research drew on information from his relatives and children, the family butler, neighbours, colonists and former shipmates. He included mankind in his speculations from the outset, and on seeing an ape in the zoo on 28 March 1838 noted its child-like behaviour.

The strain told, and by June he was laid up for days on end with stomach problems, headaches and heart symptoms. For the rest of his life, he was repeatedly incapacitated with episodes of stomach pains, vomiting, severe boils, palpitations, trembling and other symptoms, particularly during times of stress, such as when attending meetings or dealing with controversy over his theory. The cause of Darwin's illness was unknown during his lifetime, and attempts at treatment had little success. Recent attempts at diagnosis have suggested Chagas disease caught from insect bites in South America, Ménière's disease, or various psychological illnesses as possible causes, without any conclusive results.

On 23 June 1838, he took a break from the pressure of work and went "geologising" in Scotland. He visited Glen Roy in glorious weather to see the parallel "roads", horizontal ledges cut into the hillsides. He thought that these were raised beaches; they were later shown to have been shorelines of a glacial lake. Fully recuperated, he returned to Shrewsbury in July. Used to jotting down daily notes on animal breeding, he scrawled rambling thoughts about career and prospects on two scraps of paper, one with columns headed "Marry" and "Not Marry". Advantages included "constant companion and a friend in old age ... better than a dog anyhow", against points such as "less money for books" and "terrible loss of time." Having decided in favour, he discussed it with his father, then went to visit Emma on 29 July 1838. He did not get around to proposing, but against his father's advice he mentioned his ideas on transmutation.

Continuing his research in London, Darwin now read widely, including "for amusement" the 6th edition of Malthus's An Essay on the Principle of Population, which calculated from the birth rate that the human population could double every 25 years, though in practice growth was kept in check by death, disease, wars and famine. Darwin was well prepared to see at once that this also applied to de Candolle's "warring of the species" of plants and the struggle for existence among wildlife, explaining how the numbers of a species remained roughly stable. As species always breed beyond available resources, favourable variations would make organisms better at surviving and passing the variations on to their offspring, while unfavourable variations would be lost. This would result in the formation of new species.
On 28 September 1838 he noted this insight, describing it as a kind of wedging, forcing adapted structures into gaps in the economy of nature as weaker structures were thrust out. He now had a theory by which to work, and over the following months compared farmers picking the best breeding stock to a Malthusian Nature selecting from variants thrown up by "chance" so that "every part of (every) newly acquired structure is fully practised and perfected", and thought this analogy "the most beautiful part of my theory".

On 11 November, he returned to Maer and proposed to Emma, once more telling her his ideas. She accepted, then in exchanges of loving letters she showed how she valued his openness, but her upbringing as a very devout Anglican led her to express fears that his lapses of faith could endanger her hopes to meet in the afterlife. While he was house-hunting in London, bouts of illness continued, and Emma wrote urging him to get some rest, almost prophetically remarking "So don't be ill any more my dear Charley till I can be with you to nurse you." He found what they called "Macaw Cottage" (because of its gaudy interiors) in Gower Street, then moved his "museum" in over Christmas. The marriage was arranged for 24 January 1839, but the Wedgwoods set the date back. On the 24th, Darwin was honoured by being elected as Fellow of the Royal Society. On 29 January 1839, Darwin and Emma Wedgwood were married at Maer in an Anglican ceremony arranged to suit the Unitarians, then immediately caught the train to London and their new home.

Charles Darwin preparing the theory of natural selection for publication.

Darwin had found the basis of his theory of natural selection, but was aware of how much work remained to make it credible to his fiercely critical scientific colleagues. As Secretary of the Geological Society at its meeting on 19 December 1838, he saw Owen and Buckland display their hatred of evolution as they destroyed the reputation of his old Lamarckian teacher Grant. Work on his Beagle findings continued, and as well as consulting animal breeders he carried out extensive experiments with plants, trying to find evidence answering all the arguments he anticipated when his theory was made public. When FitzRoy's Narrative was published in May 1839, Darwin's Journal and Remarks (The Voyage of the Beagle) as the third volume was such a success that later that year it was published on its own.

Early in 1842, Darwin sent a letter about his ideas to Lyell, who was dismayed that his ally now denied "seeing a beginning to each crop of species". In May, Darwin's book on coral reefs was published after more than three years of work, and he then wrote a "pencil sketch" of his theory. To escape the pressures of London, the family moved to rural Down House in November. On 11 January 1844 Darwin wrote to his botanist friend Joseph Dalton Hooker about his theory, saying it was like confessing "a murder", but to his relief Hooker thought that "there might have been a gradual change of species" and expressed interest in Darwin's explanation. By July, Darwin had expanded his "sketch" into a 230-page "Essay". His fears that his ideas would be dismissed as Lamarckian Radicalism were reawakened by controversy over the anonymous publication in October of Vestiges of the Natural History of Creation, which was severely attacked by establishment scientists.
However, the book was a best-seller and widened middle-class interest in transmutation, paving the way for Darwin as well as reminding him of the need to answer all difficulties before making his theory public. Darwin completed his third geological book in 1846, and embarked on a huge study of barnacles with the assistance of Hooker. In 1847, Hooker read the "Essay" and sent notes that provided Darwin with the calm critical feedback that he needed, but would not commit himself and questioned Darwin's opposition to continuing acts of creation. In an attempt to improve his chronic ill health, Darwin went to a spa in Malvern in 1849. To his surprise, he found that two months of water treatment helped. Then his treasured daughter Annie fell ill, reawakening his fears that his illness might be hereditary. After a long series of crises, she died, and Darwin lost all faith in a beneficent God.

Darwin's eight years of work on barnacles (Cirripedia) found "homologies" that supported his theory by showing that slightly changed body parts could serve different functions to meet new conditions. In 1853 it earned him the Royal Society's Royal Medal, and it made his reputation as a biologist. In 1854 he resumed work on his theory of species, and in November realised that divergence in the character of descendants could be explained by their becoming adapted to "diversified places in the economy of nature".

Publication of the theory of evolution by Charles Darwin.

By the spring of 1856, Darwin was investigating how species spread. Hooker increasingly doubted the traditional view that species were fixed, but their new ally Thomas Huxley was firmly against evolution. Lyell was intrigued by Darwin's speculations without realising their extent, and when he read a paper by Wallace on the introduction of species, he saw similarities with Darwin's thoughts and urged him to publish to establish priority. Though Darwin saw no threat, he began work on a short paper. Finding answers to difficult questions such as how seeds could travel across seawater held him up repeatedly, and he expanded his plans to a "big book on species" titled Natural Selection. He continued his researches, obtaining information and specimens from naturalists worldwide, including Wallace, who was working in Borneo. In December 1857, Darwin received a letter from Wallace asking if the book would examine human origins. He responded that he would avoid that subject, "so surrounded with prejudices", while encouraging Wallace's theorising and adding that "I go much further than you."

Darwin's book was only half completed when, on 18 June 1858, he received a paper from Wallace describing natural selection. Though shocked that he had been "forestalled", Darwin sent it on to Lyell, as requested, and, though Wallace had not asked for publication, offered to send it to any journal that Wallace chose. His family was in crisis, with children in the village dying of scarlet fever, and he put matters in the hands of Lyell and Hooker. They agreed on a joint presentation at the Linnean Society on 1 July of On the Tendency of Species to form Varieties; and on the Perpetuation of Varieties and Species by Natural Means of Selection; however, Darwin's baby son died of the scarlet fever and he was too distraught to attend. There was little immediate attention to this announcement of the theory; the president of the Linnean Society left the meeting lamenting that the year had not been marked by any great discoveries.
Later, Darwin could only recall one review; Professor Haughton of Dublin claimed that "all that was new in them was false, and what was true was old." Darwin struggled for thirteen months to produce an abstract of his "big book", suffering from ill health but getting constant encouragement from his scientific friends. Lyell arranged to have it published by John Murray. On the Origin of Species by Means of Natural Selection, or The Preservation of Favoured Races in the Struggle for Life (usually abbreviated to The Origin of Species) proved unexpectedly popular, with the entire stock of 1,250 copies oversubscribed when it went on sale to booksellers on 22 November 1859. In the book, Darwin set out "one long argument" of detailed observations, inferences and consideration of anticipated objections. His only allusion to human evolution was the understatement that "light will be thrown on the origin of man and his history". He avoided the then controversial term "evolution", but at the end of the book concluded that "endless forms most beautiful and most wonderful have been, and are being, evolved." His theory is simply stated in the introduction: As many more individuals of each species are born than can possibly survive; and as, consequently, there is a frequently recurring struggle for existence, it follows that any being, if it vary however slightly in any manner profitable to itself, under the complex and sometimes varying conditions of life, will have a better chance of surviving, and thus be naturally selected. From the strong principle of inheritance, any selected variety will tend to propagate its new and modified form.

Reaction to the publication of Charles Darwin's work.

There was wide public interest in Charles Darwin's book and a controversy which he monitored closely, keeping press cuttings of reviews, articles, satires, parodies and caricatures. Critical reviewers were quick to pick out the unstated implications of "men from monkeys", while amongst favourable responses Huxley's reviews included swipes at Richard Owen, leader of the scientific establishment Huxley was trying to overthrow. Owen's verdict was unknown until his April review condemned the book. The Church of England scientific establishment, including Darwin's old Cambridge tutors Sedgwick and Henslow, reacted against the book, though it was well received by a younger generation of professional naturalists. In 1860, the publication of Essays and Reviews by seven liberal Anglican theologians diverted clerical attention away from Darwin. An explanation of higher criticism and other heresies, it included the argument that miracles broke God's laws, so belief in them was atheistic, and praise for "Mr Darwin's masterly volume (supporting) the grand principle of the self-evolving powers of nature".

The most famous confrontation took place at a meeting of the British Association for the Advancement of Science in Oxford. Professor John William Draper delivered a long lecture about Darwin and social progress, then Samuel Wilberforce, the Bishop of Oxford, argued against Darwin. In the ensuing debate Joseph Hooker argued strongly for Darwin, and Thomas Huxley established himself as "Darwin's bulldog", the fiercest defender of evolutionary theory on the Victorian stage.
Both sides came away feeling victorious, but Huxley went on to make much of his claim that on being asked by Wilberforce whether he was descended from monkeys on his grandfather's side or his grandmother's side, Huxley muttered: "The Lord has delivered him into my hands" and replied that he "would rather be descended from an ape than from a cultivated man who used his gifts of culture and eloquence in the service of prejudice and falsehood". Darwin's illness kept him away from the public debates, though he read eagerly about them and mustered support through correspondence. Asa Gray persuaded a publisher in the United States to pay royalties, and Darwin imported and distributed Gray's pamphlet Natural Selection is not inconsistent with Natural Theology. In Britain, friends including Hooker and Lyell took part in the scientific debates which Huxley pugnaciously led to overturn the dominance of clergymen and aristocratic amateurs under Owen in favour of a new generation of professional scientists. Owen made the mistake of (wrongly) claiming certain anatomical differences between ape and human brains, and accusing Huxley of advocating "Ape Origin of Man". Huxley gladly did just that, and his campaign over two years was devastatingly successful in ousting Owen and the "old guard". Darwin's friends formed the X Club and helped to gain him the honour of the Royal Society's Copley Medal in 1864. Broader public interest had already been stimulated by Vestiges, and the Origin of Species was translated into many languages and went through numerous reprints, becoming a staple scientific text accessible both to a newly curious middle class and to "working men" who flocked to Huxley's lectures. Darwin's theory also resonated with various movements at the time.

Charles Darwin's Descent of Man, sexual selection, and botany.

Despite repeated bouts of illness during the last twenty-two years of his life, Darwin pressed on with his work. He had published an abstract of his theory, but more controversial aspects of his "big book" were still incomplete, including explicit evidence of humankind's descent from earlier animals, and exploration of possible causes underlying the development of society and of human mental abilities. He had yet to explain features with no obvious utility other than decorative beauty. His experiments, research and writing continued.

When Darwin's daughter fell ill, he set aside his experiments with seedlings and domestic animals to accompany her to a seaside resort, where he became interested in wild orchids. This developed into an innovative study of how their beautiful flowers served to control insect pollination and ensure cross fertilisation. As with the barnacles, homologous parts served different functions in different species. Back at home, he lay on his sickbed in a room filled with experiments on climbing plants. A reverent Ernst Haeckel, who had spread the gospel of Darwinismus in Germany, visited him. Wallace remained supportive, though he increasingly turned to spiritualism. The Variation of Animals and Plants under Domestication, the first part of Darwin's planned "big book" (expanding on his "abstract" published as The Origin of Species), grew to two huge volumes, forcing him to leave out human evolution and sexual selection, and sold briskly despite its size. A further book of evidences, dealing with natural selection in the same style, was largely written, but was not published until 1975.
The question of human evolution had been taken up by his supporters (and detractors) shortly after the publication of The Origin of Species, but Darwin's own contribution to the subject came more than ten years later with the two-volume The Descent of Man, and Selection in Relation to Sex, published in 1871. In the second volume, Darwin introduced in full his concept of sexual selection to explain the evolution of human culture, the differences between the human sexes, and the differentiation of human races, as well as the beautiful (and seemingly non-adaptive) plumage of birds. A year later Darwin published his last major work, The Expression of the Emotions in Man and Animals, which focused on the evolution of human psychology and its continuity with the behaviour of animals. He developed his ideas that the human mind and cultures were developed by natural and sexual selection, an approach which has been revived in the last three decades with the emergence of evolutionary psychology. As he concluded in Descent of Man, Darwin felt that, despite all of humankind's "noble qualities" and "exalted powers": "Man still bears in his bodily frame the indelible stamp of his lowly origin."

His evolution-related experiments and investigations culminated in books on the movement of climbing plants, insectivorous plants, the effects of cross and self fertilisation of plants, different forms of flowers on plants of the same species, and The Power of Movement in Plants. In his last book, he returned to the effect earthworms have on soil formation. He died in Downe, Kent, England, on 19 April 1882. He had expected to be buried in St Mary's churchyard at Downe, but at the request of Darwin's colleagues, William Spottiswoode (President of the Royal Society) arranged for Darwin to be given a state funeral and buried in Westminster Abbey, close to John Herschel and Isaac Newton.

Charles Darwin's children.

William Erasmus Darwin (27 December 1839 - 1914)
Anne Elizabeth Darwin (2 March 1841 - 22 April 1851)
Mary Eleanor Darwin (23 September 1842 - 16 October 1842)
Henrietta Emma "Etty" Darwin (25 September 1843 - 1929)
George Howard Darwin (9 July 1845 - 7 December 1912)
Elizabeth "Bessy" Darwin (8 July 1847 - 1926)
Francis Darwin (16 August 1848 - 19 September 1925)
Leonard Darwin (15 January 1850 - 26 March 1943)
Horace Darwin (13 May 1851 - 29 September 1928)
Charles Waring Darwin (6 December 1856 - 28 June 1858)

The Darwins had ten children: two died in infancy, and Annie's death at the age of ten had a devastating effect on her parents. Charles was a devoted father and uncommonly attentive to his children. Whenever they fell ill, he feared that they might have inherited weaknesses from inbreeding due to the close family ties he shared with his wife and cousin, Emma Wedgwood. He examined this topic in his writings, contrasting it with the advantages of crossing amongst many organisms. Despite his fears, most of the surviving children went on to have distinguished careers as notable members of the prominent Darwin-Wedgwood family. Of his surviving children, George, Francis and Horace became Fellows of the Royal Society, distinguished as astronomer, botanist and civil engineer, respectively. His son Leonard, on the other hand, went on to be a soldier, politician, economist, eugenicist and mentor of the statistician and evolutionary biologist Ronald Fisher.

Religious views of Charles Darwin.
Though Charles Darwin's family background was Nonconformist, and his father, grandfather and brother were Freethinkers, at first he did not doubt the literal truth of the Bible. He attended a Church of England school, then at Cambridge studied Anglican theology to become a clergyman. He was convinced by William Paley's teleological argument that design in nature proved the existence of God, but during the Beagle voyage he questioned, for example, why beautiful deep-ocean creatures had been created where no one could see them, or how the ichneumon wasp paralysing caterpillars as live food for its eggs could be reconciled with Paley's vision of beneficent design. He was still quite orthodox and would quote the Bible as an authority on morality, but did not trust the history in the Old Testament. When investigating transmutation of species he knew that his naturalist friends thought this a bestial heresy undermining miraculous justifications for the social order, the kind of radical argument then being used by Dissenters and atheists to attack the Church of England's privileged position as the Established church.

Though Darwin wrote of religion as a tribal survival strategy, he still believed that God was the ultimate lawgiver. His belief dwindled, and with the death of his daughter Annie in 1851, Darwin finally lost all faith in Christianity. He continued to help the local church with parish work, but on Sundays would go for a walk while his family attended church. He now thought it better to look at pain and suffering as the result of general laws rather than direct intervention by God. When asked about his religious views, he wrote that he had never been an atheist in the sense of denying the existence of a God, and that generally "an agnostic would be the more correct description of my state of mind." The "Lady Hope Story", published in 1915, claimed that Darwin had reverted to Christianity on his sickbed. The claims were refuted by Darwin's children and have been dismissed as false by historians. His daughter, Henrietta, who was at his deathbed, said that he did not convert to Christianity. His last words were, in fact, directed at Emma: "Remember what a good wife you have been."

Political interpretations of Charles Darwin's theory.

Darwin's theories and writings, combined with Gregor Mendel's genetics (the "modern synthesis"), form the basis of all modern biology. However, Darwin's fame and popularity led to his name being associated with ideas and movements which at times had only an indirect relation to his writings, and sometimes went directly against his express comments.

Charles Darwin and Eugenics.

Following Darwin's publication of the Origin, his cousin, Francis Galton, applied the concepts to human society, starting in 1865 with ideas to promote "hereditary improvement", which he elaborated at length in 1869. In The Descent of Man, Darwin agreed that Galton had demonstrated the probability that "talent" and "genius" in humans were inherited, but dismissed the social changes Galton proposed as too utopian. Neither Galton nor Darwin supported government intervention; both thought that, at most, heredity should be taken into consideration by people seeking potential mates. In 1883, after Darwin's death, Galton began calling his social philosophy eugenics.
In the 20th century, eugenics movements gained popularity in a number of countries and became associated with reproduction control programmes such as compulsory sterilisation laws, then were stigmatised after their use in the rhetoric of Nazi Germany and its goals of genetic "purity". The ideas of Thomas Malthus and Herbert Spencer, which applied notions of evolution and "survival of the fittest" to societies, nations and businesses, became popular in the late 19th and early 20th centuries, and were used to defend various, sometimes contradictory, ideological perspectives including laissez-faire economics, colonialism, racism and imperialism. The term "Social Darwinism" originated around the 1890s, but became popular as a derogatory term in the 1940s with Richard Hofstadter's critique of laissez-faire conservatism. The concepts predate Darwin's publication of the Origin in 1859: Malthus died in 1834, and Spencer published his books on economics in 1851 and on evolution in 1855. Darwin himself insisted that social policy should not simply be guided by concepts of struggle and selection in nature, and that sympathy should be extended to all races and nations.

Commemoration of Charles Darwin.

During Darwin's lifetime, many species and geographical features were given his name. An expanse of water adjoining the Beagle Channel was named Darwin Sound by Robert FitzRoy after Darwin's prompt action saved the party from being marooned on a nearby shore when a collapsing glacier caused a large wave that would have swept away their boats; the nearby Mount Darwin in the Andes was named in celebration of Darwin's 25th birthday. When the Beagle was surveying Australia in 1839, Darwin's friend John Lort Stokes sighted a natural harbour which the ship's captain Wickham named Port Darwin. The settlement of Palmerston founded there in 1869 was officially renamed Darwin in 1911. It became the capital city of Australia's Northern Territory, which also boasts Charles Darwin University and Charles Darwin National Park. The 14 species of finches he collected in the Galápagos Islands are affectionately named "Darwin's finches" in honour of his legacy. Darwin College, Cambridge, founded in 1964, was named in honour of the Darwin family, partially because they owned some of the land it was on.

In 1992, Darwin was ranked #16 on Michael H. Hart's list of the most influential figures in history. Darwin came fourth in the 100 Greatest Britons poll sponsored by the BBC and voted for by the public. In 2000 Darwin's image appeared on the Bank of England ten pound note, replacing Charles Dickens. His impressive, luxuriant beard (which was reportedly difficult to forge) was said to be a contributory factor in the bank's choice. As a humorous celebration of evolution, the annual Darwin Award is bestowed on individuals who "improve our gene pool by removing themselves from it." Darwin has been the subject of many exhibitions, including the "Darwin" exhibition organised by the American Museum of Natural History in New York City in 2006 and shown in various cities in the US. Numerous biographies of Darwin have appeared, and the 1980 biographical novel The Origin by Irving Stone gives a closely researched fictional account of Darwin's life from the age of 22 onwards.

Works of Charles Darwin.
Darwin was a prolific author, and even without publication of his works on evolution would have had a considerable reputation as the author of The Voyage of the Beagle, as a geologist who had published extensively on South America and had solved the puzzle of the formation of coral atolls, and as a biologist who had published the definitive work on barnacles. While The Origin of Species dominates perceptions of his work, The Descent of Man, and Selection in Relation to Sex and The Expression of the Emotions in Man and Animals had considerable impact, and his books on plants, including The Power of Movement in Plants, were innovative studies of great importance, as was his final work on The Formation of Vegetable Mould Through the Action of Worms. His writings are currently available at The Complete Work of Charles Darwin Online; the Table of Contents provides links to all of his publications, including alternative editions, contributions to books and periodicals, correspondence, life and letters, autobiography, as well as a complete bibliography and catalogue of his manuscripts. The works are free to read, but not public domain, and include publications still under copyright. For unencumbered versions of his major works, see Works by Charles Darwin at Project Gutenberg.
Invasion of Poland

Poland had been reborn as an independent nation after World War I and the collapse of Austria-Hungary, Russia, and Germany. Polish borders had been partly re-established by the Versailles Treaty, but a series of armed conflicts with Germany, Czechoslovakia, Lithuania, and Ukrainian nationalists, as well as a major war with the Soviet Union, gave the borders their final shape.

During the course of the Polish-Soviet War (1919-20), Poland had been forced to rely on her own resources, as help from the Western Allies had been slow in coming or had been actively blocked by pro-communist unions in Europe. Because of the Polish-Soviet war and continuing Soviet efforts at infiltration thereafter, Polish military and political planning focused primarily on a future conflict with the Soviets. To this end, the Poles developed alliances with Rumania and Latvia. Poland's policy toward Germany was based on her alliance with France, but Polish-Czech relations remained cool. The problem with the French alliance, as far as the Poles were concerned, was the instability in French politics, which resulted in constant indecision about the eastern alliances. As governments rose and fell in regular succession, French policies toward Poland and other allies changed.

German military leaders had begun planning for war with Poland as early as the mid 1920s. Recovering the ethnically Polish territories of Pomerania, Poznan, and Silesia, as well as the largely German Free City of Danzig, were the major objectives. Nevertheless, the restrictions of Versailles and Germany's internal weakness made such plans impossible to realize. Hitler's rise to power in 1933 capitalized on Germans' desire to regain lost territories, to which Nazi leaders added the goal of destroying an independent Poland. According to author Alexander Rossino, prior to the war Hitler was at least as anti-Polish as anti-Semitic in his opinions. That same year, Poland's Marshal Jozef Pilsudski proposed to the French a plan for a joint invasion to remove Hitler from power, which the French vetoed as mad warmongering.

In 1934, however, the Germans signed a non-aggression pact with Poland, providing a kind of breathing space for both countries. German efforts to woo Poland into an anti-Soviet alliance were politely deferred as Poland attempted to keep her distance from both powerful neighbors. As German power began to grow, however, and Hitler increasingly threatened his neighbors, the Poles and French began to revitalize their alliance.

The Munich Pact dramatically increased Poland's danger. At the last minute, the Poles and Czechs had attempted to patch up their differences. The Czechs would give up disputed territory taken in 1919 and half ownership in the Skoda arms works in exchange for Polish military intervention in the case of German attack. The Munich Pact, however, closed this option, and Poland sent its troops to forcibly occupy the territory of Teschen and the nearby Bohumin rail junction to keep it out of German hands.

After Hitler violated the Munich treaty, Poland was able to extract guarantees of military assistance from France and, significantly, Britain. In March 1939, Hitler began to make demands on Poland for the return of territory in the Polish Corridor, cessation of Polish rights in Danzig, and annexation of the Free City to Germany. These Poland categorically rejected. As negotiations continued, both sides prepared for war.
Editor's addition: German demands sent to Poland on 25 Aug 1939 were the following:
- The return of Danzig to Germany
- Rail and road access across the corridor between Germany and East Prussia
- The cession to Germany of any Polish territory formerly part of pre-WW1 Germany that hosted 75% or more ethnic Germans
- An international board to discuss the cession of the Polish Corridor to Germany

Hitler, however, altered the strategic landscape again in August 1939 when Germany and the Soviet Union signed a non-aggression pact which contained secret protocols designed to partition Poland and divide up most of eastern Europe between the two dictators.

Poland's strategic position in 1939 was weak, but not hopeless. German control over Slovakia added significantly to Poland's already overly long frontier. German forces could attack Poland from virtually any direction.

Poland's major weakness, however, was its lack of a modernized military. In the 1920s, Poland had had the world's first all-metal air force, but had since fallen behind other powers. Poland was a poor, agrarian nation without significant industry. While Polish weapons design was often equal or superior to German and Soviet design, the country simply lacked the capacity to produce equipment in the needed quantities. One example was the P-37 Łoś, which at the start of the war was the world's best medium bomber. Another example was the "Ur" anti-tank rifle, the first weapon to use tungsten-core ammunition.

To motorize a single division to German standards would have required the use of all the civilian cars and trucks in the country. This weakness persisted despite heroic efforts by Polish society to create a modern military, which included fundraising among civilians and the Polish communities in the USA to buy modern equipment. As a percentage of GNP, Polish defense spending in the 1930s was second in Europe, behind the Soviet Union but ahead of Germany. Yet, in real dollar terms, the budget of the Luftwaffe alone in 1939 was ten times greater than the entire Polish defense budget. Even this did not give the full picture, since the Polish defense budget included money to upgrade roads and bridges and to build arms factories.

The Polish leadership was also hamstrung by political rifts and by the legacy of Pilsudski's authoritarian rule, which had retarded the development of modern strategic thinking and command. The top leadership was held by Marshal Edward Smigly-Rydz, who had been an able corps commander in 1920 but lacked the ability to command a complex modern army. Yet there were many able officers, such as Gen. Tadeusz Kutrzeba and Gen. Kazimierz Sosnkowski. Although overburdened by military brass, Poland had a solid corps of junior officers. The Polish Air Force, by contrast, was a very strong service.

Poland's one major advantage was in intelligence: beginning in the early 1930s, a group of young mathematicians had managed to break the German military codes of the supposedly unbreakable Enigma encoding machine. Until 1938, virtually all German radio traffic could be read by Polish intelligence. Thereafter, the Germans began to add new wrinkles to their systems, complicating the task. On the eve of the war, the Poles could read about ten percent of Wehrmacht and Luftwaffe traffic and nothing from the Kriegsmarine. However, the German military police frequencies continued to use the older system and were fully readable. This was augmented by human intelligence efforts.
By September 1, 1939, the Polish high command knew the location and disposition of 90 percent of German combat units on the eastern front.

Polish doctrine had developed during the Polish-Soviet War and emphasized maneuver, with little reliance placed on static defenses aside from a few key points. Unfortunately, the Polish army's ability to maneuver was far less than that of the more mechanized German army.

Much mythology surrounds Poland's use of cavalry, mostly due to Nazi propaganda absorbed by Western historians. About 10 percent of the Polish army was horse cavalry, a smaller percentage than in the U.S. army in 1939. Poland had more tanks than Italy, a country with a well-developed automotive industry. Polish cavalry were used as a form of mobile infantry and rarely fought mounted, and never with lances. The cavalry attracted high-caliber recruits, and the forces trained alongside tanks and possessed greater tank-fighting ability than comparable infantry units. Their use was also envisioned in any conflict with the USSR in eastern Poland, where the terrain was mainly forest, swamp, and mountain.

Poland's primary strategic goal was to draw France and Britain into the war on her side in the event of an attack by Germany. Poland's defense strategy in 1939, developed by Gen. Kutrzeba, envisioned a fighting withdrawal to the southeastern part of the country, the "Rumanian bridgehead." There, the high command stockpiled reserve supplies of equipment and fuel. In the rougher terrain north of the Rumanian and Hungarian borders, the army would make its stand. If all went well, an Anglo-French counterattack in the west would reduce German pressure and Polish forces could be re-supplied by the allies through friendly Rumania.

Hitler's political tactics, however, forced a modification of this plan. Fearing the Germans might attempt to seize the Polish Corridor or Danzig and then declare the war over, Polish forces were ordered closer to the border to ensure that any German attack would be immediately engaged in major combat. In so doing they would ensure that Poland's allies could not wriggle out of their treaty obligations.

For their part, German planners sought to deliver a rapid knockout blow to Poland within the first two weeks. German forces would launch deep armored attacks into Poland along two main routes: Łódź-Piotrków-Warsaw and from Prussia across the Narew River into eastern Mazovia. There would be secondary attacks in the south and against the Polish coastal defenses in the north. The primary objective would be to cut off Polish forces in northern and western Poland and seize the capital. [Editor's addition: To further deter France from entering the soon-to-begin German-Polish conflict, Hitler made several public visits to the West Wall on the German-French border beginning in Aug 1938 to survey the construction of bunkers, blockhouses, and other fortifications. The Nazi propaganda machine elaborated on these visits to form a picture of an invincible defensive line to deter French attacks when Germany invaded Poland.]

On paper, Poland's fully mobilized army would have numbered about 2.5 million. Due to allied pressure and mismanagement, however, only about 600,000 Polish troops were in place to meet the German invasion on September 1, 1939. These forces were organized into 7 armies and 5 independent operational groups.
The typical Polish infantry division was roughly equal in numbers to its German counterpart, but weaker in terms of anti-tank guns, artillery support, and transport. Poland had 30 active and 7 reserve divisions. In addition there were 12 cavalry brigades and one mechanized cavalry brigade. These forces were supplemented by units of the Border Defense Corps (KOP), an elite force designed to secure the frontiers from infiltration and engage in small unit actions, diversion, sabotage, and intelligence gathering. There was also a National Guard used for local defense and equipped with older model weapons. Armored train groups and river flotillas operated under army command.

German forces were organized in two Army Groups, with a total of 5 armies. The Germans fielded about 1.8 million troops. The Germans had 2,600 tanks against the Polish 180, and over 2,000 aircraft against the Polish 420. German forces were supplemented by a Slovak brigade.

Armed clashes along the border became increasingly frequent in August 1939 as Abwehr operations worked to penetrate Polish forward areas and were opposed by the Polish Border Defense Corps, an elite unit originally designed to halt Soviet penetration of the eastern frontier. These clashes alarmed the French, who urged the Poles to avoid "provoking" Hitler.

Polish forces had been partly mobilized in secret in the summer of 1939. Full mobilization was to be declared in late August, but was halted at French insistence. Mobilization was again declared on August 30, but halted due to French threats to withhold assistance, and then re-issued the following day. As a result, only about a third of Polish forces were equipped and in place on September 1.

On August 31, operational Polish air units were dispersed to secret airfields. The navy's three most modern destroyers executed Operation Peking and slipped out of the Baltic Sea to join the Royal Navy. Polish submarines dispersed to commence minelaying operations.

As Hitler gathered his generals, he ordered them to "kill without pity or mercy all men, women, and children of Polish descent or language... only in this way can we achieve the living space we need." Mobile killing squads, the Einsatzgruppen, would follow the main body of troops, shooting POWs and any Poles who might organize resistance. On the night of August 31, Nazi agents staged a mock Polish attack on a German radio station in Silesia, dressing concentration camp prisoners in Polish uniforms and then shooting them. Hitler declared that Germany would respond to "Polish aggression."

The invasion began at 4.45 A.M. on September 1, 1939. The battleship Schleswig-Holstein was moored at the port of the Free City of Danzig on a "courtesy visit" near the Polish military transit station of Westerplatte. The station was on a sandy, narrow peninsula in the harbor, garrisoned by a small force of 182 men. At quarter to five, the giant guns of the battleship opened up on the Polish outpost at point-blank range. As dawn broke, Danzig SS men advanced on Westerplatte expecting to find only the pulverized remains of the Polish garrison. Instead, they found the defenders very much alive. In moments the German attack was cut to pieces. Further attacks followed. Polish defenders dueled the mighty battleship with a small field gun. At the Polish Post Office in Danzig, postal workers and Polish boy scouts held off Nazi forces for most of the day before surrendering. The post office defenders were summarily executed.
A similar fate awaited Polish railway workers south of the city after they foiled an attempt to use an armored train to seize a bridge over the Vistula.

Battle for the Borders

German forces and their Danzig and Slovak allies attacked Poland across most sectors of the border. In the north, they attacked the Polish Corridor. In southern and central Poland, Nazi armored spearheads attacked toward Łódź and Kraków. In the skies, German planes commenced terror bombing of cities and villages. Nazi armies massacred civilians and used women and children as human shields. Everywhere were scenes of savage fighting and unbelievable carnage. Polish forces defending the borders gave a good account of themselves. At Mokra, near Częstochowa, the Nazi 4th Panzer Division attacked two regiments of the Wolynska Cavalry Brigade. The Polish defenders drew the Germans into a tank trap and destroyed over 50 tanks and armored cars.

The battle in the Polish Corridor was especially intense. It was here that the myth of the Polish cavalry charging German tanks was born. As Gen. Heinz Guderian's panzer and motorized forces pressed the weaker Polish forces back, a unit of the Pomorska Cavalry Brigade slipped through German lines late in the day on September 1 in an effort to counterattack and slow the German advance. The unit happened on a German infantry battalion making camp. The Polish cavalry mounted a saber charge, sending the Germans fleeing. At that moment, a group of German armored cars arrived on the scene and opened fire on the cavalry, killing several troopers and forcing the rest to retreat. Nazi propagandists made this into "cavalry charging tanks" and even made a movie to embellish their claims. While historians remembered the propaganda, they forgot that on September 1, Gen. Guderian had to personally intervene to stop the German 20th motorized division from retreating under what it described as "intense cavalry pressure." This pressure was being applied by the Polish 18th Lancer Regiment, a unit one-tenth its size.

Where the Poles were in position, they usually got the better of the fight, but due to the delay in mobilization, their forces were too few to defend all sectors. The effectiveness of German mechanized forces lay in their ability to bypass Polish strong points, cutting them off and isolating them. By September 3, although the country was cheered by the news that France and Britain had declared war on Germany, the Poles were unable to contain the Nazi breakthroughs. Army Łódź, despite furious resistance, was pushed back and lost contact with its neighboring armies. German tanks drove through the gap directly toward Warsaw. In the Polish Corridor, Polish forces tried to stage a fighting withdrawal but suffered heavy losses to German tanks and dive bombers. In the air, the outnumbered Polish fighter command fought with skill and courage, especially around Warsaw. Nevertheless, Nazi aircraft systematically targeted Polish civilians, especially refugees. Bombing and shelling sent tens of thousands of people fleeing for their lives, crowding the roads and hindering military traffic.

Editor's addition: Realizing that escaping civilians crowded important transportation routes and disrupted Polish military movement, the Germans began to broadcast fake Polish news programs that either falsely reported the positions of German armies or encouraged civilians of certain areas to evacuate.
With both methods, the Germans were able to exploit the fear of the Polish civilians and render Polish transportation systems nearly useless.

The effects of the Poles' lack of mobility and the fateful decision to position forces closer to the border now began to tell. On September 5, the Polish High Command, fearing Warsaw was threatened, decided to relocate to southeastern Poland. This proved a huge mistake, as the commanders soon lost contact with their major field armies. Warsaw itself was thrown into panic at the news.

Although the situation was grim, it was not yet hopeless. Following the High Command's departure, the mayor of Warsaw, Stefan Starzyński, and General Walerian Czuma rallied the city's defenders. Citizen volunteers built barricades and trenches. An initial German attack on the city's outskirts was repulsed.

The fast German advance took little account of Army Poznań, under the command of Gen. Kutrzeba, which had been bypassed in the Nazis' quick drive toward Warsaw. On September 8-9, Army Poznań counterattacked from the north against the flank of the German forces moving on Warsaw. The Nazi advance halted in the face of the initial Polish success on the River Bzura. The Nazis' superiority in tanks and aircraft, however, allowed them to regroup and stop Army Poznań's southward push. The counterattack turned into a battle of encirclement. Although some forces managed to escape to Warsaw, by September 13 the Battle of Bzura was over and the Polish forces there destroyed. The delay, however, had allowed Warsaw to marshal its defenses, turning the perimeter of the city into a series of makeshift forts. In the south, German forces had captured Kraków early in the campaign, but their advance slowed as they approached Lwów. The defenders of Westerplatte had surrendered after seven days of fighting against overwhelming odds, but the city of Gdynia and the Hel Peninsula still held as Polish coastal batteries kept German warships at bay.

By the middle of September, Polish losses had been severe and the German advance had captured half of the country. The high command's fateful decision to leave Warsaw had resulted in more than a week of confusion, rescued only by the courage of Army Poznań's doomed counterattack. By the middle of September, however, Polish defenses were stiffening. Local commanders and army-level generals now directed defenses around the key bastions of Warsaw, the Seacoast, and Lwów. German losses began to rise (reaching their peak during the third week of the campaign). Small Polish units isolated by the rapid advance regrouped and struck at vulnerable rear-area forces.

This thin ray of hope, however, was extinguished on September 17 when Red Army forces crossed Poland's eastern border as Stalin moved to assist his Nazi ally and to seize his share of Polish territory. Nearly all Polish troops had been withdrawn from the eastern border to fight the Nazi onslaught. Only a few units of the Border Defense Corps aided by local volunteers stood in the way of Stalin's might. Although often outnumbered 100 to 1, these forces refused to surrender.

One such force, commanded by Lt. Jan Bolbot, was attacked by tens of thousands of Red Army troops in their bunkers near Sarny. Bolbot's surrounded men mowed down thousands of Soviet attackers who advanced in human waves. Finally, communist forces piled debris around the bunkers and set them on fire.
Lt. Bolbot, who remained in telephone contact with his commander, reported that the neighboring bunker had been breached and he could see hand-to-hand fighting there. He told his commander that his own bunker was on fire and filling with thick smoke, but all his men were still at their posts and shooting back. Then the line went dead. The entire Sarny garrison fought to the last man. Bolbot was posthumously awarded the Virtuti Militari, Poland's highest military decoration.

Polish defenses in the southeast fell apart as formations were ordered to fall back across the relatively friendly Rumanian and Hungarian borders to avoid capture. Fighting raged around Warsaw, the fortress of Modlin, and on the seacoast. On September 28, Warsaw capitulated. Polish forces on the Hel Peninsula staved off surrender until October 1. In the marshes of east central Poland, Group Polesie continued to mount effective resistance until October 5. When this final organized force gave up, its ammunition was gone and its active duty soldiers were outnumbered by the prisoners it had taken.

Throughout the first two and a half weeks of September 1939, Germany threw its entire air force, all of its panzer forces, and all of its frontline infantry and artillery against Poland. Its border with France was held by a relatively thin force of second- and third-string divisions. The French army, from its secure base behind the Maginot Line, had overwhelming superiority in men, tanks, aircraft, and artillery. A concerted push into western Germany would have been a disaster for Hitler. Yet the French stood aside and did nothing. The British were equally inactive, sending their bombers to drop propaganda leaflets over a few German cities. Had the Allies acted, the bloodiest and most terrible war in human history could have been averted.

Managing Editor C. Peter Chen's Addition

The Western Betrayal

Since Britain and France had given Germany a free hand in annexing Czechoslovakia, some people of Central and Eastern Europe came to distrust the democratic nations of Western Europe. They used the word "betrayal" to describe their western allies, who failed to fulfill their treaty responsibilities to stand by the countries they swore to protect. Britain and France's lack of initial response to the German invasion convinced them that their western allies had indeed betrayed them.

Britain simply did not wish to give up the notion that Germany could be courted as a powerful ally. After a note was sent from London to Berlin regarding the invasion of her ally, Lord Halifax followed up by sending British Ambassador in Berlin Nevile Henderson a message stating that the note was "in the nature of a warning and is not to be considered as an ultimatum." Deep in its pacifist fantasies, Britain did not consider the violation of her ally's borders a valid cause for war. France's response to the invasion was similar, expressing a willingness to negotiate though refusing to set any deadline for a German response. At 1930 London time on 1 Sep 1939, the British parliament gathered for a statement from Prime Minister Neville Chamberlain, expecting a declaration of war as dictated by the terms of the pact between Britain and Poland, or minimally the announcement of an ultimatum for Berlin. Instead, Chamberlain noted that Hitler was a busy man and might not have had time to review the note from London yet.
When he sat down after his speech, there were no cheers; even a parliament characterized by its support for appeasement was stunned by Chamberlain's lack of action.

As Britain and France idled, the German Luftwaffe bombed Polish cities. The two governments submitted messages to Berlin noting that if German troops were withdrawn, they were willing to forget the whole ordeal and return things to the status quo. It was a clear violation of the military pacts that they had signed with Poland. Finally, on 3 Sep, after thousands of Polish military and civilian personnel had already perished, Britain declared war on Germany at 1115. France followed suit at 1700 on the same day. Even after they had declared war, however, the sentiment did not stray far from that of appeasement. The two western Allies remained mostly idle. While Poland desperately requested the French Army to advance into Germany to tie down German divisions and requested Britain to bomb German industrial centers, Britain and especially France did nothing, in fear of German reprisals. In one of the biggest "what-if" scenarios of WW2, even Wilhelm Keitel noted that had France reacted by conducting a full-scale invasion of Germany, Germany would have fallen immediately. "We soldiers always expected an attack by France during the Polish campaign, and were very surprised that nothing happened.... A French attack would have encountered only a German military screen, not a real defense", he said. The invasion was not mounted; instead, token advances were made under the order of Maurice Gamelin of France, in which a few divisions marched into Saarbrücken and then immediately withdrew. The minor French expedition was embellished in Gamelin's communique as an invasion, falsely giving the impression that France was fully committed and was meeting stiff German resistance. While the Polish embassy in London reported several times that Polish civilians were being targeted by German aerial attacks, Britain continued to insist that the German military had been attacking only military targets.

Source: The Last Lion

Occupation and Escape

Both German and Soviet occupations began with murder and brutality. Many prisoners of war were executed on the spot or later during the war. Countless civilians were also shot or sent to concentration camps, including political leaders, clergy, boy scouts, professors, teachers, government officials, doctors, and professional athletes. Among them was Mayor Starzyński of Warsaw, who had rallied his city to resist the Nazi onslaught. In the German sector, Jews were singled out for special brutality.

Many small army units continued to fight from remote forests. Among the most famous was the legendary "Major Hubal," the pseudonym of Major Henryk Dobrzański. Major Hubal and his band of 70-100 men waged unrelenting guerilla warfare on both occupiers until they were cornered by German forces in April 1940 and wiped out. Hubal's body was burned by the Germans and buried in secret so he would not become a martyr, but others soon took his place.

POWs captured by the Germans were to be sent to labor and prison camps. Many soldiers escaped and disappeared into the local population. Those who remained in German custody were frequently abused, used for slave labor, or shot. POWs captured by the Soviets suffered an even worse fate. Officers were separated from the enlisted men, and an estimated 22,000 were massacred by the Soviets. Enlisted men were often sent to Siberian gulags, where many died.
Large numbers of Polish soldiers had fled into neighboring Hungary and Rumania, where they were interned. While both countries were officially allied to Germany, both had strong sympathy for the Poles. This was especially true in Hungary. Polish soldiers began to disappear from internment camps as bribable or sympathetic guards and officials pretended to look the other way. Individually and in small groups, they made their way to France and Britain. German diplomats raged at their Hungarian and Rumanian counterparts, but officials in neither country had much interest in enforcing Berlin's decrees. As a result, within months a new Polish army had begun to form in the West.

Sources:
John Radzilowski, Traveller's History of Poland (2006)
E. Kozlowski, Wojna Obronna Polski, 1939 (1979)
Jan Gross, Revolution from Abroad (1988)

Invasion of Poland Timeline

3 Apr 1939: Adolf Hitler, on his own authority, ordered the armed forces to prepare "Case White" for the invasion and occupation of the whole of Poland later in the summer.
7 May 1939: German Generals Rundstedt, Manstein, and other General Staff members presented to Hitler an invasion plan for Danzig and Poland.
15 Jun 1939: The German Army presented a plan to Adolf Hitler for the invasion of Poland, with much of the strategy focusing on concentrated surprise attacks to quickly eliminate Polish opposition.
10 Aug 1939: Reinhard Heydrich ordered SS Officer Alfred Naujocks to fake an attack on a radio station near Gleiwitz, Germany, which was on the border with Poland. "Practical proof is needed for these attacks of the Poles for the foreign press as well as German propaganda", said Heydrich, according to Naujocks.
14 Aug 1939: Adolf Hitler announced to his top military commanders that Germany was to enter a war with Poland at the end of Aug 1939, and that the United Kingdom and France would not enter the fray, especially if Poland could be decisively wiped out in a week or two.
17 Aug 1939: The German military was ordered to supply the SS organization with 150 Polish Army uniforms.
22 Aug 1939: With a non-aggression pact nearly secured with the Soviet Union, German leader Adolf Hitler ordered the Polish invasion to commence on 26 Aug 1939. He told his top military commanders to be brutal and show no compassion in the upcoming war.
24 Aug 1939: In Berlin, Germany, journalist William Shirer noted in his diary "it looks like war" based on his observations throughout the day.
25 Aug 1939: In the morning, Adolf Hitler sent a message to Benito Mussolini, noting that the reason Italy was not informed of the Molotov-Ribbentrop Pact was that Hitler had not imagined the negotiations would conclude so quickly. He also revealed to him that war was to commence soon, but failed to let him know that the planned invasion date was the following day. Later on the same day, however, Hitler hesitated in the face of the Anglo-Polish mutual defense agreement; he would quickly decide to postpone the invasion date. Meanwhile, in Berlin, Germany, journalist William Shirer noted in his diary that war seemed imminent.
26 Aug 1939: Some German units ordered to lead the invasion of Poland, originally planned for this date, did not receive the message that the invasion had been postponed the previous evening and crossed the borders, attacking Polish defenses with rifles, machine guns, and grenades; they would be withdrawn back into Germany within hours.
Because Poland had experienced so much German provocation in the past few days, Polish leadership brushed off the attacks as another series of provocations, despite having reports that the attackers wore regular German uniforms. In the late afternoon, Adolf Hitler set the new invasion date at 1 Sep 1939.|
|28 Aug 1939||Citizens in Berlin, Germany observed troops moving toward the east.|
|29 Aug 1939||Adolf Hitler summoned the three leading representatives of the German armed forces, Walther von Brauchitsch, Hermann Göring, and Erich Raeder together with senior Army commanders to his mountain villa at Obersalzberg in southern Germany, where he announced the details of the recently-signed Soviet-German non-aggression pact, the plan to isolate and destroy Poland, and the formation of a buffer state in conquered Poland against the Soviet Union.|
|31 Aug 1939||The formal order for the German invasion of Poland was given; specific instructions were made for German troops on the western border to avoid conflict with the United Kingdom, France, and the Low Countries.|
|1 Sep 1939||German Foreign Minister Joachim von Ribbentrop warned Adolf Hitler that the invasion of Poland would compel France to fight. Hitler (exceptionally irritable, bitter and sharp with anyone advising caution) replied: "I have at last decided to do without the opinions of people who have misinformed me on a dozen occasions... I shall rely on my own judgement."|
|1 Sep 1939||Using the staged Gleiwitz radio station attack as an excuse, Germany declared war on Poland. Meanwhile, the radio station in Minsk, Byelorussia increased the frequency of station identification and extended its playing time in an attempt to help German aviators navigate. Among the opening acts of the European War, the German Luftwaffe bombed the town of Wieluń in Poland, causing 1,200 civilian casualties.|
|2 Sep 1939||During the day, British Prime Minister Neville Chamberlain and French Prime Minister Édouard Daladier issued a joint ultimatum to Germany, demanding the withdrawal of troops from Poland within 12 hours. During the late hours of the night, Chamberlain attempted to convince Daladier to carry out the threat from the earlier ultimatum by declaring war on Germany early the next morning.|
|3 Sep 1939||At 0900 hours, British Ambassador in Germany Nevile Henderson delivered the British declaration of war to German Foreign Minister Joachim von Ribbentrop, effective at 1100 hours; British Commonwealth nations of New Zealand and Australia followed suit. France would also declare war later on this day, effective at 1700 hours. In the afternoon, Adolf Hitler issued an order to his generals, again stressing that German troops must not attack British and French positions. Finally, Hitler also sent a message to the Soviet Union, asking the Soviets to jointly invade Poland.|
|3 Sep 1939||At 1115 hours, British Prime Minister Neville Chamberlain announced over radio that because Germany had failed to withdraw troops from Poland by 1100 hours, a state of war now existed between the United Kingdom and Germany.|
|5 Sep 1939||German Army units crossed the Vistula River in Poland.
Meanwhile, Soviet Foreign Minister Vyacheslav Molotov responded in the affirmative to the German invitation to jointly invade Poland, but noted that the Soviet forces would need several days to prepare; he also warned the Germans not to cross the previously agreed upon line separating German and Soviet spheres of influence.|
|6 Sep 1939||German troops captured the Upper Silesian industrial area in Poland.|
|7 Sep 1939||German troops captured Kraków, Poland.|
|7 Sep 1939||In western Poland, the German 30th Infantry Division crossed the Warta River (German: Warthe) on bridges erected by German engineers.|
|7 Sep 1939||The city of Lodz, Poland was captured by the German 8th Army after the Lodz Army failed to halt their advance.|
|8 Sep 1939||German troops neared the suburbs of Warsaw, and the Polish government evacuated to Lublin.|
|8 Sep 1939||Polish defenders at Westerplatte, Danzig surrendered.|
|9 Sep 1939||Battle of the Bzura, also known as Battle of Kutno to the Germans, began; it was to become the largest battle of the Polish campaign. Elsewhere, German forces captured Lodz and Radom. South of Radom, Stuka dive-bombers of Colonel Gunter Schwarzkopff's St.G.77 finished off the great Polish attempt to cross the Vistula River, crushing the last pockets of resistance in conjunction with tanks; "Wherever they went", reported one Stuka pilot after the action, "we came across throngs of Polish troops, against which our 110-lb fragmentation bombs were deadly. After that we went almost down to the deck firing our machine guns. The confusion was indescribable." At Warsaw, German attempts to enter the city were repulsed. In Moscow, Russia, Soviet Foreign Minister Vyacheslav Molotov informed the German ambassador that Soviet forces would be ready to attack Poland within a few days.|
|10 Sep 1939||German troops made a breakthrough near Kutno and Sandomierz in Poland.|
|13 Sep 1939||The 60,000 survivors in the Radom Pocket in Poland surrendered.|
|15 Sep 1939||German troops captured Gdynia, Poland. Meanwhile, Polish troops failed to break out of the Kutno Pocket. With Warsaw surrounded by German troops, the Polish Army was ordered to the Romanian border to hold out until the Allies arrived; the Romanian government offered asylum to all Polish civilians who could make it across the border; Polish military personnel who crossed the border, however, would be interned. In Berlin, Germany, German Foreign Minister Joachim von Ribbentrop asked the Soviet Union for a definite date and time when Soviet forces would attack Poland.|
|16 Sep 1939||Polish troops counterattacked, destroying 22 tanks of the Leibstandarte SS "Adolf Hitler" regiment. Elsewhere in Poland, German troops captured Brest-Litovsk (now in Belarus). In Moscow, Russia, Soviet Foreign Minister Vyacheslav Molotov proposed that the Soviet Union would enter the war with the stated reason of protecting Ukrainians and Byelorussians; Germany complained that this singled out Germany as the lone aggressor.|
|17 Sep 1939||In Poland, German troops captured Kutno west of Warsaw. East of Warsaw, Heinz Guderian's XIX Panzerkorps of Army Group North made contact with XXII Panzerkorps of Army Group South, just to the south of Brest-Litovsk; virtually the whole Polish Army (or what remained of it) was now trapped within a gigantic double pincer.
In Russia, Joseph Stalin declared that the government of Poland no longer existed, thus all treaties between the two states were no longer valid; Soviet troops poured across the border to join Germany in the invasion, ostensibly to protect Ukrainian and Byelorussian interests from potential German aggression.|
|18 Sep 1939||A Soviet-German joint victory parade was held in Brest-Litovsk in Eastern Poland (now in Belarus).|
|19 Sep 1939||West of Warsaw, Poland, at the bend of the Vistula River, German troops imprisoned 170,000 Polish troops as they surrendered.|
|20 Sep 1939||German General Johannes Blaskowitz noted in his order of the day that, at the Battle of the Bzura in Poland, also known as Battle of Kutno to the Germans, his troops were fighting "in one of the biggest and most destructive battles of all times." Elsewhere, German troops withdrew to the agreed demarcation line in Poland, with Soviet forces moving in behind them. Finally, also on this day, the remaining Polish garrison in Grodno managed to kill 800 Soviet troops and destroy at least 10 tanks.|
|21 Sep 1939||60,000 survivors of the Polish Southern Army surrendered at Tomaszov and Zamosz, Poland.|
|22 Sep 1939||Battle of the Bzura, also known as Battle of Kutno to the Germans, ended in Polish defeat; it was the largest battle of the Polish campaign, during which more than 18,000 Polish troops and about 8,000 German troops were killed. At Lvov, over 210,000 Poles surrendered to the Soviets, but at the Battle of Kodziowce the Soviets suffered heavy casualties. Also on this day, the Soviet NKVD began gathering Polish officers for deportation.|
|22 Sep 1939||Following the Battle of Bzura, Polish General Tadeusz Kutrzeba arrived in Warsaw, Poland where he was briefly appointed as the Deputy Commander of the Warsaw Army. However, his valiant efforts proved futile. The commander of the Warsaw Army, Juliusz Rómmel, could see the writing on the wall and implored his colleague to begin surrender talks with the Germans. Kutrzeba, later captured by the Germans, spent the rest of the war in various prisoner of war camps until he was liberated by the Americans in Apr 1945.|
|23 Sep 1939||German Foreign Minister Joachim von Ribbentrop expressed approval for the Soviet proposal on the partition of Poland. Meanwhile, at Krasnobrod, Poland, three squadrons of the Nowgrodek Cavalry Brigade attacked and surprised the German 8th Infantry Division, which had entrenched on a hill. The Germans made a disorderly retreat to a nearby town, hotly pursued by the Polish cavalry. Despite heavy losses from machine-gun fire the Poles secured the town, capturing the German divisional headquarters including General Rudolf Koch-Erpach and about 100 other German soldiers. In addition, forty Polish prisoners were freed. During the action Lieutenant Tadeusz Gerlecki, commanding the second squadron, defeated a German cavalry unit - one of the last battles in military history between opposing cavalry.|
|25 Sep 1939||Warsaw, Poland suffered heavy Luftwaffe bombing and artillery bombardment as Adolf Hitler arrived to observe the attack. To the east, Soviet troops captured Bialystok, Poland.
Meanwhile, Joseph Stalin proposed to the Germans that the Soviet Union would take Lithuania, which was previously within the German sphere of influence; in exchange, the Soviets would give the portions of Poland near Warsaw which were previously within the Soviet sphere of influence but had already been overrun by German troops.|
|27 Sep 1939||Warsaw, Poland fell to the Germans after two weeks of siege. Near Grabowiec, Soviets executed 150 Polish policemen.|
|28 Sep 1939||At Brest-Litovsk, Poland (now Belarus), Germans and Soviets signed the agreement denoting their common border in Poland.|
|29 Sep 1939||With the formal surrender of Poland, including the last 35,000 besieged troops in Modlin, Germany and the Soviet Union finished dividing up Poland.|
|6 Oct 1939||The final Polish forces surrendered near Kock and Lublin after fighting both Germans and Soviets.|
|10 Oct 1939||Adolf Hitler announced the victorious end to the Polish campaign and called on France and England to end hostilities, a call that was ignored by both governments.|
|30 Oct 1939||An act was signed in Moscow, Russia which formally annexed occupied Polish territories.|
Solving Quadratic Equations By Factoring Worksheet: Tips And Tricks

As a math student, solving quadratic equations by factoring may be a challenging task for you. However, with the right resources and techniques, you can master this concept and excel in your math class. In this article, we will provide you with a comprehensive guide on how to solve quadratic equations by factoring using a worksheet.

What is a Quadratic Equation? A quadratic equation is a type of polynomial equation that contains a variable of degree 2. In other words, it is an equation that involves a variable raised to the power of two, such as x^2. The standard form of a quadratic equation is ax^2 + bx + c = 0, where a, b, and c are constants and x is the variable.

How to Solve Quadratic Equations by Factoring? Factoring is a method of solving quadratic equations by finding two binomials whose product equals the quadratic expression. To solve a quadratic equation by factoring, follow these steps:
- Move all terms to one side of the equation, so that the equation is in standard form: ax^2 + bx + c = 0.
- Factor the quadratic expression into two binomials.
- Set each binomial equal to zero and solve for x.
- Check your answers by plugging them back into the original equation.

Why Use a Solving Quadratic Equations by Factoring Worksheet? A Solving Quadratic Equations by Factoring worksheet is an excellent resource for students to practice their skills and improve their understanding of the concept. Worksheets provide students with a structured approach to solving quadratic equations by factoring and allow them to work through problems at their own pace. Worksheets also provide students with immediate feedback on their progress, allowing them to identify areas where they need to improve.

- Q: What is the difference between factoring and solving quadratic equations? - A: Factoring is one method of solving quadratic equations; solving a quadratic equation means finding the values of x that make the equation true, whether by factoring or by another technique such as the quadratic formula.
- Q: What are the benefits of factoring quadratic equations? - A: Factoring quadratic equations allows you to solve for the values of x quickly and efficiently. It also helps you understand the relationship between the factors and the quadratic expression.
- Q: What are the common mistakes students make when solving quadratic equations by factoring? - A: Common mistakes include forgetting to move all terms to one side of the equation, making errors in factoring, and forgetting to check the solutions.
- Q: How can I improve my factoring skills? - A: Practice is key to improving your factoring skills. Work through as many problems as possible, and use resources such as worksheets, textbooks, and online tutorials to improve your understanding of the concept.
- Q: What are some real-life applications of quadratic equations? - A: Quadratic equations are used in many fields, including physics, engineering, finance, and computer science. They are used to model a wide range of phenomena, from the motion of projectiles to the growth of populations.
- Q: What is the quadratic formula? - A: The quadratic formula is a formula that provides the solutions to any quadratic equation. It is given by: x = (-b ± sqrt(b^2 - 4ac)) / (2a).
- Q: How do I know when to use factoring or the quadratic formula? - A: You can use factoring to solve quadratic equations that can be factored into two binomials. If the equation cannot be factored, or if factoring is too difficult, you can use the quadratic formula to solve for the values of x.
- Q: What are some common mistakes students make when using the quadratic formula? - A: Common mistakes include forgetting to use the negative sign in the formula, making errors in the calculation, and forgetting to simplify the solution.

Pros of Solving Quadratic Equations by Factoring Worksheet

There are several benefits to using a Solving Quadratic Equations by Factoring worksheet, including:
- Allows you to practice your factoring skills
- Provides immediate feedback on your progress
- Helps you identify areas where you need to improve
- Allows you to work through problems at your own pace
- Prepares you for exams and quizzes

Tips for Solving Quadratic Equations by Factoring

Here are some tips to help you improve your factoring skills:
- Practice, practice, practice
- Use resources such as worksheets, textbooks, and online tutorials
- Make sure you understand the concept of factoring before attempting to solve quadratic equations
- When factoring, look for common factors and use the distributive property
- Double-check your work to avoid making mistakes

Solving quadratic equations by factoring may seem intimidating at first, but with practice and the right resources, you can master this concept and excel in your math class. Using a Solving Quadratic Equations by Factoring worksheet is an excellent way to improve your skills and prepare for exams and quizzes. Remember to always double-check your work and seek help when needed. A short worked sketch of the factoring steps described above follows below.
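To make the factoring steps above concrete, here is a minimal Python sketch. It is illustrative only and is not part of the worksheet: the helper names and the example equation x^2 + 5x + 6 = 0 are invented for the demonstration, and the quadratic-formula function mirrors the FAQ's advice about falling back on the formula when factoring fails.

```python
# A minimal sketch, not taken from the worksheet, of factoring a monic quadratic
# x^2 + bx + c as (x + m)(x + n), plus a quadratic-formula cross-check.
import math

def solve_by_factoring(b, c):
    """Find integers m, n with m*n = c and m + n = b; return roots -m and -n, or None."""
    for m in range(-abs(c) - 1, abs(c) + 2):
        if m != 0 and c % m == 0 and m + c // m == b:
            return (-m, -(c // m))
    if c == 0:
        return (0, -b)          # x^2 + bx = x(x + b)
    return None                  # no integer factorization found; use the formula instead

def quadratic_formula(a, b, c):
    """Return the real roots of ax^2 + bx + c = 0, or None if the discriminant is negative."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    return ((-b + math.sqrt(disc)) / (2 * a), (-b - math.sqrt(disc)) / (2 * a))

if __name__ == "__main__":
    # Example: x^2 + 5x + 6 = 0 factors as (x + 2)(x + 3), so x = -2 or x = -3.
    print(solve_by_factoring(5, 6))     # (-2, -3)
    print(quadratic_formula(1, 5, 6))   # (-2.0, -3.0)
```

Running the script prints the same pair of roots from both methods, which is exactly the "check your answers" step the worksheet procedure asks for.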
[Figure: (Right) Graphical representation of the nuclear reactor showing the core (pink cylinder) and the position of the detector inside the tendon gallery (yellow box), 24 meters from the core. (Left) Setting up of the detector. Credit: Institute for Basic Science]

Dubbed "ghost particles," neutrinos have no electric charge and their masses are so tiny that they are difficult to observe. The sun, nuclear reactors, and supernova explosions all create them, for example when atomic nuclei undergo the form of radioactive decay known as beta decay. The Center for Underground Physics within the Institute for Basic Science (IBS) led the Neutrino Experiment for Oscillation at Short Baseline (NEOS) to study the most elusive neutrinos, the so-called "sterile neutrinos". Their results are now available in the journal Physical Review Letters.

Neutrinos detected up to now come in three types, or flavors: electron neutrino, muon neutrino, and tau neutrino. Neutrinos can change from one type to another through a phenomenon called neutrino oscillation. Interestingly, previous experiments measured these oscillations and found an anomaly in the data: the number of measured neutrinos is around 7% lower than the predicted value. Researchers have proposed that these disappearing neutrinos transform into a fourth type of neutrino, the sterile neutrino.

The experiment took place inside the Hanbit Nuclear Power Plant in Yeonggwang (South Korea), a standard nuclear reactor that is expected to produce 5 × 10^20 neutrinos per second as by-products of the reaction that generates nuclear energy. Firstly, the scientists had to overcome the problem of background signals present in the atmosphere that could hinder the neutrino detection. One solution was to install the detector underground, as close as possible to the core of the reactor, where the beta decay reaction is taking place. In this case, the neutrino detector was installed 24 meters from the core, in a structure called the tendon gallery. The detector was protected by several layers of lead blocks, which shield the detector from gamma rays, and of borated polyethylene to block neutrons.

[Figure: (a) Data collected from the NEOS experiment are compared with a theoretical model (H-M-V) and a previous experiment (Daya Bay) conducted in China. Experiment and theory match at most energies, but there are some differences between the expected and measured results at energies between 4 and 6 MeV. (b) In particular, a peak at 5 MeV, dubbed "the 5 MeV bump", which was measured in the NEOS experiment but not predicted by the theoretical model, is still unexplained. (c) The same peak is present in the data from the Daya Bay experiment. Credit: Institute for Basic Science]

Scientists measured the number of electron neutrinos using a detector that contains a liquid scintillator, a material that produces a light signal when a neutrino interacts with it. They then compared their results with data obtained from other experiments and theoretical calculations. In some cases the NEOS results agreed with the previous data, but in other cases they differed. For example, the data show that there is an unexplained abundance of neutrinos with an energy of 5 MeV (mega-electron volts), dubbed "the 5 MeV bump", much higher than the one predicted from theoretical models. The experiment succeeded in measuring electron neutrinos with great precision and low background signals. However, sterile neutrinos were not detected and remain some of the most mysterious particles of our Universe.
The results also show that it is necessary to set new limits for the detection of sterile neutrinos, since the oscillations that convert electron neutrinos into sterile neutrinos are probably smaller than previously suggested. "These results do not mean that sterile neutrinos do not exist, but that they are more challenging to find than what was previously thought," explains OH Yoomin, one of the authors of this study.

Reference: Y.J. Ko, B.R. Kim, J.Y. Kim, B.Y. Han, C.H. Jang, E.J. Jeon, K.K. Joo, H.J. Kim, H.S. Kim, Y.D. Kim, Jaison Lee, J.Y. Lee, M.H. Lee, Y.M. Oh, H.K. Park, H.S. Park, K.S. Park, K.M. Seo, Kim Siyeon, and G.M. Sun. "Sterile Neutrino Search at the NEOS Experiment." Physical Review Letters, March 21, 2017. DOI: 10.1103/PhysRevLett.118.121802

Source: Institute for Basic Science
What Do People Know About Radon? Students complete and discuss a radon survey. They calculate the average for each question based on the responses, then graph the responses and analyze the information. A small sketch of this average-and-graph step follows below.
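The averaging-and-graphing step described in the lesson can be sketched in a few lines of Python. The survey questions and the numeric responses below are made up purely for illustration; the actual radon survey items are not given in the resource description, and the plotting step is optional.

```python
# A minimal sketch of the lesson's "average each question, then graph" step.
# The questions and the 1-5 response values are invented for illustration only.
questions = ["Radon is a gas", "Radon can cause lung cancer", "My home has been tested"]
responses = [  # one row per student, one column per question (1 = disagree ... 5 = agree)
    [5, 4, 1],
    [4, 5, 2],
    [3, 3, 1],
    [5, 4, 1],
]

averages = [sum(col) / len(col) for col in zip(*responses)]
for question, avg in zip(questions, averages):
    print(f"{question}: average response {avg:.2f}")

# A bar graph of the averages (requires matplotlib; the averages above are the main result).
try:
    import matplotlib.pyplot as plt
    plt.bar(questions, averages)
    plt.ylabel("Average response")
    plt.title("What Do People Know About Radon?")
    plt.show()
except ImportError:
    pass
```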
Life expectancy is a statistical measure of how long an organism may live, based on the year of their birth, their current age and other demographic factors including sex. At a given age, life expectancy is the average number of years that is likely to be lived by a group of individuals (of age x) exposed to the same mortality conditions until they die. The most commonly used measure of life expectancy is life expectancy at age zero, that is, at birth (LEB), which can be defined in two ways: cohort LEB is the mean length of life of an actual birth cohort (all individuals born in a given year) and can be computed only for cohorts that were born many decades ago, so that all their members have died; period LEB is the mean length of life of a hypothetical cohort assumed to be exposed, from birth until the death of all its members, to the mortality rates observed in a given year. National LEB figures reported by national statistical agencies and international organizations are in fact estimates of period LEB.

In the Bronze and Iron Age LEB was 26 years; the 2010 world LEB was 67.2. For recent years, LEB in Swaziland is about 49 years while in Japan it is about 83 years. The combination of high infant mortality and deaths in young adulthood from accidents, epidemics, plagues, wars, and childbirth, particularly before modern medicine was widely available, significantly lowers LEB. But for those who survive early hazards, a life expectancy of sixty or seventy would not be uncommon. For example, a society with a LEB of 40 may have few people dying at exactly age 40: most will die before 30 years of age or after 55. In countries with high infant mortality rates, LEB is highly sensitive to the rate of death in the first few years of life. Because of this sensitivity to infant mortality, LEB can be subjected to gross misinterpretation, leading one to believe that a population with a low LEB will necessarily have a small proportion of older people. For example, in a hypothetical stationary population in which half the population dies before the age of five, but everybody else dies at exactly 70 years old, LEB will be about 36 years, while about 25% of the population will be between the ages of 50 and 70. Another measure, such as life expectancy at age 5 (e5), can be used to exclude the effect of infant mortality to provide a simple measure of overall mortality rates other than in early childhood; in the hypothetical population above, life expectancy at age 5 would be another 65 years. Aggregate population measures, such as the proportion of the population in various age groups, should also be used alongside individual-based measures like formal life expectancy when analyzing population structure and dynamics. Mathematically, life expectancy is the mean number of years of life remaining at a given age, assuming constant mortality rates. It is denoted by e_x,[a] which means the average number of subsequent years of life for someone now aged x, according to a particular mortality experience.

Longevity and life expectancy are not synonyms. Life expectancy is defined statistically as the average number of years remaining for an individual or a group of people at a given age. Longevity refers to the characteristics of the relatively long life span of some members of a population. Moreover, because life expectancy is an average, a particular person may well die many years before or many years after their "expected" survival. The term "maximum life span" has a quite different meaning and is more related to longevity.
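The hypothetical stationary population above is easy to check numerically. The short Python sketch below is not part of the original text; it assumes, for illustration, that the early deaths are spread evenly between ages 0 and 5 (so they occur at 2.5 years on average), which reproduces the quoted figures of roughly 36 years for LEB, 65 further years for e5, and on the order of a quarter of the population between ages 50 and 70.

```python
# A small numerical check of the hypothetical stationary population described above:
# half of newborns die before age 5 (assumed here to die uniformly between 0 and 5),
# everyone else dies at exactly 70. Assumptions are illustrative, not from the source.
def check_stationary_population():
    frac_early, mean_age_early = 0.5, 2.5   # half die young, at ~2.5 years on average
    frac_late, age_late = 0.5, 70.0         # the rest die at exactly 70

    leb = frac_early * mean_age_early + frac_late * age_late
    e5 = age_late - 5                       # survivors of age 5 all live to 70

    # In a stationary population, the share of people aged 50-70 equals person-years
    # lived in that age band divided by total person-years per birth. The exact share
    # depends on exactly when the early deaths occur, so it only roughly matches 25%.
    share_50_70 = (frac_late * (70 - 50)) / leb

    print(f"Life expectancy at birth  ~ {leb:.1f} years (text says about 36)")
    print(f"Life expectancy at age 5  ~ {e5:.0f} more years (text says 65)")
    print(f"Share of population 50-70 ~ {share_50_70:.0%} (text says about 25%)")

check_stationary_population()
```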
Life expectancy is also used in plant and animal ecology and in life tables (also known as actuarial tables). The term life expectancy may also be used in the context of manufactured objects, although the related term shelf life is used for consumer products and the terms "mean time to breakdown" (MTTB) and "mean time between failures" (MTBF) are used in engineering.

Human beings at birth are expected to live on average 49.42 years in Swaziland and 82.6 years in Japan, although Japan's recorded life expectancy may have been very slightly increased by counting many infant deaths as stillborn. An analysis published in 2011 in The Lancet attributes Japanese life expectancy to equal opportunities and public health as well as diet. The oldest confirmed recorded age for any human is 122 years (see Jeanne Calment). This is referred to as the "maximum life span", which is the upper boundary of life, the maximum number of years any human is known to have lived.

Variation over time

The following information is derived from Encyclopædia Britannica, 1961, and other sources, some with questionable accuracy. Unless otherwise stated, it represents estimates of the life expectancies of the world population as a whole. In many instances, life expectancy varied considerably according to class and gender. Life expectancy at birth takes account of infant mortality, but not pre-natal mortality.

|Era||Life expectancy at birth||Life expectancy at older age|
|Paleolithic||33||Based on the data from recent hunter-gatherer populations, it is estimated that at age 15, life expectancy was an additional 39 years (total age 54).|
|Neolithic||20–33||
|Bronze Age and Iron Age||26||
|Classical Rome||20–30||If a child survived to age 10, life expectancy was an additional 37.5 years, a total of 47.5 years.|
|Pre-Columbian North America||25–30||
|Medieval Islamic Caliphate||35+||Average lifespan of scholars was 59–84.3 years in the Middle East and 69–75 in Islamic Spain.|
|Late medieval English peerage||30||At age 21, life expectancy was an additional 43 years (total age 64).|
|Early Modern England||33–40||
|1900 world average||31||
|1950 world average||48||
|2010 world average||67.2||

Life expectancy increases with age as the individual survives the higher mortality rates associated with childhood. For instance, the table above listed the life expectancy at birth among 13th-century English nobles at 30. Having survived until the age of 21, a male member of the English aristocracy in this period could expect to live:
- 1200–1300: to age 64
- 1300–1400: to age 45 (because of the bubonic plague)
- 1400–1500: to age 69
- 1500–1550: to age 71

In general, the available data indicate that longer lifespans became more common recently in human evolution. This increased longevity is attributed by some writers to cultural adaptations rather than genetic evolution, although some research indicates that during the Neolithic Revolution natural selection favored increased longevity. Nevertheless, all researchers acknowledge the effect of cultural adaptations upon life expectancy. 17th-century English life expectancy was only about 35 years, largely because infant and child mortality remained high.
Life expectancy was under 25 years in the early Colony of Virginia, and in seventeenth-century New England about 40 per cent died before reaching adulthood. During the Industrial Revolution, the life expectancy of children increased dramatically. The under-5 mortality rate in London decreased from 745 per 1,000 in 1730–1749 to 318 per 1,000 in 1810–1829. Public health measures are credited with much of the recent increase in life expectancy. During the 20th century, despite a brief drop due to the 1918 flu pandemic, the average lifespan in the United States increased by more than 30 years, of which 25 years can be attributed to advances in public health.

There are great variations in life expectancy between different parts of the world, mostly caused by differences in public health, medical care, and diet. The impact of AIDS on life expectancy is particularly notable in many African countries. According to projections made by the United Nations (UN) in 2002, the life expectancy at birth for 2010–2015 (if HIV/AIDS did not exist) would have been:
- 70.7 years instead of 31.6 in Botswana
- 69.9 years instead of 41.5 in South Africa
- 70.5 years instead of 31.8 in Zimbabwe

The UN's predictions were too pessimistic. Actual life expectancy in Botswana declined from 65 in 1990 to 49 in 2000 before increasing to 66 in 2011. In South Africa, life expectancy was 63 in 1990, 57 in 2000, and 58 in 2011. And in Zimbabwe, life expectancy was 60 in 1990, 43 in 2000, and 54 in 2011.

In the United States, African-American people have shorter life expectancies than their European-American counterparts. For example, white Americans born in 2010 are expected to live until age 78.9, but black Americans only until age 75.1. This 3.8-year gap, however, is the lowest it has been since at least 1975. The greatest difference was 7.1 years in 1993. In contrast, Asian-American women live the longest of all ethnic groups in the United States, with a life expectancy of 85.8 years. The life expectancy of Hispanic Americans is 81.2 years.

Cities also experience a wide range of life expectancy based on neighborhood breakdowns. This is largely due to economic clustering and poverty conditions that tend to associate based on geographic location. Multi-generational poverty found in struggling neighborhoods also contributes. In United States cities such as Cincinnati, the life expectancy gap between low-income and high-income neighborhoods reaches 20 years.

Economic circumstances also affect life expectancy. For example, in the United Kingdom, life expectancy in the wealthiest areas is several years longer than in the poorest areas. This may reflect factors such as diet and lifestyle, as well as access to medical care. It may also reflect a selective effect: people with chronic life-threatening illnesses are less likely to become wealthy or to reside in affluent areas. In Glasgow, the disparity is amongst the highest in the world: life expectancy for males in the heavily deprived Calton area stands at 54, which is 28 years less than in the affluent area of Lenzie, only 8 km away. A 2013 study found a pronounced relationship between economic inequality and life expectancy. However, a study by José A. Tapia Granados and Ana Diez Roux at the University of Michigan found that life expectancy actually increased during the Great Depression, and during recessions and depressions in general.
The authors suggest that when people are working extra hard during good economic times, they undergo more stress, exposure to pollution, and likelihood of injury, among other longevity-limiting factors. Life expectancy is also likely to be affected by exposure to high levels of highway air pollution or industrial air pollution. This is one way that occupation can have a major effect on life expectancy. Coal miners (and in prior generations, asbestos cutters) often have shorter than average life expectancies. Other factors affecting an individual's life expectancy are genetic disorders, drug use, tobacco smoking, excessive alcohol consumption, obesity, access to health care, diet and exercise.

Women tend to have a lower mortality rate at every age. In the womb, male fetuses have a higher mortality rate (babies are conceived in a ratio estimated to be from 107 to 170 males to 100 females, but the ratio at birth in the United States is only 105 males to 100 females). Among the smallest premature babies (those under 2 pounds or 900 g), females again have a higher survival rate. At the other extreme, about 90% of individuals aged 110 are female. The difference in life expectancy between men and women in the United States dropped from 7.8 years in 1979 to 5.3 years in 2005, with women expected to live to age 80.1 in 2005. Also, data from the UK shows the gap in life expectancy between men and women decreasing in later life. This may be attributable to the effects of infant mortality and young adult death rates.

In the past, mortality rates for females in child-bearing age groups were higher than for males at the same age. This is no longer the case, and female human life expectancy is considerably higher than that of males. The reasons for this are not entirely certain. Traditional arguments tend to favor socio-environmental factors: historically, men have generally consumed more tobacco, alcohol and drugs than women in most societies, and are more likely to die from many associated diseases such as lung cancer, tuberculosis and cirrhosis of the liver. Men are also more likely to die from injuries, whether unintentional (such as occupational, war, or car accidents) or intentional (suicide). Men are also more likely to die from most of the leading causes of death (some already stated above) than women. Some of these in the United States include: cancer of the respiratory system, motor vehicle accidents, suicide, cirrhosis of the liver, emphysema, prostate cancer, and coronary heart disease. These far outweigh the female mortality rate from breast cancer and cervical cancer.

Some argue that shorter male life expectancy is merely another manifestation of the general rule, seen in all mammal species, that larger individuals within a species tend, on average, to have shorter lives. This biological difference occurs because women have more resistance to infections and degenerative diseases. In her extensive review of the existing literature, Kalben concluded that the fact that women live longer than men was observed at least as far back as 1750 and that, with relatively equal treatment, today males in all parts of the world experience greater mortality than females. Of 72 selected causes of death, only 6 yielded greater female than male age-adjusted death rates in 1998 in the United States. With the exception of birds, for almost all of the animal species studied, males have higher mortality than females.
Evidence suggests that the sex mortality differential in people is due to both biological/genetic and environmental/behavioral risk and protective factors. There is a recent suggestion that mitochondrial mutations that shorten lifespan continue to be expressed in males (but less so in females) because mitochondria are inherited only through the mother. By contrast, natural selection weeds out mitochondria that reduce female survival; therefore such mitochondria are less likely to be passed on to the next generation. This suggests one reason why females tend to live longer than males, although the authors claim it is only a partial explanation. In developed countries, starting around 1880, death rates decreased faster among women, leading to differences in mortality rates between males and females. Before 1880, death rates were the same. In people born after 1900, the death rate of 50- to 70-year-old men was double that of women of the same age. Cardiovascular disease was the main cause of the higher death rates among men. Men may be more vulnerable to cardiovascular disease than women, but this susceptibility was evident only after deaths from other causes, such as infections, started to decline.

In developed countries, the number of centenarians is increasing at approximately 5.5% per year, which means doubling the centenarian population every 13 years, pushing it from some 455,000 in 2009 to 4.1 million in 2050. Japan is the country with the highest ratio of centenarians (347 for every 1 million inhabitants in September 2010). Shimane prefecture had an estimated 743 centenarians per million inhabitants. In the United States, the number of centenarians grew from 32,194 in 1980 to 71,944 in November 2010 (232 centenarians per million inhabitants).

Evolution and aging rate

Various species of plants and animals, including humans, have different lifespans. Evolutionary theory states that organisms that, by virtue of their defenses or lifestyle, live for long periods whilst avoiding accidents, disease, predation, etc., are likely to have genes that code for slow aging, which often translates to good cellular repair. This is theorized to be true because if predation or accidental deaths prevent most individuals from living to an old age, then there will be less natural selection to increase intrinsic life span. The finding was supported in a classic study of opossums by Austad; however, the opposite relationship was found in an equally prominent study of guppies by Reznick.

One prominent and very popular theory states that lifespan can be lengthened by a tight budget for food energy called caloric restriction. Caloric restriction observed in many animals (most notably mice and rats) shows a near doubling of life span from a very limited caloric intake. Support for this theory has been bolstered by several new studies linking lower basal metabolic rate to increased life expectancy. This may help explain why animals like giant tortoises can live so long. Studies of humans with 100+ year life spans have shown a link to decreased thyroid activity, resulting in their lowered metabolic rate. In a broad survey of zoo animals, no relationship was found between the fertility of the animal and its life span.

Calculation

The starting point for calculating life expectancy is the age-specific death rates of the population members.
If a large amount of data is available, the age-specific death rates can be simply taken as the mortality rates actually experienced at each age (i.e. the number of deaths divided by the number of years "exposed to risk" in each data cell). However, it is customary to apply smoothing to iron out, as far as possible, the random statistical fluctuations from one year of age to the next. In the past, a very simple model used for this purpose was the Gompertz function, although these days more sophisticated methods are used. The most common methods used for this purpose nowadays are:
- to fit a mathematical formula, such as an extension of the Gompertz function, to the data;
- for relatively small amounts of data, to look at an established mortality table previously derived for a larger population and make a simple adjustment to it (e.g. multiply by a constant factor) to fit the data;
- with a large amount of data, to look at the mortality rates actually experienced at each age and apply smoothing (e.g. by cubic splines).

While the data required are easily identified in the case of humans, the computation of life expectancy of industrial products and wild animals involves more indirect techniques. The life expectancy and demography of wild animals are often estimated by capturing, marking and recapturing them. The life of a product, more often termed shelf life, is also computed using similar methods. In the case of long-lived components, such as those used in critical applications, e.g. in aircraft, methods like accelerated aging are used to model the life expectancy of a component.

The age-specific death rates are calculated separately for separate groups of data that are believed to have different mortality rates (e.g. males and females, and perhaps smokers and non-smokers if data is available separately for those groups) and are then used to calculate a life table, from which one can calculate the probability of surviving to each age. In actuarial notation, the probability of surviving from age $x$ to age $x+n$ is denoted ${}_np_x$ and the probability of dying during age $x$ (i.e. between ages $x$ and $x+1$) is denoted $q_x$. For example, if 10% of a group of people alive at their 90th birthday die before their 91st birthday, then the age-specific death probability at age 90 would be 10%. Note that this is a probability rather than a mortality rate.

The expected future lifetime of a life aged $x$ in whole years (the curtate expected lifetime of $(x)$) is denoted by the symbol $e_x$.[a] It is the conditional expected future lifetime (in whole years), assuming survival to age $x$. If $K(x)$ denotes the curtate future lifetime at $x$, then

$$e_x = \mathrm{E}[K(x)] = \sum_{k=1}^{\infty} k \, \Pr[K(x) = k] = \sum_{k=1}^{\infty} k \; {}_{k}p_x \, q_{x+k}.$$

Substituting in the sum and simplifying, we get the equivalent formula:

$$e_x = \sum_{k=1}^{\infty} {}_{k}p_x.$$

If we make the assumption that, on average, people live a half year in the year of death, then the complete expectation of future lifetime at age $x$ is $e_x + \tfrac{1}{2}$.

Life expectancy is by definition an arithmetic mean. It can also be calculated by integrating the survival curve from ages 0 to positive infinity (or equivalently to the maximum lifespan, sometimes called 'omega'). For an extinct or completed cohort (all people born in year 1850, for example), of course, it can simply be calculated by averaging the ages at death. For cohorts with some survivors, it is estimated by using mortality experience in recent years. These estimates are called period cohort life expectancies.
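As a concrete illustration of the curtate formula above, the following Python sketch computes $e_x$ from a set of age-specific death probabilities by summing the survival probabilities ${}_kp_x$. The death probabilities used here are invented for demonstration and are not taken from any real life table.

```python
# A minimal sketch, with made-up mortality rates, of the curtate life expectancy
# formula e_x = sum over k >= 1 of kp_x, where kp_x is the probability of
# surviving k further whole years from age x.
def curtate_life_expectancy(q, x):
    """q[i] is the probability of dying between ages i and i+1 (q_i); returns e_x."""
    e, survival = 0.0, 1.0
    for age in range(x, len(q)):
        survival *= 1.0 - q[age]   # survival now equals (age - x + 1)p_x
        e += survival              # add kp_x for k = 1, 2, ...
    return e

# Illustrative (not real) death probabilities for ages 0..109, rising with age,
# with death assumed certain by age 110.
q = [min(1.0, 0.002 + 0.0001 * (1.09 ** age)) for age in range(110)]

print(f"e_0  = {curtate_life_expectancy(q, 0):.1f} whole years")
print(f"e_65 = {curtate_life_expectancy(q, 65):.1f} whole years")
# The complete expectation of life is then approximately e_x + 0.5, as noted above.
```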
It is important to note that this statistic is usually based on past mortality experience, and assumes that the same age-specific mortality rates will continue into the future. Thus such life expectancy figures need to be adjusted for temporal trends before calculating how long a currently living individual of a particular age is expected to live. Period life expectancy remains a commonly used statistic to summarize the current health status of a population. However, for some purposes, such as pensions calculations, it is usual to adjust the life table used, thus assuming that age-specific death rates will continue to decrease over the years, as they have usually done in the past. This is often done by simply extrapolating past trends; however, some models do exist to account for the evolution of mortality (e.g. the Lee–Carter model). As discussed above, on an individual basis, there are a number of factors that have been shown to correlate with a longer life. Factors that are associated with variations in life expectancy include family history, marital status, economic status, physique, exercise, diet, drug use including smoking and alcohol consumption, disposition, education, environment, sleep, climate, and health care. Healthy life expectancy In order to assess the quality of these additional years of life, 'healthy life expectancies' have been calculated for the last 30 years. Since 2001, the World Health Organization has published statistics called Healthy life expectancy (HALE), defined as the average number of years that a person can expect to live in "full health", excluding the years lived in less than full health due to disease and/or injury. Since 2004, Eurostat publishes annual statistics called Healthy Life Years (HLY) based on reported activity limitations. The United States of America uses similar indicators in the framework of their nationwide health promotion and disease prevention plan "Healthy People 2010". An increasing number of countries are using health expectancy indicators to monitor the health of their population. Forecasting life expectancy and mortality forms an important subdivision of demography. Future trends in life expectancy have huge implications for old-age support programs like U.S. Social Security and pension systems, because the cash flow in these systems depends on the number of recipients still living (along with the rate of return on the investments or the tax rate in PAYGO systems). With longer life expectancies, these systems see increased cash outflow; if these systems underestimate increases in life-expectancies, they won't be prepared for the large payments that will inevitably occur as humans live longer and longer. Life expectancy forecasting usually is based on two different approaches: - Forecasting the life expectancy directly, generally using ARIMA or other time series extrapolation procedures: This approach has the advantage of simplicity, but it cannot account for changes in mortality at specific ages, and the forecasted number cannot be used to derive other life table results. Analyses and forecasts using this approach can be done with any common statistical/ mathematical software package, like EViews, R, SAS, Stata, Matlab, or SPSS. 
- Forecasting age-specific death rates and computing the life expectancy from the results with life table methods: This approach is usually more complex than simply forecasting life expectancy because the analyst must deal with correlated age-specific mortality rates, but it seems to be more robust than simple one-dimensional time series approaches. This approach also yields a set of age-specific rates that may be used to derive other measures, like survival curves or life expectancies at different ages. The most important approach within this group is the Lee-Carter model, which uses the singular value decomposition on a set of transformed age-specific mortality rates to reduce their dimensionality to a single time series, forecasts that time series, and then recovers a full set of age-specific mortality rates from that forecasted value. Software for this approach includes Professor Rob J. Hyndman's R package called `demography` and UC Berkeley's LCFIT system.

Policy uses

Life expectancy is also used in describing the physical quality of life of an area or, for an individual, when determining the value of a life settlement, a life insurance policy sold for a cash asset. Disparities in life expectancy are often cited as demonstrating the need for better medical care or increased social support. A strongly associated indirect measure is income inequality. For the top 21 industrialised countries, counting each person equally, life expectancy is lower in more unequal countries (r = -.907). There is a similar relationship among states in the US (r = -.620).

Life expectancy vs. life span

Life expectancy differs from maximum life span. Life expectancy is an average, computed over all people including those who die shortly after birth, those who die in early adulthood in childbirth or in wars, and those who live unimpeded until old age, whereas lifespan is an individual-specific concept and maximum lifespan is an upper bound rather than an average. It can be argued that it is better to compare life expectancies of the period after childhood to get a better handle on life span. Life expectancy can change dramatically after childhood, as is demonstrated by the Roman Life Expectancy table, where at birth the life expectancy was 21 but by the age of 5 it jumped to 42. Studies like Plymouth Plantation: "Dead at Forty" and Life Expectancy by Age, 1850–2004 similarly show a dramatic increase in life expectancy once adulthood was reached.

See also
- Calorie restriction
- DNA damage theory of aging
- Glasgow effect
- Healthcare inequality
- Indefinite lifespan
- Life table
- List of countries by life expectancy
- List of long-living organisms
- Maximum life span
- Medieval demography
- Mortality rate
- Population Pyramid
- Lindy Effect
- Increasing life expectancy

Notes
a. In standard actuarial notation, e_x refers to the expected future lifetime of (x) in whole years, while e_x with a circle above the e denotes the complete expected future lifetime of (x), including the fraction.

References
- S. Shryok, J. S. Siegel et al. The Methods and Materials of Demography. Washington, DC, US Bureau of the Census, 1973.
- Laden, Greg (May 1, 2011). "Falsehood: "If this was the Stone Age, I'd be dead by now"". ScienceBlogs. Retrieved August 31, 2014.
- Arthur Sullivan; Steven M. Sheffrin (2012). Economics: Principles in action. Pearson Prentice Hall. p. 473. ISBN 0-13-063085-3.
- John S. Millar and Richard M. Zammuto (1983). "Life Histories of Mammals: An Analysis of Life Tables". Ecology. Ecological Society of America. 64 (4): 631–635. doi:10.2307/1937181. JSTOR 1937181.
- Eliahu Zahavi, Vladimir Torbilo & Solomon Press (1996). Fatigue Design: Life Expectancy of Machine Parts. CRC Press. ISBN 0-8493-8970-4.
- Ansley J. Coale; Judith Banister (December 1996). "Five decades of missing females in China". Proceedings of the American Philosophical Society. 140 (4): 421–450. doi:10.2307/2061752. JSTOR 987286. PMID 7828766.
- Boseley, Sarah (August 30, 2011). "Japan's life expectancy 'down to equality and public health measures'". The Guardian. London. Retrieved August 31, 2011. "Japan has the highest life expectancy in the world but the reasons, says an analysis, are as much to do with equality and public health measures as diet. [...] According to a paper in a Lancet series on healthcare in Japan [...]"
- Ikeda, Nayu (August 2011). "What has made the population of Japan healthy?". The Lancet. 378 (9796): 1094–105. doi:10.1016/S0140-6736(11)61055-6. PMID 21885105. "Reduction in health inequalities with improved average population health was partly attributable to equal educational opportunities and financial access to care."
- Santrock, John (2007). Life Expectancy. A Topical Approach to: Life-Span Development (pp. 128–132). New York, New York: The McGraw-Hill Companies, Inc.
- Hillard Kaplan, Kim Hill, Jane Lancaster, and A. Magdalena Hurtado (2000). "A Theory of Human Life History Evolution: Diet, Intelligence and Longevity" (PDF). Evolutionary Anthropology. 9 (4): 156–185. doi:10.1002/1520-6505(2000)9:4<156::AID-EVAN5>3.0.CO;2-7. Retrieved September 12, 2010.
- Galor, Oded & Moav, Omer (2007). "The Neolithic Revolution and Contemporary Variations in Life Expectancy" (PDF). Brown University Working Paper. Retrieved September 12, 2010.
- Angel, Lawrence J. (1984). "Health as a crucial factor in the changes from hunting to developed farming in the eastern Mediterranean". Proceedings of meeting on Paleopathology at the Origins of Agriculture: 51–73.
- Galor, Oded & Moav, Omer (2005). "Natural Selection and the Evolution of Life Expectancy" (PDF). Brown University Working Paper. Retrieved November 4, 2010.
- "Mortality". Britannica.com. Retrieved November 4, 2010.
- Frier, Bruce W. (2001). "More is worse: some observations on the population of the Roman empire". In Scheidel, Walter (ed.). Debating Roman Demography. Leiden: Brill. pp. 144–145. ISBN 9789004115255.
- Cokayne, Karen (January 11, 2013). Experiencing Old Age in Ancient Rome. Routledge. p. 3. ISBN 9781136000065.
- "Pre-European Exploration, Prehistory through 1540". Encyclopediaofarkansas.net. October 5, 2010. Retrieved November 4, 2010.
- Conrad, Lawrence I. (2006). The Western Medical Tradition. Cambridge University Press. p. 137. ISBN 0-521-47564-3.
- Ahmad, Ahmad Atif (2007). "Authority, Conflict, and the Transmission of Diversity in Medieval Islamic Law by R. Kevin Jaques". Journal of Islamic Studies. 18 (2): 246–248. doi:10.1093/jis/etm005.
- Bulliet, Richard W. (1983). "The Age Structure of Medieval Islamic Education". Studia Islamica. 57: 105–117. doi:10.2307/1595484.
- Shatzmiller, Maya (1994). Labour in the Medieval Islamic World. Brill Publishers. p. 66. ISBN 9004098968.
- "Time traveller's guide to Medieval Britain". Channel4.com. Retrieved November 4, 2010.
- "A millennium of health improvement". BBC News. December 27, 1998. Retrieved November 4, 2010.
- "Expectations of Life" by H.O. Lancaster (page 8).
- "PowerPoint Presentation" (PDF). Retrieved November 4, 2010.
- CIA—The World Factbook—Rank Order—Life expectancy at birth.
- Caspari, Rachel & Lee, Sang-Hee (July 27, 2004). "Older age becomes common late in human evolution". Proceedings of the National Academy of Sciences. 101 (20): 10895–10900. doi:10.1073/pnas.0402857101. PMC 503716. PMID 15252198. Retrieved September 12, 2010.
- Steve Jones, Robert Martin & David Pilbeam, eds. (1994). The Cambridge Encyclopedia of Human Evolution. Cambridge: Cambridge University Press. p. 242. ISBN 0-521-32370-3. Also ISBN 0-521-46786-1 (paperback).
- Caspari, R. & Lee, S.-H. (2006). "Is Human Longevity a Consequence of Cultural Change or Modern Biology?" (PDF). American Journal of Physical Anthropology. 129 (4): 512–517. doi:10.1002/ajpa.20360. PMID 16342259. Retrieved September 12, 2010.
- "Medicine & Health", Stratfordhall.org.
- "Death in Early America". Digital History.
- "Modernization - Population Change". Encyclopædia Britannica.
- Mabel C. Buer, Health, Wealth and Population in the Early Days of the Industrial Revolution, London: George Routledge & Sons, 1926, page 30. ISBN 0-415-38218-1.
- BBC—History—The Foundling Hospital. Published: May 1, 2001.
- CDC (1999). "Ten great public health achievements—United States, 1900–1999". MMWR Morb Mortal Wkly Rep. 48 (12): 241–3. PMID 10220250. Reprinted in: "From the Centers for Disease Control and Prevention. Ten great public health achievements—United States, 1900–1999". JAMA. 281 (16): 1481. 1999. doi:10.1001/jama.281.16.1481. PMID 10227303.
doi:10.1001/jama.281.16.1481. PMID 10227303.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles> - The World Bank - Life expectancy at birth, total (years) - "World Population Prospects — The 2002 Revision", 2003, page 24 - Life expectancy by country, Global Health Observatory Data Repository, World Health Organization - "Wealth & Health of Nations". Gapminder. Retrieved June 26, 2015.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles> - "Life Expectancy | Visual Data". BestLifeRates.org. Retrieved June 26, 2015.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles> - "Deaths: Final Data for 2010", National Vital Statistics Reports, authored by Sherry L. Murphy, Jiaquan Xu, and Kenneth D. Kochanek, volume 61, number 4, page 12, 8 May 2013 - United States Department of Health and Human Services, Office of Minority Health - Asian American/Pacific Islander Profile. Retrieved October 1, 2013 - "The Root Causes of Poverty". Waterfields. Retrieved March 4, 2015.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles> - Department of Health -Tackling health inequalities: Status report on the Programme for Action - "Social factors key to ill health". BBC News. August 28, 2008. Retrieved August 28, 2008.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles> - "GP explains life expectancy gap". BBC News. August 28, 2008. Retrieved August 28, 2008.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles> - Fletcher, Michael A. (March 10, 2013). "Research ties economic inequality to gap in life expectancy". Washington Post. Retrieved March 23, 2013.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles> - "Did The Great Depression Have A Silver Lining? Life Expectancy Increased By 6.2 Years". September 29, 2009. Retrieved April 3, 2011.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles> - Kalben, Barbara Blatt. "Why Men Die Younger: Causes of Mortality Differences by Sex". Society of Actuaries", 2002, p. 17.http://www.soa.org/library/monographs/life/why-men-die-younger-causes-of-mortality-differences-by-sex/2001/january/m-li01-1-05.pdf - Hitti, Miranda (February 28, 2005). "U.S. Life Expectancy Best Ever, Says CDC". eMedicine. WebMD. Retrieved January 18, 2011.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles> - "Life expectancy - care quality indicators". QualityWatch. Nuffield Trust & Health Foundation. Retrieved April 16, 2015.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles> - World Health Organization (2004). "Annex Table 2: Deaths by cause, sex and mortality stratum in WHO regions, estimates for 2002" (PDF). The world health report 2004 - changing history. Retrieved November 1, 2008.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles> - "Telemores, sexual size dimorphism and gender gap in life expectancy". Jerrymondo.tripod.com. Retrieved November 4, 2010.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles> - Samaras, Thomas T. und Heigh, Gregory H.: How human size affects longevity and mortality from degenerative diseases. Townsend Letter for Doctors & Patients 159: 78-85, 133-139 - Kalben, Barbara Blatt. "Why Men Die Younger: Causes of Mortality Differences by Sex". Society of Actuaries", 2002.http://www.soa.org/news-and-publications/publications/other-publications/monographs/m-li01-1-toc.aspx - Fruit flies offer DNA clue to why women live longer - Evolutionary biologist, PZ Myers agrees. 
Mother’s Curse - "When Did Women Start to Outlive Men?". Retrieved July 8, 2015.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles> - United Nations "World Population Ageing 2009"; ST/ESA/SER.A/295, Population Division, Department of Economic and Social Affairs, United Nations, New York, Oct. 2010, liv + 73 pp. - Japan Times "Centenarians to Hit Record 44,000". The Japan Times, September 15, 2010. Okinawa 667 centenarians per 1 million inhabitants in September 2010, had been for a long time the Japanese prefecture with the largest ratio of centenarians, partly because it also had the largest loss of young and middle-aged population during the Pacific War. - "Resident Population. National Population Estimates for the 2000s. Monthly Postcensal Resident Population, by single year of age, sex, race, and Hispanic Origin", Bureau of the Census (updated monthly). Different figures, based on earlier assumptions (104,754 centenarians on Nov.1, 2009) are provided in "Older Americans Month: May 2010", Bureau of the Census, Facts for Features, March 2, 2010, 5 pp. - "Mortality rate three times as high among mental health service users than in general population" Health and Social Care Gov. UK. 2013 - "Morbidity and Mortality in People With Serious Mental Illness" (PDF). National Association of State Mental Health Program Directors. 2006.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles> - "Life expectancy of patients with mental disorders" May 18, 2011. British Journal of Psychiatry. Lead author: Dr Kristian Wahlbeck - "Mortality in Schizophrenia and Other Psychoses" September 27, 2014. Schizophrenia Bulletin. Lead author: Dr Ulrich Reininghaus - "Life expectancy and cardiovascular mortality in persons with schizophrenia."...antipsychotic drugs may have adverse effects" 2012 - Williams G (1957). "Pleiotropy, natural selection, and the evolution of senescence". Evolution. Society for the Study of Evolution. 11 (4): 398–411. doi:10.2307/2406060. JSTOR 2406060.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles> - Austad SN (1993). "Retarded senescence in an insular population of Virginia opossums". J. Zool. London. 229 (4): 695–708. doi:10.1111/j.1469-7998.1993.tb02665.x.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles> - Reznick DN, Bryant MJ, Roff D, Ghalambor CK, Ghalambor DE (2004). "Effect of extrinsic mortality on the evolution of senescence in guppies". Nature. 431 (7012): 1095–1099. doi:10.1038/nature02936. PMID 15510147.CS1 maint: multiple names: authors list (link)<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles> - Mitteldorf J, Pepper J (2007). "How can evolutionary theory accommodate recent empirical results on organismal senescence?". Theory in Biosciences. 126 (1): 3–8. doi:10.1007/s12064-007-0001-0. PMID 18087751.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles> - Kirkwood TE (1977). "Evolution of aging". Nature. 270 (5635): 301–304. doi:10.1038/270301a0. PMID 593350.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles> - Ricklefs RE, Cadena CD (2007). "Lifespan is unrelated to investment in reproduction in populations of mammals and birds in captivity". Ecol. Lett. 10 (10): 867–872. doi:10.1111/j.1461-0248.2007.01085.x. PMID 17845285.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles> - Anderson, Robert N. (1999) Method for constructing complete annual U.S. life tables. Vital and health statistics. 
Series 2, Data evaluation and methods research ; no. 129 (DHHS publication ; no. (PHS) 99-1329) PDF - Linda J Young; Jerry H Young (1998) Statistical ecology : a population perspective. Kluwer Academic Publishers. p. 310 - R. Cunningham, T. Herzog, and R. London (2008). Models for Quantifying Risk (Third ed.). Actex. ISBN 978-1-56698-676-2.CS1 maint: multiple names: authors list (link)<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles> page 92. - Ronald D. Lee and Lawrence Carter. 1992. "Modeling and Forecasting the Time Series of U.S. Mortality," Journal of the American Statistical Association 87 (September): 659-671. - "International Human Development Indicators — UNDP". Hdrstats.undp.org. Retrieved November 4, 2010.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles> - Has the relation between income inequality and life expectancy disappeared? Evidence from Italy and top industrialised countries J Epidemiol Community Health 2005;59:158-162. - Inequality in income and mortality in the United States: analysis of mortality and potential pathways BMJ 1996;312:999. - Wanjek, Christopher (2002). Bad Medicine: Misconceptions and Misuses Revealed, from Distance Healing to Vitamin O. Wiley. pp. 70–71. ISBN 0-471-43499-X<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles> - Wanjek, Christopher (2002). Bad Medicine: Misconceptions and Misuses Revealed, from Distance Healing to Vitamin O. Wiley. p. 71. ISBN 0-471-43499-X.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles> - Leonid A. Gavrilov & Natalia S. Gavrilova (1991), The Biology of Life Span: A Quantitative Approach. New York: Harwood Academic Publisher, ISBN 3-7186-4983-7 - Kochanek, Kenneth D., Elizabeth Arias, and Robert N. Anderson (2013), How Did Cause of Death Contribute to Racial Differences in Life Expectancy in the United States in 2010?. Hyattsville, Md.: U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, National Center for Health Statistics. |Wikimedia Commons has media related to Life expectancy.| - Charts for all countries - Our World In Data – Life Expectancy – Visualizations of how life expectancy around the world has changed historically (by Max Roser). Includes life expectancy for different age groups. Charts for all countries, world maps, and links to more data sources. - Global Agewatch has the latest internationally comparable statistics on life expectancy from 195 countries. - Rank Order - Life expectancy at birth from the CIA's World Factbook. - CDC year-by-year life expectancy figures for USA from the USA Centers for Disease Controls and Prevention, National Center for Health Statistics. - Life expectancy in Roman times from the University of Texas. - Animal lifespans: Animal Lifespans from Tesarta Online (Internet Archive); The Life Span of Animals from Dr Bob's All Creatures Site.
ELECTRIC POWER PLANTS An electric power plant is an industrial facility that generates electric power. The heart of any electric power plant is a generator, and many plants have more than one. A generator is a rotating machine which converts the kinetic energy of a moving fluid into mechanical energy and then into electric power. The energy source varies: most power plants worldwide burn fossil fuels such as coal, oil, and natural gas, while cleaner sources include nuclear power and a growing share of renewables such as solar, wind, wave, geothermal, and hydroelectric. Electric power plants can be classified in three ways: by heat source, by prime mover, and by duty. Classified by heat source, electric power plants can be: - Fossil-fuel power plants use either a steam turbine generator or a combustion turbine. A coal-fired power station, for example, produces heat by burning coal in a steam boiler. - Nuclear power plants use the heat generated in a nuclear fission reactor's core to create steam which then drives a steam turbine and generator. - Geothermal power plants use steam extracted from hot underground rocks. - Biomass-fueled power plants burn waste from sugar cane, municipal solid waste, landfill methane, or other forms of biomass to produce electric energy. - Solar thermal power plants use sunlight to boil water and produce steam which is then used for electric power generation. Classified by prime mover, electric power plants can be: - Steam turbine power plants expand steam through the turbine to spin a generator. - Gas turbine power plants use the gas pressure from combustion to drive the turbine directly. - Combined cycle power plants use both a gas turbine and a steam turbine; the steam turbine raises steam from the hot exhaust gas of the gas turbine to produce additional electricity. - Small co-generation units (at sites such as manufacturing plants and hospitals) use internal combustion reciprocating engines to produce power, mostly as backup power in case of an outage. Classified by duty, electric power plants can be: - Base load power plants (such as large modern coal-fired and nuclear generating stations, or hydro plants) run nearly continuously to supply the component of system load that does not vary during a day or week. - Peaking power plants (such as open cycle gas turbines and reciprocating internal combustion engines) meet the daily peak load, which may last only one or two hours each day. - Load following power plants follow the variations in the daily and weekly load. Primary Process Control Improvements for Electric Power Plants Prior to any advanced process control (APC) project, including in electric power plants, base-level PID tuning and optimization is a critical prerequisite. Unless the base-level PID control loops are well tuned, APC cannot work well, since APC manipulates the set points of those base-level loops. The first step in the overall process control improvement procedure for electric power plants is therefore PID tuning and optimization of the primary, base-level PID controllers. The benefit of PID tuning and optimization in electric power plants is a reduction of oscillation amplitude, or an increase in controller action, by a factor of 2 or 3.
This enables smoother running of the electric power plant with increased stability in all control loops, avoiding unnecessary problems such as damage or premature wear and tear of equipment, irregular plant shutdowns, or off-spec product properties and grades. Many engineers worry about causing shutdowns and operating problems when tuning PID controllers on power plant equipment. Trial-and-error PID tuning methods can be ineffective and even catastrophic, since these processes are very fast and very unforgiving. PiControl Solutions LLC has extensive experience in PID tuning and optimization for PID controllers in electric power plants. We understand how to tackle typical PID control loop problems and have customized PID tuning and optimization software tools to help optimize all electric power plant controllers. Our closed-loop system identification technology makes it possible to tune and optimize base-level PID control loops quickly, efficiently and precisely, and all process and data analysis and PID tuning and optimization work can be performed remotely by PiControl Solutions LLC process control engineers. Advanced Process Control (APC) Improvements for Electric Power Plants PiControl Solutions LLC has extensive experience in advanced process control optimization for electric power plants. We understand the economic factors that drive the profit margin and have customized multivariable closed-loop system identification and APC design and optimization tools to help optimize and improve electric power plants. Because of the relatively small size of many electric power plants, it is more cost-effective to implement DCS-based APC rather than model predictive control (MPC) techniques. The DCS-based APC approach is fast and cost effective, and it runs entirely inside the existing DCS/PLC, avoiding the complications of OPC and other data communication links from an external computer to the DCS. We analyze the process and provide the right economic advanced control solution for each electric power plant. Our DCS-based APC methodology has proven particularly successful in electric power plants.
Our DCS-based APC (advanced process control) design delivers the following electric power plant benefits: - Improved, stable and more efficient performance of the furnace combustion process using one or more fuels - Adaptive control performance using various feeders for coal flow control - Improved process control of the furnace pressure system - Automated start-up and shutdown of the furnace combustion process - One- and three-element control system design for the boiler operating unit - Mathematically calculated three-element feedforward control logic for the steam production process - New or improved single or multiple boiler master pressure control - Improved stability and performance of single or multiple serially connected superheater units - Improved control of single or multiple desuperheater units - Improved process control of steam or gas turbines with single or multiple stages - Improved performance and design of turbine load or speed control logic - Mathematically calculated feedforward control logic for an improved turbine speed controller - Optimal design of Turbine Follow (TF) mode control logic - Optimal design of Boiler Follow (BF) mode control logic - Optimal design of Coordinated Control (CC) mode control logic - Improved stability and performance of the reheater unit - One- and three-element control system design for the deaerator operating unit - Improved stability and performance of high- and low-pressure vessel units After the DCS work on the APC schemes is complete and all APC parameters are calculated and optimized, PiControl Solutions LLC conducts a factory acceptance test (FAT) to make sure that the APC design is complete, correct and operable. After completion of the APC project, PiControl Solutions LLC conducts dedicated process control training for the electric power plant company. PiControl Solutions has designed, optimized and started up several well-proven electric power plant control systems. PiControl Solutions expects a minimum 2-4 % improvement in electric power plant operation, by increasing the efficiency of electric power production with the minimum possible energy, and a 30 % or greater reduction in oscillation amplitudes due to optimized and advanced control. The oscillation reduction enables smoother yet faster running of the electric power plant with increased stability in all control loops. PiControl Solutions is the only process control and automation company in the world that can perform any advanced process control (APC) project completely remotely. High speed, reliable internet connections are now widely available, and with inexpensive medium- to high-resolution web cameras, or simply with widely used remote meeting and screen sharing applications, it is possible to carry out the complete design, tuning and optimization, FAT and start-up of any advanced process control project. This low cost online/remote approach saves travel and accommodation costs and keeps human health and safety at a high level. For more information and details, please send us an email: info@PiControlSolutions.com or call Tel: (832) 495 6436.
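As a rough illustration of what a base-level PID loop does, and why its tuning matters before any APC layer starts writing its set points, here is a minimal sketch of a discrete PID controller acting on a simple first-order process model. The process gain, time constant, set point and tuning values are illustrative assumptions only; they do not represent PiControl Solutions software, methods, or any real plant.

```python
# Minimal sketch: a discrete PID controller regulating a first-order lag process.
# All gains, time constants and set points below are illustrative assumptions.

def simulate_pid(kp, ki, kd, setpoint=1.0, dt=0.1, steps=300):
    """Simulate a PID loop around an assumed process dy/dt = (K*u - y) / tau."""
    K, tau = 2.0, 5.0           # assumed process gain and time constant
    y = 0.0                     # process variable (normalized), starts at zero
    integral = 0.0              # accumulated error for the integral term
    prev_error = setpoint - y   # previous error for the derivative term
    history = []

    for _ in range(steps):
        error = setpoint - y
        integral += error * dt
        derivative = (error - prev_error) / dt
        u = kp * error + ki * integral + kd * derivative   # controller output
        prev_error = error

        # Advance the first-order process with a simple Euler integration step.
        y += dt * (K * u - y) / tau
        history.append(y)

    return history

if __name__ == "__main__":
    # Two hypothetical tunings: an aggressive one and a better-damped one.
    aggressive = simulate_pid(kp=1.0, ki=2.0, kd=0.0)
    damped = simulate_pid(kp=1.0, ki=0.2, kd=0.0)
    print("peak / final PV, aggressive tuning:", round(max(aggressive), 3), "/", round(aggressive[-1], 3))
    print("peak / final PV, damped tuning:    ", round(max(damped), 3), "/", round(damped[-1], 3))
```

Comparing the peak value of the process variable for the two hypothetical tunings gives a simple picture of the kind of overshoot and oscillation that loop retuning is meant to reduce.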
Calculate the velocity of an object. Enter the object's initial position, final position, and time elapsed to determine the velocity. Related calculators: - Acceleration Calculator - Force Calculator - Momentum Calculator - Instantaneous Velocity Calculator - Angular Velocity Calculator The formula for determining velocity looks like this: Velocity = (Position 2 – Position 1) / Time (s) where positions 1 and 2 are described in a coordinate system with x, y, and z axes. This change in position is the displacement. Velocity is the time derivative of displacement. This means that mathematically, and conceptually, it is the rate of change in position with respect to time. It is important to remember that it is with respect to time because, without that, velocity would mean nothing. Using the derivative to calculate velocity is most useful when the position is described by an equation. Acceleration is the derivative of velocity. In other words, acceleration is the rate of change of velocity, or the rate of change of the rate of change of position. Velocity, just like acceleration, appears in Newton's second law of motion, which is expressed by the formula for force, F = ma, where F is force, m is mass, and a is acceleration. Velocity is also a quantity involved in momentum. How to Calculate Velocity Let's take a look at an example of how to calculate velocity in a physics problem. Say we have a baseball. We throw it from point A to point B and time it as it moves. - First, measure the distance from point A to point B. It is important to measure only the horizontal distance, as that is the component of velocity we will be calculating. Recall that velocity is a vector quantity, so it has magnitude and direction. For this example, we will assume the distance is 25 m. - Next, write down the time it took to move from point A to point B. For this example, we measure 25 seconds. - Finally, enter the information into the equation mentioned above: Velocity = 25 m / 25 s = 1 m/s - Analyze the results and apply the same steps to additional problems. As mentioned before, velocity, as well as other vector quantities like acceleration and force, can be expressed as a function or equation. For example, position could be given as x = 25t^2 + 10, where t is the time in seconds. To calculate velocity from this function you take the first derivative with respect to time, which gives v = 50t. Velocity is often described as the rate of change in the position of an object or system. Velocity is a vector quantity, which means that it has a magnitude and a direction. For example, an object could be moving at 30 m/s in the x-direction and 0 m/s in the y-direction. Speed is the magnitude of the velocity, and the direction of motion gives its direction.
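For readers who prefer code, here is a small sketch of the average-velocity formula and the derivative example above; the function names and the step size used for the numerical derivative are our own illustrative choices, not part of any particular calculator.

```python
# Average velocity from a change in position and elapsed time,
# mirroring the baseball example above (25 m in 25 s -> 1 m/s).

def average_velocity(position_start, position_end, elapsed_time):
    """Return (position_end - position_start) / elapsed_time."""
    if elapsed_time <= 0:
        raise ValueError("elapsed time must be positive")
    return (position_end - position_start) / elapsed_time

def velocity_from_position(t, dt=1e-6):
    """Numerically differentiate the example position function x(t) = 25*t**2 + 10."""
    x = lambda s: 25 * s**2 + 10
    return (x(t + dt) - x(t - dt)) / (2 * dt)  # central difference approximates dx/dt

if __name__ == "__main__":
    print(average_velocity(0.0, 25.0, 25.0))   # 1.0 m/s, the baseball example
    print(velocity_from_position(2.0))         # ~100 m/s, matching v = 50t at t = 2 s
```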
The Hellenistic period covers the period of ancient Greek (Hellenic) history and Mediterranean history between the death of Alexander the Great in 323 BC and the emergence of the Roman Empire as signified by the Battle of Actium in 31 BC and the subsequent conquest of Ptolemaic Egypt the following year. At this time, Greek cultural influence and power was at its peak in Europe, Africa and Asia, experiencing prosperity and progress in the arts, exploration, literature, theatre, architecture, music, mathematics, philosophy, and science. It is often considered a period of transition, sometimes even of decadence or degeneration, compared to the enlightenment of the Greek Classical era. The Hellenistic period saw the rise of New Comedy, Alexandrian poetry, the Septuagint and the philosophies of Stoicism and Epicureanism. Greek science was advanced by the works of the mathematician Euclid and the polymath Archimedes. The religious sphere expanded to include new gods such as the Greco-Egyptian Serapis, eastern deities such as Attis and Cybele, and the Greek adoption of Buddhism. After Alexander the Great's invasion of the Persian Empire in 330 BC and its disintegration shortly after, the Hellenistic kingdoms were established throughout south-west Asia (Seleucid Empire, Kingdom of Pergamon), north-east Africa (Ptolemaic Kingdom) and South Asia (Greco-Bactrian Kingdom, Indo-Greek Kingdom). The Hellenistic period was characterized by a new wave of Greek colonization which established Greek cities and kingdoms in Asia and Africa. This resulted in the export of Greek culture and language to these new realms, spanning as far as modern-day India. Equally, however, these new kingdoms were influenced by the indigenous cultures, adopting local practices where beneficial, necessary, or convenient. Hellenistic culture thus represents a fusion of the Ancient Greek world with that of the Near East, Middle East, and Southwest Asia. This mixture gave rise to a common Attic-based Greek dialect, known as Koine Greek, which became the lingua franca throughout the Hellenistic world. Scholars and historians are divided as to what event signals the end of the Hellenistic era. The Hellenistic period may be seen to end either with the final conquest of the Greek heartlands by Rome in 146 BC following the Achaean War, with the final defeat of the Ptolemaic Kingdom at the Battle of Actium in 31 BC, or even the move by Roman emperor Constantine the Great of the capital of the Roman Empire to Constantinople in 330 AD. "Hellenistic" is distinguished from "Hellenic" in that the former encompasses the entire sphere of direct ancient Greek influence, while the latter refers to Greece itself. See also: Names of the Greeks. The word originated from the German term hellenistisch, from Ancient Greek Ἑλληνιστής (Hellēnistḗs, "one who uses the Greek language"), from Ἑλλάς (Hellás, "Greece"); as if "Hellenist" + "ic". "Hellenistic" is a modern word and a 19th-century concept; the idea of a Hellenistic period did not exist in Ancient Greece. Although words related in form or meaning, e.g. Hellenist (Ancient Greek: Ἑλληνιστής, Hellēnistēs), have been attested since ancient times, it was Johann Gustav Droysen in the mid-19th century, who in his classic work Geschichte des Hellenismus (History of Hellenism), coined the term Hellenistic to refer to and define the period when Greek culture spread in the non-Greek world after Alexander's conquest. Following Droysen, Hellenistic and related terms, e.g.
Hellenism, have been widely used in various contexts; a notable such use is in Culture and Anarchy by Matthew Arnold, where Hellenism is used in contrast with Hebraism. The major issue with the term Hellenistic lies in its convenience, as the spread of Greek culture was not the generalized phenomenon that the term implies. Some areas of the conquered world were more affected by Greek influences than others. The term Hellenistic also implies that the Greek populations were of majority in the areas in which they settled, but in many cases, the Greek settlers were actually the minority among the native populations. The Greek population and the native population did not always mix; the Greeks moved and brought their own culture, but interaction did not always occur. While a few fragments exist, there is no complete surviving historical work which dates to the hundred years following Alexander's death. The works of the major Hellenistic historians Hieronymus of Cardia (who worked under Alexander, Antigonus I and other successors), Duris of Samos and Phylarchus which were used by surviving sources are all lost. The earliest and most credible surviving source for the Hellenistic period is Polybius of Megalopolis (c. 200-118), a statesman of the Achaean League until 168 BC when he was forced to go to Rome as a hostage. His Histories eventually grew to a length of forty books, covering the years 220 to 167 BC. The most important source after Polybius is Diodorus Siculus who wrote his Bibliotheca historica between 60 and 30 BC and reproduced some important earlier sources such as Hieronymus, but his account of the Hellenistic period breaks off after the battle of Ipsus (301). Another important source, Plutarch's (c. 50—c. 120) Parallel Lives though more preoccupied with issues of personal character and morality, outlines the history of important Hellenistic figures. Appian of Alexandria (late 1st century AD-before 165) wrote a history of the Roman empire that includes information of some Hellenistic kingdoms. Other sources include Justin's (2nd century AD) epitome of Pompeius Trogus' Historiae Philipicae and a summary of Arrian's Events after Alexander, by Photios I of Constantinople. Lesser supplementary sources include Curtius Rufus, Pausanias, Pliny, and the Byzantine encyclopedia the Suda. In the field of philosophy, Diogenes Laertius' Lives and Opinions of Eminent Philosophers is the main source. See also: Philip II of Macedon, Alexander the Great and Wars of Alexander the Great. Ancient Greece had traditionally been a fractious collection of fiercely independent city-states. After the Peloponnesian War (431 - 404 BC), Greece had fallen under a Spartan hegemony, in which Sparta was pre-eminent but not all-powerful. Spartan hegemony was succeeded by a Theban one after the Battle of Leuctra (371 BC), but after the Battle of Mantinea (362 BC), all of Greece was so weakened that no one state could claim pre-eminence. It was against this backdrop that the ascendancy of Macedon began, under king Philip II. Macedon was located at the periphery of the Greek world, and although its royal family claimed Greek descent, the Macedonians themselves were looked down upon as semi-barbaric by the rest of the Greeks. However, Macedon had a relatively strong and centralised government, and compared to most Greek states, directly controlled a large area. Philip II was a strong and expansionist king and he took every opportunity to expand Macedonian territory. In 352 BC he annexed Thessaly and Magnesia. 
In 338 BC, Philip defeated a combined Theban and Athenian army at the Battle of Chaeronea after a decade of desultory conflict. In the aftermath, Philip formed the League of Corinth, effectively bringing the majority of Greece under his direct sway. He was elected Hegemon of the league, and a campaign against the Achaemenid Empire of Persia was planned. However, while this campaign was in its early stages, he was assassinated. Succeeding his father, Alexander took over the Persian war himself. During a decade of campaigning, Alexander conquered the whole Persian Empire, overthrowing the Persian king Darius III. The conquered lands included Asia Minor, Assyria, the Levant, Egypt, Mesopotamia, Media, Persia, and parts of modern-day Afghanistan, Pakistan, and the steppes of central Asia. The years of constant campaigning had taken their toll however, and Alexander died in 323 BC. After his death, the huge territories Alexander had conquered became subject to a strong Greek influence (Hellenization) for the next two or three centuries, until the rise of Rome in the west, and of Parthia in the east. As the Greek and Levantine cultures mingled, the development of a hybrid Hellenistic culture began, and persisted even when isolated from the main centres of Greek culture (for instance, in the Greco-Bactrian kingdom). It can be argued that some of the changes across the Macedonian Empire after Alexander's conquests and during the rule of the Diadochi would have occurred without the influence of Greek rule. As mentioned by Peter Green, numerous factors of conquest have been merged under the term Hellenistic Period. Specific areas conquered by Alexander's invading army, including Egypt and areas of Asia Minor and Mesopotamia "fell" willingly to conquest and viewed Alexander as more of a liberator than a victor. In addition, much of the area conquered would continue to be ruled by the Diadochi, Alexander's generals and successors. Initially the whole empire was divided among them; however, some territories were lost relatively quickly, or only remained nominally under Macedonian rule. After 200 years, only much reduced and rather degenerate states remained, until the conquest of Ptolemaic Egypt by Rome. When Alexander the Great died (June 10, 323 BC), he left behind a huge empire which was composed of many essentially autonomous territories called satrapies. Without a chosen successor there were immediate disputes among his generals as to who should be king of Macedon. These generals became known as the Diadochi (Ancient Greek: Διάδοχοι, Diadokhoi, meaning "Successors"). Meleager and the infantry supported the candidacy of Alexander's half-brother, Philip Arrhidaeus, while Perdiccas, the leading cavalry commander, supported waiting until the birth of Alexander's child by Roxana. After the infantry stormed the palace of Babylon, a compromise was arranged - Arrhidaeus (as Philip III) should become king, and should rule jointly with Roxana's child, assuming that it was a boy (as it was, becoming Alexander IV). Perdiccas himself would become regent (epimeletes) of the empire, and Meleager his lieutenant. Soon, however, Perdiccas had Meleager and the other infantry leaders murdered, and assumed full control. The generals who had supported Perdiccas were rewarded in the partition of Babylon by becoming satraps of the various parts of the empire, but Perdiccas' position was shaky, because, as Arrian writes, "everyone was suspicious of him, and he of them".
The first of the Diadochi wars broke out when Perdiccas planned to marry Alexander's sister Cleopatra and began to question Antigonus I Monophthalmus' leadership in Asia Minor. Antigonus fled for Greece, and then, together with Antipater and Craterus (the satrap of Cilicia who had been in Greece fighting the Lamian war) invaded Anatolia. The rebels were supported by Lysimachus, the satrap of Thrace and Ptolemy, the satrap of Egypt. Although Eumenes, satrap of Cappadocia, defeated the rebels in Asia Minor, Perdiccas himself was murdered by his own generals Peithon, Seleucus, and Antigenes (possibly with Ptolemy's aid) during his invasion of Egypt (c. 21 May to 19 June, 320 BC). Ptolemy came to terms with Perdiccas's murderers, making Peithon and Arrhidaeus regents in his place, but soon these came to a new agreement with Antipater at the Treaty of Triparadisus. Antipater was made regent of the Empire, and the two kings were moved to Macedon. Antigonus remained in charge of Asia Minor, Ptolemy retained Egypt, Lysimachus retained Thrace and Seleucus I controlled Babylon. The second Diadochi war began following the death of Antipater in 319 BC. Passing over his own son, Cassander, Antipater had declared Polyperchon his successor as Regent. Cassander rose in revolt against Polyperchon (who was joined by Eumenes) and was supported by Antigonus, Lysimachus and Ptolemy. In 317, Cassander invaded Macedonia, attaining control of Macedon, sentencing Olympias to death and capturing the boy king Alexander IV, and his mother. In Asia, Eumenes was betrayed by his own men after years of campaign and was given up to Antigonus who had him executed. The third war of the Diadochi broke out because of the growing power and ambition of Antigonus. He began removing and appointing satraps as if he were king and also raided the royal treasuries in Ecbatana, Persepolis and Susa, making off with 25,000 talents. Seleucus was forced to flee to Egypt and Antigonus was soon at war with Ptolemy, Lysimachus, and Cassander. He then invaded Phoenicia, laid siege to Tyre, stormed Gaza and began building a fleet. Ptolemy invaded Syria and defeated Antigonus' son, Demetrius Poliorcetes, in the Battle of Gaza of 312 BC which allowed Seleucus to secure control of Babylonia, and the eastern satrapies. In 310, Cassander had young King Alexander IV and his mother Roxane murdered, ending the Argead Dynasty which had ruled Macedon for several centuries. Antigonus then sent his son Demetrius to regain control of Greece. In 307 he took Athens, expelling Demetrius of Phaleron, Cassander's governor, and proclaiming the city free again. Demetrius now turned his attention to Ptolemy, defeating his fleet at the Battle of Salamis and taking control of Cyprus. In the aftermath of this victory, Antigonus took the title of king (basileus) and bestowed it on his son Demetrius Poliorcetes, the rest of the Diadochi soon followed suit. Demetrius continued his campaigns by laying siege to Rhodes and conquering most of Greece in 302, creating a league against Cassander's Macedon. The decisive engagement of the war came when Lysimachus invaded and overran much of western Anatolia, but was soon isolated by Antigonus and Demetrius near Ipsus in Phrygia. Seleucus arrived in time to save Lysimachus and utterly crushed Antigonus at the Battle of Ipsus in 301 BC. Seleucus' war elephants proved decisive, Antigonus was killed, and Demetrius fled back to Greece to attempt to preserve the remnants of his rule there by recapturing a rebellious Athens. 
Meanwhile, Lysimachus took over Ionia, Seleucus took Cilicia, and Ptolemy captured Cyprus. After Cassander's death in 298 BC, however, Demetrius, who still maintained a sizable loyal army and fleet, invaded Macedon, seized the Macedonian throne (294) and conquered Thessaly and most of central Greece (293-291). He was defeated in 288 BC when Lysimachus of Thrace and Pyrrhus of Epirus invaded Macedon on two fronts, and quickly carved up the kingdom for themselves. Demetrius fled to central Greece with his mercenaries and began to build support there and in the northern Peloponnese. He once again laid siege to Athens after they turned on him, but then struck a treaty with the Athenians and Ptolemy, which allowed him to cross over to Asia Minor and wage war on Lysimachus' holdings in Ionia, leaving his son Antigonus Gonatas in Greece. After initial successes, he was forced to surrender to Seleucus in 285 and later died in captivity. Lysimachus, who had seized Macedon and Thessaly for himself, was forced into war when Seleucus invaded his territories in Asia Minor and was defeated and killed in 281 BC at the Battle of Corupedium, near Sardis. Seleucus then attempted to conquer Lysimachus' European territories in Thrace and Macedon, but he was assassinated by Ptolemy Ceraunus ("the thunderbolt"), who had taken refuge at the Seleucid court and then had himself acclaimed as king of Macedon. Ptolemy was killed when Macedon was invaded by Gauls in 279—his head stuck on a spear—and the country fell into anarchy. Antigonus II Gonatas invaded Thrace in the summer of 277 and defeated a large force of 18,000 Gauls. He was quickly hailed as king of Macedon and went on to rule for 35 years. At this point the tripartite territorial division of the Hellenistic age was in place, with the main Hellenistic powers being Macedon under Demetrius's son Antigonus II Gonatas, the Ptolemaic kingdom under the aged Ptolemy I and the Seleucid empire under Seleucus' son Antiochus I Soter. See main article: Epirus (ancient state). Epirus was a northwestern Greek kingdom in the western Balkans ruled by the Molossian Aeacidae dynasty. Epirus was an ally of Macedon during the reigns of Philip II and Alexander. In 281 Pyrrhus (nicknamed "the eagle", aetos) invaded southern Italy to aid the city state of Tarentum. Pyrrhus defeated the Romans in the Battle of Heraclea and at the Battle of Asculum. Though victorious, he was forced to retreat due to heavy losses, hence the term "Pyrrhic victory". Pyrrhus then turned south and invaded Sicily but was unsuccessful and returned to Italy. After the Battle of Beneventum (275 BC) Pyrrhus lost all his Italian holdings and left for Epirus. Pyrrhus then went to war with Macedonia in 275, deposing Antigonus II Gonatas and briefly ruling over Macedonia and Thessaly until 272. Afterwards he invaded southern Greece, and was killed in battle against Argos in 272 BC. After the death of Pyrrhus, Epirus remained a minor power. In 233 BC the Aeacid royal family was deposed and a federal state was set up called the Epirote League. The league was conquered by Rome in the Third Macedonian War (171–168 BC). See main article: Antigonid dynasty. Antigonus II, a student of Zeno of Citium, spent most of his rule defending Macedon against Epirus and cementing Macedonian power in Greece, first against the Athenians in the Chremonidean War, and then against the Achaean League of Aratus of Sicyon.
Under the Antigonids, Macedonia was often short on funds, the Pangaeum mines were no longer as productive as under Philip II, the wealth from Alexander's campaigns had been used up and the countryside pillaged by the Gallic invasion. A large number of the Macedonian population had also been resettled abroad by Alexander or had chosen to emigrate to the new eastern Greek cities. Up to two thirds of the population emigrated, and the Macedonian army could only count on a levy of 25,000 men, a significantly smaller force than under Philip II. Antigonus II ruled until his death in 239 BC. His son Demetrius II soon died in 229 BC, leaving a child (Philip V) as king, with the general Antigonus Doson as regent. Doson led Macedon to victory in the war against the Spartan king Cleomenes III, and occupied Sparta. Philip V, who came to power when Doson died in 221 BC, was the last Macedonian ruler with both the talent and the opportunity to unite Greece and preserve its independence against the "cloud rising in the west": the ever-increasing power of Rome. He was known as "the darling of Hellas". Under his auspices the Peace of Naupactus (217 BC) brought the latest war between Macedon and the Greek leagues (the Social War, 220-217 BC) to an end, and at this time he controlled all of Greece except Athens, Rhodes and Pergamum. In 215 BC Philip, with his eye on Illyria, formed an alliance with Rome's enemy Hannibal of Carthage, which led to Roman alliances with the Achaean League, Rhodes and Pergamum. The First Macedonian War broke out in 212 BC, and ended inconclusively in 205 BC. Philip continued to wage war against Pergamum and Rhodes for control of the Aegean (204-200 BC) and ignored Roman demands for non-intervention in Greece by invading Attica. In 197 BC, during the Second Macedonian War, Philip was decisively defeated at Cynoscephalae by the Roman proconsul Titus Quinctius Flamininus and Macedon lost all its territories in Greece proper. Southern Greece was now thoroughly brought into the Roman sphere of influence, though it retained nominal autonomy. The end of Antigonid Macedon came when Philip V's son, Perseus, was defeated and captured by the Romans in the Third Macedonian War (171–168 BC). See main article: Hellenistic Greece. During the Hellenistic period the importance of Greece proper within the Greek-speaking world declined sharply. The great centers of Hellenistic culture were Alexandria and Antioch, capitals of Ptolemaic Egypt and Seleucid Syria respectively. The conquests of Alexander greatly widened the horizons of the Greek world, making the endless conflicts between the cities which had marked the 5th and 4th centuries BC seem petty and unimportant. It led to a steady emigration, particularly of the young and ambitious, to the new Greek empires in the east. Many Greeks migrated to Alexandria, Antioch and the many other new Hellenistic cities founded in Alexander's wake, as far away as modern Afghanistan and Pakistan. Independent city states were unable to compete with Hellenistic kingdoms and were usually forced to ally themselves to one of them for defense, giving honors to Hellenistic rulers in return for protection. One example is Athens, which had been decisively defeated by Antipater in the Lamian war (323-322) and had its port in the Piraeus garrisoned by Macedonian troops who supported a conservative oligarchy.
After Demetrius Poliorcetes captured Athens in 307 and restored the democracy, the Athenians honored him and his father Antigonus by placing gold statues of them on the agora and granting them the title of king. Athens later allied itself to Ptolemaic Egypt to throw off Macedonian rule, eventually setting up a religious cult for the Ptolemaic kings and naming one of the city's phyles in honour of Ptolemy for his aid against Macedon. In spite of the Ptolemaic monies and fleets backing their endeavors, Athens and Sparta were defeated by Antigonus II during the Chremonidean War (267-261). Athens was then occupied by Macedonian troops, and run by Macedonian officials. Sparta remained independent, but it was no longer the leading military power in the Peloponnese. The Spartan king Cleomenes III (235–222 BC) staged a military coup against the conservative ephors and pushed through radical social and land reforms in order to increase the size of the shrinking Spartan citizenry able to provide military service and restore Spartan power. Sparta's bid for supremacy was crushed at the Battle of Sellasia (222) by the Achaean league and Macedon, who restored the power of the ephors. Other city states formed federated states in self-defense, such as the Aetolian League (est. 370 BC), the Achaean League (est. 280 BC), the Boeotian league, the "Northern League" (Byzantium, Chalcedon, Heraclea Pontica and Tium) and the "Nesiotic League" of the Cyclades. These federations involved a central government which controlled foreign policy and military affairs, while leaving most of the local governing to the city states, a system termed sympoliteia. In states such as the Achaean league, this also involved the admission of other ethnic groups into the federation with equal rights, in this case, non-Achaeans. The Achaean League was able to drive out the Macedonians from the Peloponnese and free Corinth, which duly joined the league. One of the few city states that managed to maintain full independence from the control of any Hellenistic kingdom was Rhodes. With a skilled navy to protect its trade fleets from pirates and an ideal strategic position covering the routes from the east into the Aegean, Rhodes prospered during the Hellenistic period. It became a center of culture and commerce, its coins were widely circulated and its philosophical schools were among the best in the Mediterranean. After holding out for one year under siege by Demetrius Poliorcetes (305-304 BC), the Rhodians built the Colossus of Rhodes to commemorate their victory. They retained their independence by the maintenance of a powerful navy, by maintaining a carefully neutral posture and acting to preserve the balance of power between the major Hellenistic kingdoms. Initially Rhodes had very close ties with the Ptolemaic kingdom. Rhodes later became a Roman ally against the Seleucids, receiving some territory in Caria for their role in the Roman–Seleucid War (192–188 BC). Rome eventually turned on Rhodes and annexed the island as a Roman province. The west Balkan coast was inhabited by various Illyrian tribes and kingdoms such as the kingdom of the Dalmatae and of the Ardiaei, who often engaged in piracy under Queen Teuta (reigned 231 BC to 227 BC). Further inland was the Illyrian Paeonian Kingdom and the tribe of the Agrianes. Illyrians on the coast of the Adriatic were under the effects and influence of Hellenisation and some tribes adopted Greek, becoming bilingual due to their proximity to the Greek colonies in Illyria.
Illyrians imported weapons and armor from the Ancient Greeks (such as the Illyrian type helmet, originally a Greek type) and also adopted the ornamentation of Ancient Macedon on their shields and their war belts (a single one has been found, dated 3rd century BC at modern Selce e Poshtme, a part of Macedon at the time under Philip V of Macedon). The Odrysian Kingdom was a union of Thracian tribes under the kings of the powerful Odrysian tribe centered around the region of Thrace. Various parts of Thrace were under Macedonian rule under Philip II of Macedon, Alexander the Great, Lysimachus, Ptolemy II, and Philip V but were also often ruled by their own kings. The Thracians and Agrianes were widely used by Alexander as peltasts and light cavalry, forming about one fifth of his army. The Diadochi also used Thracian mercenaries in their armies and they were also used as colonists. The Odrysians used Greek as the language of administration and of the nobility. The nobility also adopted Greek fashions in dress, ornament and military equipment, spreading it to the other tribes. Thracian kings were among the first to be Hellenized. Southern Italy (Magna Graecia) and south-eastern Sicily had been colonized by the Greeks during the 8th century. In 4th century Sicily the leading Greek city and hegemon was Syracuse. During the Hellenistic period the leading figure in Sicily was Agathocles of Syracuse (361–289 BC) who seized the city with an army of mercenaries in 317 BC. Agathocles extended his power throughout most of the Greek cities in Sicily, fought a long war with the Carthaginians, at one point invading Tunisia in 310 and defeating a Carthaginian army there. This was the first time a European force had invaded the region. After this war he controlled most of south-east Sicily and had himself proclaimed king, in imitation of the Hellenistic monarchs of the east. Agathocles then invaded Italy (c. 300 BC) in defense of Tarentum against the Bruttians and Romans, but was unsuccessful. Greeks in pre-Roman Gaul were mostly limited to the Mediterranean coast of Provence. The first Greek colony in the region was Massalia, which became one of the largest trading ports of Mediterranean by the 4th century BC with 6,000 inhabitants. Massalia was also the local hegemon, controlling various coastal Greek cities like Nice and Agde. The coins minted in Massalia have been found in all parts of Ligurian-Celtic Gaul. Celtic coinage was influenced by Greek designs, and Greek letters can be found on various Celtic coins, especially those of Southern France. Traders from Massalia ventured inland deep into France on the Rivers Durance and Rhône, and established overland trade routes deep into Gaul, and to Switzerland and Burgundy. The Hellenistic period saw the Greek alphabet spread into southern Gaul from Massalia (3rd and 2nd centuries BC) and according to Strabo, Massalia was also a center of education, where Celts went to learn Greek. A staunch ally of Rome, Massalia retained its independence until it sided with Pompey in 49 BC and was then taken by Caesar's forces. The Hellenistic states of Asia and Egypt were run by an occupying imperial elite of Greco-Macedonian administrators and governors propped up by a standing army of mercenaries and a small core of Greco-Macedonian settlers. Promotion of immigration from Greece was important in the establishment of this system. 
Hellenistic monarchs ran their kingdoms as royal estates and most of the heavy tax revenues went into the military and paramilitary forces which preserved their rule from any kind of revolution. Macedonian and Hellenistic monarchs were expected to lead their armies on the field, along with a group of privileged aristocratic companions or friends (hetairoi, philoi) who dined and drank with the king and acted as his advisory council. The monarch was also expected to serve as a charitable patron of the people; this public philanthropy could mean building projects and handing out gifts but also promotion of Greek culture and religion. See main article: Ptolemaic Kingdom. Ptolemy, a somatophylax, one of the seven bodyguards who served as Alexander the Great's generals and deputies, was appointed satrap of Egypt after Alexander's death in 323 BC. In 305 BC, he declared himself King Ptolemy I, later known as "Soter" (saviour) for his role in helping the Rhodians during the siege of Rhodes. Ptolemy built new cities such as Ptolemais Hermiou in upper Egypt and settled his veterans throughout the country, especially in the region of the Faiyum. Alexandria, a major center of Greek culture and trade, became his capital city. As Egypt's first port city, it was the main grain exporter in the Mediterranean. The Egyptians begrudgingly accepted the Ptolemies as the successors to the pharaohs of independent Egypt, though the kingdom went through several native revolts. The Ptolemies took on the traditions of the Egyptian Pharaohs, such as marrying their siblings (Ptolemy II was the first to adopt this custom), having themselves portrayed on public monuments in Egyptian style and dress, and participating in Egyptian religious life. The Ptolemaic ruler cult portrayed the Ptolemies as gods, and temples to the Ptolemies were erected throughout the kingdom. Ptolemy I even created a new god, Serapis, who was a combination of two Egyptian gods: Apis and Osiris, with attributes of Greek gods. Ptolemaic administration was, like the Ancient Egyptian bureaucracy, highly centralized and focused on squeezing as much revenue out of the population as possible through tariffs, excise duties, fines, taxes and so forth. A whole class of petty officials, tax farmers, clerks and overseers made this possible. The Egyptian countryside was directly administered by this royal bureaucracy. External possessions such as Cyprus and Cyrene were run by strategoi, military commanders appointed by the crown. Under Ptolemy II, Callimachus, Apollonius of Rhodes, Theocritus and a host of other poets made the city a center of Hellenistic literature. Ptolemy himself was eager to patronise the library, scientific research and individual scholars who lived on the grounds of the library. He and his successors also fought a series of wars with the Seleucids, known as the Syrian wars, over the region of Coele-Syria. Ptolemy IV won the great battle of Raphia (217 BC) against the Seleucids, using native Egyptians trained as phalangites. However these Egyptian soldiers revolted, eventually setting up a native breakaway Egyptian state in the Thebaid between 205-186/5 BC, severely weakening the Ptolemaic state. Ptolemy's family ruled Egypt until the Roman conquest of 30 BC. All the male rulers of the dynasty took the name Ptolemy. Ptolemaic queens, some of whom were the sisters of their husbands, were usually called Cleopatra, Arsinoe, or Berenice.
The most famous member of the line was the last queen, Cleopatra VII, known for her role in the Roman political battles between Julius Caesar and Pompey, and later between Octavian and Mark Antony. Her suicide at the conquest by Rome marked the end of Ptolemaic rule in Egypt though Hellenistic culture continued to thrive in Egypt throughout the Roman and Byzantine periods until the Muslim conquest. See main article: Seleucid Empire. Following division of Alexander's empire, Seleucus I Nicator received Babylonia. From there, he created a new empire which expanded to include much of Alexander's near eastern territories. At the height of its power, it included central Anatolia, the Levant, Mesopotamia, Persia, today's Turkmenistan, Pamir, and parts of Pakistan. It included a diverse population estimated at fifty to sixty million people. Under Antiochus I (c. 324/3 – 261 BC), however, the unwieldy empire was already beginning to shed territories. Pergamum broke away under Eumenes I who defeated a Seleucid army sent against him. The kingdoms of Cappadocia, Bithynia and Pontus were all practically independent by this time as well. Like the Ptolemies, Antiochus I established a dynastic religious cult, deifying his father Seleucus I. Seleucus, officially said to be descended from Apollo, had his own priests and monthly sacrifices. The erosion of the empire continued under Seleucus II, who was forced to fight a civil war (239-236) against his brother Antiochus Hierax and was unable to keep Bactria, Sogdiana and Parthia from breaking away. Hierax carved off most of Seleucid Anatolia for himself, but was defeated, along with his Galatian allies, by Attalus I of Pergamon who now also claimed kingship. The vast Seleucid Empire was, like Egypt, mostly dominated by a Greco-Macedonian political elite. The Greek population of the cities who formed the dominant elite were reinforced by emigration from Greece. These cities included newly founded colonies such as Antioch, the other cities of the Syrian tetrapolis, Seleucia (north of Babylon) and Dura-Europos on the Euphrates. These cities retained traditional Greek city state institutions such as assemblies, councils and elected magistrates, but this was a facade for they were always controlled by the royal Seleucid officials. Apart from these cities, there were also a large number of Seleucid garrisons (choria), military colonies (katoikiai) and Greek villages (komai) which the Seleucids planted throughout the empire to cement their rule. This 'Greco-Macedonian' population (which also included the sons of settlers who had married local women) could make up a phalanx of 35,000 men (out of a total Seleucid army of 80,000) during the reign of Antiochos III. The rest of the army was made up of native troops. Antiochus III ("the Great") conducted several vigorous campaigns to retake all the lost provinces of the empire since the death of Seleucus I. After being defeated by Ptolemy IV's forces at Raphia (217), Antiochus III led a long campaign to the east to subdue the far eastern breakaway provinces (212-205) including Bactria, Parthia, Ariana, Sogdiana, Gedrosia and Drangiana. He was successful, bringing back most of these provinces into at least nominal vassalage and receiving tribute from their rulers. After the death of Ptolemy IV (204), Antiochus took advantage of the weakness of Egypt to conquer Coele-Syria in the fifth Syrian war (202-195). 
He then began expanding his influence into Pergamene territory in Asia and crossed into Europe, fortifying Lysimachia on the Hellespont, but his expansion into Anatolia and Greece was abruptly halted after a decisive defeat at the Battle of Magnesia (190 BC). In the Treaty of Apamea, which ended the war, Antiochus lost all of his territories in Anatolia west of the Taurus and was forced to pay a large indemnity of 15,000 talents. Much of the eastern part of the empire was then conquered by the Parthians under Mithridates I of Parthia in the mid-2nd century BC, yet the Seleucid kings continued to rule a rump state from Syria until the invasion by the Armenian king Tigranes the Great and their ultimate overthrow by the Roman general Pompey.

See main article: Pergamum.

After the death of Lysimachus, one of his officers, Philetaerus, took control of the city of Pergamum in 282 BC along with Lysimachus' war chest of 9,000 talents and declared himself loyal to Seleucus I while remaining de facto independent. His descendant, Attalus I, defeated the invading Galatians and proclaimed himself an independent king. Attalus I (241–197 BC) was a staunch ally of Rome against Philip V of Macedon during the First and Second Macedonian Wars. For his support against the Seleucids in 190 BC, Eumenes II was rewarded with all the former Seleucid domains in Asia Minor. Eumenes II turned Pergamon into a centre of culture and science by establishing the library of Pergamum, which was said to be second only to the library of Alexandria, with 200,000 volumes according to Plutarch. It included a reading room and a collection of paintings. Eumenes II also constructed the Pergamum Altar, with friezes depicting the Gigantomachy, on the acropolis of the city. Pergamum was also a center of parchment (charta pergamena) production. The Attalids ruled Pergamon until Attalus III bequeathed the kingdom to the Roman Republic in 133 BC to avoid a likely succession crisis.

See main article: Galatia.

The Celts who settled in Galatia came through Thrace under the leadership of Leotarios and Leonnorios c. 270 BC. They were defeated by Antiochus I in the 'battle of the Elephants', but were still able to establish a Celtic territory in central Anatolia. The Galatians were well respected as warriors and were widely used as mercenaries in the armies of the successor states. They continued to attack neighboring kingdoms such as Bithynia and Pergamon, plundering and extracting tribute. This came to an end when they sided with the renegade Seleucid prince Antiochus Hierax, who tried to defeat Attalus, the ruler of Pergamon (241–197 BC). Attalus severely defeated the Gauls, forcing them to confine themselves to Galatia. The theme of the Dying Gaul (a famous statue displayed in Pergamon) remained a favorite in Hellenistic art for a generation, signifying the victory of the Greeks over a noble enemy. In the early 2nd century BC, the Galatians became allies of Antiochus the Great, the last Seleucid king to attempt to regain suzerainty over Asia Minor. In 189 BC, Rome sent Gnaeus Manlius Vulso on an expedition against the Galatians, and Galatia was henceforth dominated by Rome through regional rulers.
After their defeats by Pergamon and Rome, the Galatians slowly became hellenized, and they were called "Gallo-Graeci" by the historian Justin, as well as Ἑλληνογαλάται (Hellēnogalátai) in Greek by Diodorus Siculus in his Bibliotheca historica v.32.5, who wrote that they were "called Helleno-Galatians because of their connection with the Greeks."

See main article: Bithynia.

The Bithynians were a Thracian people living in northwest Anatolia. After Alexander's conquests the region of Bithynia came under the rule of the native king Bas, who defeated Calas, a general of Alexander the Great, and maintained the independence of Bithynia. His son, Zipoetes I of Bithynia, maintained this autonomy against Lysimachus and Seleucus I, and assumed the title of king (basileus) in 297 BC. His son and successor, Nicomedes I, founded Nicomedia, which soon rose to great prosperity, and during his long reign (c. 278 – c. 255 BC), as well as those of his successors, the kingdom of Bithynia held a considerable place among the minor monarchies of Anatolia. Nicomedes also invited the Celtic Galatians into Anatolia as mercenaries, and they later turned on his son Prusias I, who defeated them in battle. Their last king, Nicomedes IV, was unable to maintain himself against Mithridates VI of Pontus, and, after being restored to his throne by the Roman Senate, he bequeathed his kingdom by will to the Roman Republic (74 BC).

See main article: Cappadocia.

Cappadocia, a mountainous region situated between Pontus and the Taurus mountains, was ruled by an Iranian dynasty. Ariarathes I (332–322 BC) was the satrap of Cappadocia under the Persians, and after the conquests of Alexander he retained his post. After Alexander's death he was defeated by Eumenes and crucified in 322 BC, but his son, Ariarathes II, managed to regain the throne and maintain his autonomy against the warring Diadochi. In 255 BC, Ariarathes III took the title of king and married Stratonice, a daughter of Antiochus II, remaining an ally of the Seleucid kingdom. Under Ariarathes IV, Cappadocia came into relations with Rome, first as a foe espousing the cause of Antiochus the Great, then as an ally against Perseus of Macedon, and finally in a war against the Seleucids. Ariarathes V also waged war with Rome against Aristonicus, a claimant to the throne of Pergamon, and their forces were annihilated in 130 BC. This defeat allowed Pontus to invade and conquer the kingdom.

See main article: Kingdom of Pontus.

The Kingdom of Pontus was a Hellenistic kingdom on the southern coast of the Black Sea. It was founded by Mithridates I in 291 BC and lasted until its conquest by the Roman Republic in 63 BC. Despite being ruled by a dynasty descended from the Persian Achaemenids, it became hellenized due to the influence of the Greek cities on the Black Sea and its neighboring kingdoms. Pontic culture was a mix of Greek and Iranian elements; the most hellenized parts of the kingdom were on the coast, populated by Greek colonies such as Trapezus and Sinope, the latter of which became the capital of the kingdom. Epigraphic evidence also shows extensive Hellenistic influence in the interior. During the reign of Mithridates II, Pontus was allied with the Seleucids through dynastic marriages. By the time of Mithridates VI Eupator, Greek was the official language of the kingdom, though Anatolian languages continued to be spoken.
The kingdom grew to its largest extent under Mithridates VI, who conquered Colchis, Cappadocia, Paphlagonia, Bithynia, Lesser Armenia, the Bosporan Kingdom, the Greek colonies of the Tauric Chersonesos and, for a brief time, the Roman province of Asia. Mithridates VI, himself of mixed Persian and Greek ancestry, presented himself as the protector of the Greeks against the 'barbarians' of Rome, styling himself as "King Mithridates Eupator Dionysus" and as the "great liberator". Mithridates also depicted himself with the anastole hairstyle of Alexander and used the symbolism of Herakles, from whom the Macedonian kings claimed descent. After a long struggle with Rome in the Mithridatic wars, Pontus was defeated; part of it was incorporated into the Roman Republic as the province of Bithynia, while Pontus' eastern half survived as a client kingdom.

See main article: Kingdom of Armenia (antiquity).

Orontid Armenia formally passed to the empire of Alexander the Great following his conquest of Persia. Alexander appointed an Orontid named Mithranes to govern Armenia. Armenia later became a vassal state of the Seleucid Empire, but it maintained a considerable degree of autonomy, retaining its native rulers. Towards the end of 212 BC the country was divided into two kingdoms, Greater Armenia and Armenia Sophene, including Commagene or Armenia Minor. The kingdoms became so independent from Seleucid control that Antiochus III the Great waged war on them during his reign and replaced their rulers. After the Seleucid defeat at the Battle of Magnesia in 190 BC, the kings of Sophene and Greater Armenia revolted and declared their independence, with Artaxias becoming the first king of the Artaxiad dynasty of Armenia in 188 BC.

During the reign of the Artaxiads, Armenia went through a period of hellenization. Numismatic evidence shows Greek artistic styles and the use of the Greek language. Some coins describe the Armenian kings as "Philhellenes". During the reign of Tigranes the Great (95–55 BC), the kingdom of Armenia reached its greatest extent, containing many Greek cities, including the entire Syrian tetrapolis. Cleopatra, the wife of Tigranes the Great, invited Greeks such as the rhetor Amphicrates and the historian Metrodorus of Scepsis to the Armenian court, and—according to Plutarch—when the Roman general Lucullus seized the Armenian capital, Tigranocerta, he found a troupe of Greek actors who had arrived to perform plays for Tigranes. Tigranes' successor Artavasdes II even composed Greek tragedies himself.

See main article: Parthian Empire.

Parthia was a north-eastern Iranian satrapy of the Achaemenid empire which later passed to Alexander's empire. Under the Seleucids, Parthia was governed by various Greek satraps such as Nicanor and Philip. In 247 BC, following the death of Antiochus II Theos, Andragoras, the Seleucid governor of Parthia, proclaimed his independence and began minting coins showing himself wearing a royal diadem and claiming kingship. He ruled until 238 BC, when Arsaces, the leader of the Parni tribe, conquered Parthia, killing Andragoras and inaugurating the Arsacid Dynasty. Antiochus III recaptured Arsacid-controlled territory in 209 BC from Arsaces II. Arsaces II sued for peace and became a vassal of the Seleucids. It was not until the reign of Phraates I (168–165 BC) that the Arsacids would again begin to assert their independence.
During the reign of Mithridates I of Parthia, Arsacid control expanded to include Herat (in 167 BC), Babylonia (in 144 BC), Media (in 141 BC), Persia (in 139 BC), and large parts of Syria (in the 110s BC). The Seleucid–Parthian wars continued as the Seleucids invaded Mesopotamia under Antiochus VII Sidetes (r. 138–129 BC), but he was eventually killed in a Parthian counterattack. After the fall of the Seleucid dynasty, the Parthians fought frequently against neighbouring Rome in the Roman–Parthian Wars (66 BC – 217 AD). Abundant traces of Hellenism continued under the Parthian empire. The Parthians used Greek, as well as their own Parthian language (though to a lesser extent than Greek), as languages of administration, and also used Greek drachmas as coinage. They enjoyed Greek theater, and Greek art influenced Parthian art. The Parthians continued worshipping Greek gods syncretized together with Iranian deities. Their rulers established ruler cults in the manner of Hellenistic kings and often used Hellenistic royal epithets.

See main article: Nabatean Kingdom.

The Nabatean Kingdom was an Arab state located between the Sinai Peninsula and the Arabian Peninsula. Its capital was Petra, an important trading city on the incense route. The Nabateans resisted the attacks of Antigonus and were allies of the Hasmoneans in their struggle against the Seleucids, but later fought against Herod the Great. The hellenization of the Nabateans occurred relatively late in comparison to the surrounding regions. Nabatean material culture does not show any Greek influence until the reign of Aretas III Philhellene in the 1st century BC. Aretas captured Damascus and built the Petra pool complex and gardens in the Hellenistic style. Though the Nabateans originally worshipped their traditional gods in symbolic form such as stone blocks or pillars, during the Hellenistic period they began to identify their gods with Greek gods and depict them in figurative forms influenced by Greek sculpture. Nabatean art shows Greek influences, and paintings have been found depicting Dionysian scenes. They also slowly adopted Greek as a language of commerce along with Aramaic and Arabic.

See main article: Coele-Syria.

During the Hellenistic period, Judea became a frontier region between the Seleucid Empire and Ptolemaic Egypt and therefore was often the frontline of the Syrian wars, changing hands several times during these conflicts. Under the Hellenistic kingdoms, Judea was ruled by the hereditary office of the High Priest of Israel as a Hellenistic vassal. This period also saw the rise of Hellenistic Judaism, which first developed in the Jewish diaspora of Alexandria and Antioch and then spread to Judea. The major literary product of this cultural syncretism is the Septuagint translation of the Hebrew Bible from Biblical Hebrew and Biblical Aramaic to Koiné Greek. The reason for the production of this translation seems to be that many of the Alexandrian Jews had lost the ability to speak Hebrew and Aramaic. Between 301 and 219 BC the Ptolemies ruled Judea in relative peace, and Jews often found themselves working in the Ptolemaic administration and army, which led to the rise of a Hellenized Jewish elite class (e.g. the Tobiads). The wars of Antiochus III brought the region into the Seleucid empire; Jerusalem fell to his control in 198 BC, and the Temple was repaired and provided with money and tribute.
Antiochus IV Epiphanes sacked Jerusalem and looted the Temple in 169 BC after disturbances in Judea during his abortive invasion of Egypt. Antiochus then banned key Jewish religious rites and traditions in Judea. He may have been attempting to Hellenize the region and unify his empire, and the Jewish resistance to this eventually led to an escalation of violence. Whatever the case, tensions between pro- and anti-Seleucid Jewish factions led in 174–135 BC to the Maccabean Revolt of Judas Maccabeus (whose victory is celebrated in the Jewish festival of Hanukkah). Modern interpretations see this period as a civil war between Hellenized and orthodox forms of Judaism.

Out of this revolt was formed an independent Jewish kingdom known as the Hasmonean Dynasty, which lasted from 165 BC to 63 BC. The Hasmonean Dynasty eventually disintegrated in a civil war, which coincided with civil wars in Rome. The last Hasmonean ruler, Antigonus II Mattathias, was captured by Herod and executed in 37 BC. In spite of originally being a revolt against Greek overlordship, the Hasmonean kingdom, and also the Herodian kingdom which followed, gradually became more and more hellenized. From 37 BC to 4 BC, Herod the Great ruled as a Jewish-Roman client king appointed by the Roman Senate. He considerably enlarged the Temple (see Herod's Temple), making it one of the largest religious structures in the world. The style of the enlarged temple and other Herodian architecture shows significant Hellenistic architectural influence. His son, Herod Archelaus, ruled from 4 BC to 6 AD, when he was deposed and Roman Judea was formed.

See main article: Greco-Bactrian kingdom.

The Greek kingdom of Bactria began as a breakaway satrapy of the Seleucid empire, which, because of the size of the empire, had significant freedom from central control. Between 255 and 246 BC, the governor of Bactria, Sogdiana and Margiana (most of present-day Afghanistan), one Diodotus, took this process to its logical extreme and declared himself king. Diodotus II, son of Diodotus, was overthrown in about 230 BC by Euthydemus, possibly the satrap of Sogdiana, who then started his own dynasty. In c. 210 BC, the Greco-Bactrian kingdom was invaded by a resurgent Seleucid empire under Antiochus III. While victorious in the field, it seems Antiochus came to realise that there were advantages in the status quo (perhaps sensing that Bactria could not be governed from Syria), and married one of his daughters to Euthydemus's son, thus legitimising the Greco-Bactrian dynasty. Soon afterwards the Greco-Bactrian kingdom seems to have expanded, possibly taking advantage of the defeat of the Parthian king Arsaces II by Antiochus.

According to Strabo, the Greco-Bactrians seem to have had contacts with China through the silk road trade routes (Strabo, XI.XI.I). Indian sources also maintain religious contact between Buddhist monks and the Greeks, and some Greco-Bactrians did convert to Buddhism. Demetrius, son and successor of Euthydemus, invaded north-western India in 180 BC, after the destruction of the Mauryan Empire there; the Mauryans were probably allies of the Bactrians (and Seleucids). The exact justification for the invasion remains unclear, but by about 175 BC, the Greeks ruled over parts of north-western India. This period also marks the beginning of the obfuscation of Greco-Bactrian history. Demetrius possibly died about 180 BC; numismatic evidence suggests the existence of several other kings shortly thereafter.
It is probable that at this point the Greco-Bactrian kingdom split into several semi-independent regions for some years, often warring amongst themselves. Heliocles was the last Greek to clearly rule Bactria, his power collapsing in the face of central Asian tribal invasions (Scythian and Yuezhi) by about 130 BC. However, Greek urban civilisation seems to have continued in Bactria after the fall of the kingdom, having a hellenising effect on the tribes which had displaced Greek rule. The Kushan Empire which followed continued to use Greek on its coinage, and Greeks continued being influential in the empire.

See main article: Indo-Greeks.

The separation of the Indo-Greek kingdom from the Greco-Bactrian kingdom resulted in an even more isolated position, and thus the details of the Indo-Greek kingdom are even more obscure than for Bactria. Many supposed kings in India are known only because of coins bearing their name. The numismatic evidence, together with archaeological finds and the scant historical records, suggests that the fusion of eastern and western cultures reached its peak in the Indo-Greek kingdom. After Demetrius' death, civil wars between Bactrian kings in India allowed Apollodotus I (from c. 180/175 BC) to make himself independent as the first proper Indo-Greek king (who did not rule from Bactria). Large numbers of his coins have been found in India, and he seems to have reigned in Gandhara as well as western Punjab. Apollodotus I was succeeded by or ruled alongside Antimachus II, likely the son of the Bactrian king Antimachus I. In about 155 (or 165) BC he seems to have been succeeded by the most successful of the Indo-Greek kings, Menander I. Menander converted to Buddhism, and seems to have been a great patron of the religion; he is remembered in some Buddhist texts as 'Milinda'. He also expanded the kingdom further east into Punjab, though these conquests were rather ephemeral.

After the death of Menander (c. 130 BC), the kingdom appears to have fragmented, with several 'kings' attested contemporaneously in different regions. This inevitably weakened the Greek position, and territory seems to have been lost progressively. Around 70 BC, the western regions of Arachosia and Paropamisadae were lost to tribal invasions, presumably by those tribes responsible for the end of the Bactrian kingdom. The resulting Indo-Scythian kingdom seems to have gradually pushed the remaining Indo-Greek kingdom towards the east. The Indo-Greek kingdom appears to have lingered on in western Punjab until about 10 AD, at which time it was finally ended by the Indo-Scythians.

After conquering the Indo-Greeks, the Kushan Empire took over Greco-Buddhism, the Greek language, Greek script, Greek coinage and artistic styles. Greeks continued being an important part of the cultural world of India for generations. The depictions of the Buddha appear to have been influenced by Greek culture: Buddha representations of the Gandhara period often showed Buddha under the protection of Herakles. Several references in Indian literature praise the knowledge of the Yavanas, or Greeks. The Mahabharata compliments them as "the all-knowing Yavanas" (sarvajnaa yavanaa); e.g., "The Yavanas, O king, are all-knowing; the Suras are particularly so. The mlecchas are wedded to the creations of their own fancy", such as flying machines that are generally called vimanas.
The "Brihat-Samhita" of the mathematician Varahamihira says: "The Greeks, though impure, must be honored since they were trained in sciences and therein, excelled others....." . Hellenistic culture was at its height of world influence in the Hellenistic period. Hellenism or at least Philhellenism reached most regions on the frontiers of the Hellenistic kingdoms. Though some of these regions were not ruled by Greeks or even Greek speaking elites, certain Hellenistic influences can be seen in the historical record and material culture of these regions. Other regions had established contact with Greek colonies before this period, and simply saw a continued process of Hellenization and intermixing. Before the Hellenistic period, Greek colonies had been established on the coast of the Crimean and Taman peninsulas. The Bosporan Kingdom was a multi-ethnic kingdom of Greek city states and local tribal peoples such as the Maeotians, Thracians, Crimean Scythians and Cimmerians under the Spartocid dynasty (438–110 BC). The Spartocids were a hellenized Thracian family from Panticapaeum. The Bosporans had long lasting trade contacts with the Scythian peoples of the Pontic-Caspian steppe, and Hellenistic influence can be seen in the Scythian settlements of the Crimea, such as in the Scythian Neapolis. Scythian pressure on the Bosporan kingdom under Paerisades V led to its eventual vassalage under the Pontic king Mithradates VI for protection, c. 107 BC. It later became a Roman client state. Other Scythians on the steppes of Central Asia came into contact with Hellenistic culture through the Greeks of Bactria. Many Scythian elites purchased Greek products and some Scythian art shows Greek influences. At least some Scythians seem to have become Hellenized, because we know of conflicts between the elites of the Scythian kingdom over the adoption of Greek ways. These Hellenized Scythians were known as the "young Scythians". The peoples around Pontic Olbia, known as the Callipidae, were intermixed and Hellenized Greco-Scythians. The Greek colonies on the west coast of the Black sea, such as Istros, Tomi and Callatis traded with the Thracian Getae who occupied modern-day Dobruja. From the 6th century BC on, the multiethnic people in this region gradually intermixed with each other, creating a Greco-Getic populace. Numismatic evidence shows that Hellenic influence penetrated further inland. Getae in Wallachia and Moldavia coined Getic tetradrachms, Getic imitations of Macedonian coinage. The ancient Georgian kingdoms had trade relations with the Greek city-states on the Black Sea coast such as Poti and Sukhumi. The kingdom of Colchis, which later became a Roman client state, received Hellenistic influences from the Black Sea Greek colonies. In Arabia, Bahrain, which was referred to by the Greeks as Tylos, the centre of pearl trading, when Nearchus came to discover it serving under Alexander the Great. The Greek admiral Nearchus is believed to have been the first of Alexander's commanders to visit these islands. It is not known whether Bahrain was part of the Seleucid Empire, although the archaeological site at Qalat Al Bahrain has been proposed as a Seleucid base in the Persian Gulf. 
Alexander had planned to settle the eastern shores of the Persian Gulf with Greek colonists, and although it is not clear that this happened on the scale he envisaged, Tylos was very much part of the Hellenised world: the language of the upper classes was Greek (although Aramaic was in everyday use), while Zeus was worshipped in the form of the Arabian sun-god Shams. Tylos even became the site of Greek athletic contests.

Carthage was a Phoenician colony on the coast of Tunisia. Carthaginian culture came into contact with the Greeks through Punic colonies in Sicily and through their widespread Mediterranean trade network. While the Carthaginians retained their Punic culture and language, they did adopt some Hellenistic ways, one of the most prominent of which was their military practices. In 550 BC, Mago I of Carthage began a series of military reforms that drew on Greek models. The core of Carthage's military was the Greek-style phalanx formed by citizen hoplite spearmen who had been conscripted into service, though their armies also included large numbers of mercenaries. After their defeat in the First Punic War, Carthage hired a Spartan mercenary captain, Xanthippus of Carthage, who reformed the Carthaginian military along Macedonian army lines.

By the 2nd century BC, the kingdom of Numidia also began to see Hellenistic culture influence its art and architecture. The Numidian royal monument at Chemtou is one example of Numidian Hellenized architecture. Reliefs on the monument also show that the Numidians had adopted Greco-Macedonian type armor and shields for their soldiers.

Ptolemaic Egypt was the center of Hellenistic influence in Africa, and Greek colonies also thrived in the region of Cyrene, Libya. The kingdom of Meroë was in constant contact with Ptolemaic Egypt, and Hellenistic influences can be seen in its art and archaeology; there was a temple to Serapis, the Greco-Egyptian god.

Widespread Roman interference in the Greek world was probably inevitable given the general manner of the ascendancy of the Roman Republic. This Roman-Greek interaction began as a consequence of the Greek city-states located along the coast of southern Italy. Rome had come to dominate the Italian peninsula and desired the submission of the Greek cities to its rule. Although they initially resisted, allying themselves with Pyrrhus of Epirus and defeating the Romans at several battles, the Greek cities were unable to maintain this position and were absorbed by the Roman Republic. Shortly afterwards, Rome became involved in Sicily, fighting against the Carthaginians in the First Punic War. The end result was the complete conquest of Sicily, including its previously powerful Greek cities, by the Romans.

Roman entanglement in the Balkans began when Illyrian piratical raids on Roman merchants led to invasions of Illyria (the First and Second Illyrian Wars). Tension between Macedon and Rome increased when the young king of Macedon, Philip V, harbored one of the chief pirates, Demetrius of Pharos (a former client of Rome). As a result, in an attempt to reduce Roman influence in the Balkans, Philip allied himself with Carthage after Hannibal had dealt the Romans a massive defeat at the Battle of Cannae (216 BC) during the Second Punic War.
Forcing the Romans to fight on another front when they were at a nadir of manpower gained Philip the lasting enmity of the Romans—the only real result of the somewhat insubstantial First Macedonian War (215–202 BC). Once the Second Punic War had been resolved, and the Romans had begun to regather their strength, they looked to re-assert their influence in the Balkans and to curb the expansion of Philip. A pretext for war was provided by Philip's refusal to end his war with Attalid Pergamum and Rhodes, both Roman allies. The Romans, also allied with the Aetolian League of Greek city-states (which resented Philip's power), thus declared war on Macedon in 200 BC, starting the Second Macedonian War. This ended with a decisive Roman victory at the Battle of Cynoscephalae (197 BC). Like most Roman peace treaties of the period, the resultant 'Peace of Flamininus' was designed utterly to crush the power of the defeated party: a massive indemnity was levied, Philip's fleet was surrendered to Rome, and Macedon was effectively returned to its ancient boundaries, losing influence over the city-states of southern Greece, and land in Thrace and Asia Minor. The result was the end of Macedon as a major power in the Mediterranean.

As a result of the confusion in Greece at the end of the Second Macedonian War, the Seleucid Empire also became entangled with the Romans. The Seleucid Antiochus III had allied with Philip V of Macedon in 203 BC, agreeing that they should jointly conquer the lands of the boy-king of Egypt, Ptolemy V. After defeating Ptolemy in the Fifth Syrian War, Antiochus concentrated on occupying the Ptolemaic possessions in Asia Minor. However, this brought Antiochus into conflict with Rhodes and Pergamum, two important Roman allies, and began a 'cold war' between Rome and Antiochus (not helped by the presence of Hannibal at the Seleucid court). Meanwhile, in mainland Greece, the Aetolian League, which had sided with Rome against Macedon, now grew to resent the Roman presence in Greece. This presented Antiochus III with a pretext to invade Greece and 'liberate' it from Roman influence, thus starting the Roman-Syrian War (192–188 BC). In 191 BC, the Romans under Manius Acilius Glabrio routed him at Thermopylae and obliged him to withdraw to Asia. During the course of this war Roman troops moved into Asia for the first time, where they defeated Antiochus again at the Battle of Magnesia (190 BC). A crippling treaty was imposed on Antiochus, with Seleucid possessions in Asia Minor removed and given to Rhodes and Pergamum, the size of the Seleucid navy reduced, and a massive war indemnity exacted.

Thus, in less than twenty years, Rome had destroyed the power of one of the successor states, crippled another, and firmly entrenched its influence over Greece. This was primarily a result of the over-ambition of the Macedonian kings and their unintended provocation of Rome, though Rome was quick to exploit the situation. In another twenty years, the Macedonian kingdom was no more. Seeking to re-assert Macedonian power and Greek independence, Philip V's son Perseus incurred the wrath of the Romans, resulting in the Third Macedonian War (171–168 BC). Victorious, the Romans abolished the Macedonian kingdom, replacing it with four puppet republics; these lasted a further twenty years before Macedon was formally annexed as a Roman province (146 BC) after yet another rebellion under Andriscus. Rome now demanded that the Achaean League, the last stronghold of Greek independence, be dissolved.
The Achaeans refused and declared war on Rome. Most of the Greek cities rallied to the Achaeans' side, and even slaves were freed to fight for Greek independence. The Roman consul Lucius Mummius advanced from Macedonia and defeated the Greeks at Corinth, which was razed to the ground. In 146 BC, the Greek peninsula, though not the islands, became a Roman protectorate. Roman taxes were imposed, except in Athens and Sparta, and all the cities had to accept rule by Rome's local allies.

The Attalid dynasty of Pergamum lasted little longer; a Roman ally until the end, its final king Attalus III died in 133 BC without an heir, and, taking the alliance to its natural conclusion, willed Pergamum to the Roman Republic.

The final Greek resistance came in 88 BC, when King Mithridates of Pontus rebelled against Rome, captured Roman-held Anatolia, and massacred up to 100,000 Romans and Roman allies across Asia Minor. Many Greek cities, including Athens, overthrew their Roman puppet rulers and joined him in the Mithridatic wars. He was driven out of Greece by the Roman general Lucius Cornelius Sulla, who laid siege to Athens and razed the city. Mithridates was finally defeated by Gnaeus Pompeius Magnus (Pompey the Great) in 65 BC. Further ruin was brought to Greece by the Roman civil wars, which were partly fought in Greece. Finally, in 27 BC, Augustus directly annexed Greece to the new Roman Empire as the province of Achaea. The struggles with Rome had left Greece depopulated and demoralised. Nevertheless, Roman rule at least brought an end to warfare, and cities such as Athens, Corinth, Thessaloniki and Patras soon recovered their prosperity.

By contrast, having so firmly entrenched themselves in Greek affairs, the Romans now completely ignored the rapidly disintegrating Seleucid empire (perhaps because it posed no threat), and left the Ptolemaic kingdom to decline quietly, while acting as a protector of sorts, insofar as it stopped other powers from taking Egypt over (including the famous line-in-the-sand incident when the Seleucid Antiochus IV Epiphanes tried to invade Egypt). Eventually, instability in the near east resulting from the power vacuum left by the collapse of the Seleucid Empire caused the Roman proconsul Pompey the Great to abolish the Seleucid rump state, absorbing much of Syria into the Roman Republic.

Famously, the end of Ptolemaic Egypt came as the final act in the republican civil war between the Roman triumvirs Mark Antony and Octavian, the future Augustus. After the defeat of Antony and his lover, the last Ptolemaic monarch, Cleopatra VII, at the Battle of Actium, Octavian invaded Egypt and took it as his own personal fiefdom. He thereby completed the destruction of both the Hellenistic kingdoms and the Roman Republic, and ended (in hindsight) the Hellenistic era.

In some fields Hellenistic culture thrived, particularly in its preservation of the past. The states of the Hellenistic period were deeply fixated with the past and its seemingly lost glories. The preservation of many classical and archaic works of art and literature (including the works of the three great classical tragedians, Aeschylus, Sophocles, and Euripides) is due to the efforts of the Hellenistic Greeks. The museum and library of Alexandria were the center of this conservationist activity. With the support of royal stipends, Alexandrian scholars collected, translated, copied, classified and critiqued every book they could find.
Most of the great literary figures of the Hellenistic period studied at Alexandria and conducted research there. They were scholar-poets, writing not only poetry but treatises on Homer and other archaic and classical Greek literature. Athens retained its position as the most prestigious seat of higher education, especially in the domains of philosophy and rhetoric, with considerable libraries and philosophical schools. Alexandria had the monumental Museum (i.e. research center) and Library of Alexandria, which was estimated to have had 700,000 volumes. The city of Pergamon also had a large library and became a major center of book production. The island of Rhodes had a library and also boasted a famous finishing school for politics and diplomacy. Libraries were also present in Antioch, Pella, and Kos. Cicero was educated in Athens and Mark Antony in Rhodes. Antioch was founded as a metropolis and center of Greek learning which retained its status into the era of Christianity. Seleucia replaced Babylon as the metropolis of the lower Tigris.

The spread of Greek culture and language throughout the Near East and Asia owed much to the development of newly founded cities and deliberate colonization policies by the successor states, which in turn were necessary for maintaining their military forces. Settlements such as Ai-Khanoum, situated on trade routes, allowed Greek culture to mix and spread. The language of Philip II's and Alexander's court and army (which was made up of various Greek and non-Greek speaking peoples) was a version of Attic Greek, and over time this language developed into Koine, the lingua franca of the successor states. The identification of local gods with similar Greek deities, a practice termed 'Interpretatio graeca', facilitated the building of Greek-style temples, and the Greek culture in the cities also meant that buildings such as gymnasia and theaters became common. Many cities maintained nominal autonomy while under the rule of the local king or satrap, and often had Greek-style institutions. Greek dedications, statues, architecture and inscriptions have all been found. However, local cultures were not replaced, and mostly went on as before, but now with a new Greco-Macedonian or otherwise Hellenized elite. An example that shows the spread of Greek theater is Plutarch's story of the death of Crassus, in which his head was taken to the Parthian court and used as a prop in a performance of The Bacchae. Theaters have also been found: for example, in Ai-Khanoum on the edge of Bactria, the theater has 35 rows – larger than the theater in Babylon.

The spread of Greek influence and language is also shown through ancient Greek coinage. Portraits became more realistic, and the reverse of the coin was often used to display a propaganda image, commemorating an event or displaying the image of a favored god. The use of Greek-style portraits and Greek language continued under the Roman, Parthian and Kushan empires, even as the use of Greek was in decline.

The concept of Hellenization, meaning the adoption of Greek culture in non-Greek regions, has long been controversial. Undoubtedly Greek influence did spread through the Hellenistic realms, but to what extent, and whether this was a deliberate policy or mere cultural diffusion, have been hotly debated. It seems likely that Alexander himself pursued policies which led to Hellenization, such as the foundation of new cities and Greek colonies.
While it may have been a deliberate attempt to spread Greek culture (or, as Arrian says, "to civilise the natives"), it is more likely that it was a series of pragmatic measures designed to aid in the rule of his enormous empire. Cities and colonies were centers of administrative control and Macedonian power in a newly conquered region. Alexander also seems to have attempted to create a mixed Greco-Persian elite class, as shown by the Susa weddings and his adoption of some forms of Persian dress and court culture. He also brought Persian and other non-Greek peoples into his military and even the elite cavalry units of the companion cavalry. Again, it is probably better to see these policies as a pragmatic response to the demands of ruling a large empire than as an idealized attempt to bring Greek culture to the 'barbarians'. This approach was bitterly resented by the Macedonians and discarded by most of the Diadochi after Alexander's death. These policies can also be interpreted as the result of Alexander's possible megalomania during his later years.

After Alexander's death in 323 BC, the influx of Greek colonists into the new realms continued to spread Greek culture into Asia. The founding of new cities and military colonies continued to be a major part of the Successors' struggle for control of any particular region, and these continued to be centers of cultural diffusion. The spread of Greek culture under the Successors seems mostly to have occurred with the spreading of Greeks themselves, rather than as an active policy. Throughout the Hellenistic world, these Greco-Macedonian colonists considered themselves by and large superior to the native "barbarians" and excluded most non-Greeks from the upper echelons of courtly and government life. Most of the native population was not Hellenized, had little access to Greek culture and often found themselves discriminated against by their Hellenic overlords. Gymnasia and their Greek education, for example, were for Greeks only. Greek cities and colonies may have exported Greek art and architecture as far as the Indus, but these were mostly enclaves of Greek culture for the transplanted Greek elite. The degree of influence that Greek culture had throughout the Hellenistic kingdoms was therefore highly localized and based mostly on a few great cities like Alexandria and Antioch. Some natives did learn Greek and adopt Greek ways, but this was mostly limited to a few local elites who were allowed to retain their posts by the Diadochi, and also to a small number of mid-level administrators who acted as intermediaries between the Greek-speaking upper class and their subjects. In the Seleucid Empire, for example, this group amounted to only 2.5 percent of the official class.

Hellenistic art nevertheless had a considerable influence on the cultures that had been affected by the Hellenistic expansion. As far away as the Indian subcontinent, Hellenistic influence on Indian art was broad and far-reaching, and had effects for several centuries following the forays of Alexander the Great.

Despite their initial reluctance, the Successors seem to have later deliberately naturalized themselves to their different regions, presumably in order to help maintain control of the population. In the Ptolemaic kingdom, we find some Egyptianized Greeks from the 2nd century BC onwards. In the Indo-Greek kingdom we find kings who were converts to Buddhism (e.g., Menander). The Greeks in the regions therefore gradually became 'localized', adopting local customs as appropriate.
In this way, hybrid 'Hellenistic' cultures naturally emerged, at least among the upper echelons of society. The trends of Hellenization were therefore accompanied by Greeks adopting native ways over time, but this varied widely by place and by social class. The farther away from the Mediterranean and the lower in social status, the more likely a colonist was to adopt local ways, while the Greco-Macedonian elites and royal families usually remained thoroughly Greek and viewed most non-Greeks with disdain. It was not until Cleopatra VII that a Ptolemaic ruler bothered to learn the Egyptian language of their subjects.

See main article: Hellenistic religion.

In the Hellenistic period, there was much continuity in Greek religion: the Greek gods continued to be worshiped, and the same rites were practiced as before. However, the socio-political changes brought on by the conquest of the Persian empire and Greek emigration abroad meant that change also came to religious practices. This varied greatly by location. Athens, Sparta and most cities in the Greek mainland did not see much religious change or new gods (with the exception of the Egyptian Isis in Athens), while the multi-ethnic Alexandria had a very varied group of gods and religious practices, including Egyptian, Jewish and Greek. Greek emigres brought their Greek religion everywhere they went, even as far as India and Afghanistan. Non-Greeks also had more freedom to travel and trade throughout the Mediterranean, and in this period we can see Egyptian gods such as Serapis, and the Syrian gods Atargatis and Hadad, as well as a Jewish synagogue, all coexisting on the island of Delos alongside classical Greek deities. A common practice was to identify Greek gods with native gods that had similar characteristics, and this created new fusions like Zeus-Ammon, Aphrodite Hagne (a Hellenized Atargatis) and Isis-Demeter. Greek emigres faced individual religious choices they had not faced in their home cities, where the gods they worshiped were dictated by tradition.

Hellenistic monarchies were closely associated with the religious life of the kingdoms they ruled. This had already been a feature of Macedonian kingship, which had priestly duties. Hellenistic kings adopted patron deities as protectors of their house and sometimes claimed descent from them. The Seleucids, for example, took on Apollo as patron, the Antigonids had Herakles, and the Ptolemies claimed Dionysus among others. The worship of dynastic ruler cults was also a feature of this period, most notably in Egypt, where the Ptolemies adopted earlier Pharaonic practice and established themselves as god-kings. These cults were usually associated with a specific temple in honor of the ruler, such as the Ptolemaieia at Alexandria, and had their own festivals and theatrical performances. The setting up of ruler cults was based more on the systematized honors offered to the kings (sacrifice, proskynesis, statues, altars, hymns), which put them on a par with the gods (isotheism), than on actual belief in their divine nature. According to Peter Green, these cults did not produce genuine belief in the divinity of rulers among the Greeks and Macedonians. The worship of Alexander was also popular, as in the long-lived cult at Erythrae and, of course, at Alexandria, where his tomb was located.

The Hellenistic age also saw a rise in disillusionment with traditional religion.
The rise of philosophy and the sciences had removed the gods from many of their traditional domains, such as their role in the movement of the heavenly bodies and natural disasters. The Sophists proclaimed the centrality of humanity and agnosticism; the belief in Euhemerism (the view that the gods were simply ancient kings and heroes) became popular. The popular philosopher Epicurus promoted a view of disinterested gods living far away from the human realm in metakosmia. The apotheosis of rulers also brought the idea of divinity down to earth. While there does seem to have been a substantial decline in religiosity, this was mostly reserved for the educated classes.

Magic was practiced widely, and this, too, was a continuation from earlier times. Throughout the Hellenistic world, people would consult oracles, and use charms and figurines to deter misfortune or to cast spells. Also developed in this era was the complex system of astrology, which sought to determine a person's character and future in the movements of the sun, moon, and planets. Astrology was widely associated with the cult of Tyche (luck, fortune), which grew in popularity during this period.

The Hellenistic period saw the rise of New Comedy, the only surviving representative texts being those of Menander (born 342/1 BC). Only one play, Dyskolos, survives in its entirety. The plots of this new Hellenistic comedy of manners were more domestic and formulaic, stereotypical low-born characters such as slaves became more important, the language was colloquial, and major motifs included escapism, marriage, romance and luck (Tyche). Though no Hellenistic tragedy remains intact, tragedies were still widely produced during the period; it seems, however, that there was no major breakthrough in style, which remained within the classical model. The Supplementum Hellenisticum, a modern collection of extant fragments, contains the fragments of 150 authors.

Hellenistic poets now sought patronage from kings, and wrote works in their honor. The scholars at the libraries in Alexandria and Pergamon focused on the collection, cataloging, and literary criticism of classical Athenian works and ancient Greek myths. The poet-critic Callimachus, a staunch elitist, wrote hymns equating Ptolemy II to Zeus and Apollo. He promoted short poetic forms such as the epigram, epyllion and the iambic, and attacked epic as base and common ("big book, big evil" was his doctrine). He also wrote a massive catalog of the holdings of the library of Alexandria, the famous Pinakes. Callimachus was extremely influential in his time and also for the development of Augustan poetry. Another poet, Apollonius of Rhodes, attempted to revive the epic for the Hellenistic world with his Argonautica. He had been a student of Callimachus and later became chief librarian (prostates) of the library of Alexandria. Apollonius and Callimachus spent much of their careers feuding with each other. Pastoral poetry also thrived during the Hellenistic era; Theocritus was a major poet who popularized the genre. Around 240 BC Livius Andronicus, a Greek slave from southern Italy, translated Homer's Odyssey into Latin. Greek literature would have a dominant effect on the development of the Latin literature of the Romans. The poetry of Virgil, Horace and Ovid was all based on Hellenistic styles.

See main article: Hellenistic philosophy.

During the Hellenistic period, many different schools of thought developed.
Athens, with its multiple philosophical schools, remained the center of philosophical thought. However, Athens had now lost her political freedom, and Hellenistic philosophy is a reflection of this new difficult period. In this political climate, Hellenistic philosophers went in search of goals such as ataraxia (undisturbedness), autarky (self-sufficiency) and apatheia (freedom from suffering), which would allow them to wrest well-being or eudaimonia out of the most difficult turns of fortune. This preoccupation with the inner life, with personal inner liberty and with the pursuit of eudaimonia is what all Hellenistic philosophical schools have in common.

The Epicureans and the Cynics rejected public offices and civic service, which amounted to a rejection of the polis itself, the defining institution of the Greek world. Epicurus promoted atomism and an asceticism based on freedom from pain as its ultimate goal. Cynics such as Diogenes of Sinope rejected all material possessions and social conventions (nomos) as unnatural and useless. The Cyrenaics, meanwhile, embraced hedonism, arguing that pleasure was the only true good. Stoicism, founded by Zeno of Citium, taught that virtue was sufficient for eudaimonia, as it would allow one to live in accordance with Nature or Logos. Zeno became extremely popular; the Athenians set up a gold statue of him, and Antigonus II Gonatas invited him to the Macedonian court. The philosophical schools of Aristotle (the Peripatetics of the Lyceum) and Plato (Platonism at the Academy) also remained influential. The Academy would eventually turn to Academic Skepticism under Arcesilaus, until skepticism was rejected by Antiochus of Ascalon (c. 90 BC) in favour of a dogmatic Platonism, the forerunner of Middle Platonism.

Hellenistic philosophy had a significant influence on the Greek ruling elite. Examples include the Athenian statesman Demetrius of Phaleron, who had studied at the Lyceum; the Spartan king Cleomenes III, who was a student of the Stoic Sphairos of Borysthenes; and Antigonus II, who was also a well-known Stoic. This can also be said of the Roman upper classes, where Stoicism was dominant, as seen in the Meditations of the Roman emperor Marcus Aurelius and the works of Cicero.

The spread of Christianity throughout the Roman world, followed by the spread of Islam, ushered in the end of Hellenistic philosophy and the beginnings of Medieval philosophy (often forcefully, as under Justinian I), which was dominated by the three Abrahamic traditions: Jewish philosophy, Christian philosophy, and early Islamic philosophy. In spite of this shift, Hellenistic philosophy continued to influence these three religious traditions and the Renaissance thought which followed them.

Hellenistic culture produced seats of learning throughout the Mediterranean. Hellenistic science differed from Greek science in at least two ways: first, it benefited from the cross-fertilization of Greek ideas with those that had developed in the larger Hellenistic world; second, to some extent, it was supported by royal patrons in the kingdoms founded by Alexander's successors. Especially important to Hellenistic science was the city of Alexandria in Egypt, which became a major center of scientific research in the 3rd century BC. Hellenistic scholars frequently employed the principles developed in earlier Greek thought—the application of mathematics and deliberate empirical research—in their scientific investigations.
Hellenistic geometers such as Archimedes (c. 287–212 BC), Apollonius of Perga (c. 262 – c. 190 BC), and Euclid (c. 325–265 BC), whose Elements became the most important textbook in mathematics until the 19th century, built upon the work of the Hellenic-era Pythagoreans. Euclid developed proofs for the Pythagorean theorem and for the infinitude of primes, and worked on the five Platonic solids. Eratosthenes used his knowledge of geometry to measure the circumference of the Earth; his calculation was remarkably accurate (a worked version of the figures traditionally attributed to him is sketched below). He was also the first to calculate the tilt of the Earth's axis (again with remarkable accuracy). Additionally, he may have accurately calculated the distance from the Earth to the Sun and invented the leap day. Known as the "Father of Geography", Eratosthenes also created the first map of the world incorporating parallels and meridians, based on the available geographical knowledge of the era.

Astronomers like Hipparchus (c. 190 – c. 120 BC) built upon the measurements of the Babylonian astronomers before him to measure the precession of the equinoxes. Pliny reports that Hipparchus produced the first systematic star catalog after he observed a new star (it is uncertain whether this was a nova or a comet) and wished to preserve an astronomical record of the stars, so that other new stars could be discovered. It has recently been claimed that a celestial globe based on Hipparchus's star catalog sits atop the broad shoulders of a large 2nd-century Roman statue known as the Farnese Atlas. Another astronomer, Aristarchos of Samos, developed a heliocentric system.

The level of Hellenistic achievement in astronomy and engineering is impressively shown by the Antikythera mechanism (150–100 BC). It is a 37-gear mechanical computer which computed the motions of the Sun and Moon, including lunar and solar eclipses predicted on the basis of astronomical periods believed to have been learned from the Babylonians. Devices of this sort are not found again until the 10th century, when a simpler eight-geared luni-solar calculator incorporated into an astrolabe was described by the Persian scholar Al-Biruni. Similarly complex devices were also developed by other Muslim engineers and astronomers during the Middle Ages.

Medicine, which was dominated by the Hippocratic tradition, saw new advances under Praxagoras of Kos, who theorized that blood traveled through the veins. Herophilos (335–280 BC) was the first to base his conclusions on dissection of the human body and animal vivisection, and to provide accurate descriptions of the nervous system, liver and other key organs. Influenced by Philinus of Cos (fl. 250 BC), a student of Herophilos, a new medical sect emerged, the Empiric school, which was based on strict observation and rejected the unseen causes posited by the Dogmatic school.

Bolos of Mendes made developments in alchemy, and Theophrastus was known for his work in plant classification. Krateuas wrote a compendium on botanic pharmacy. The library of Alexandria included a zoo for research, and Hellenistic zoologists included Archelaos, Leonidas of Byzantion, Apollodoros of Alexandria and Bion of Soloi. Technological developments from the Hellenistic period include cogged gears, pulleys, the screw, Archimedes' screw, the screw press, glassblowing, hollow bronze casting, surveying instruments, an odometer, the pantograph, the water clock, a water organ, and the piston pump.
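To make the arithmetic of Eratosthenes' measurement concrete, here is a minimal worked version using the figures reported by later ancient sources such as Cleomedes—a noon shadow angle of about 7.2° at Alexandria (one fiftieth of a circle) while the Sun stood overhead at Syene, and a Syene–Alexandria distance of about 5,000 stadia. The exact values Eratosthenes himself used are not certain, so this is an illustrative reconstruction rather than his own text:

    \[
    C \;=\; \frac{360^{\circ}}{\theta}\, d
      \;=\; \frac{360^{\circ}}{7.2^{\circ}} \times 5000\ \text{stadia}
      \;=\; 50 \times 5000\ \text{stadia}
      \;=\; 250{,}000\ \text{stadia}
    \]

Depending on which stadion is assumed (roughly 157–185 m), 250,000 stadia corresponds to about 39,000–46,000 km, bracketing the true circumference of about 40,000 km.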
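The period relations behind the Antikythera mechanism's displays can likewise be illustrated numerically. The sketch below is a minimal model assuming only the relations generally attributed to the device by modern reconstructions—the Metonic cycle (235 synodic and 254 sidereal months in 19 years) for the Sun and Moon pointers, and the 223-month Saros period for eclipse prediction. The names and structure are our own: it computes mean motions in software and is not a reconstruction of the actual gear train, which among other things modelled lunar anomaly with a pin-and-slot gear pair omitted here.

    # Illustrative sketch of the period relations generally attributed to the
    # Antikythera mechanism; only mean motions are modelled.

    YEAR_DAYS = 365.25            # assumed length of the year
    METONIC_YEARS = 19            # the 19-year Metonic cycle
    SIDEREAL_MONTHS = 254         # sidereal months per Metonic cycle
    SYNODIC_MONTHS = 235          # synodic months per Metonic cycle
    SAROS_SYNODIC_MONTHS = 223    # synodic months per Saros eclipse cycle

    def mean_sun_longitude(days):
        """Mean ecliptic longitude of the Sun in degrees: one circuit per year."""
        return (days / YEAR_DAYS) * 360.0 % 360.0

    def mean_moon_longitude(days):
        """Mean lunar longitude in degrees, from the 254:19 Metonic ratio."""
        cycles = days / (METONIC_YEARS * YEAR_DAYS)
        return cycles * SIDEREAL_MONTHS * 360.0 % 360.0

    def synodic_month_days():
        """Mean synodic month implied by 235 lunations in 19 years (~29.53 days)."""
        return METONIC_YEARS * YEAR_DAYS / SYNODIC_MONTHS

    def saros_days():
        """Saros eclipse period implied by these relations (~6,585 days)."""
        return SAROS_SYNODIC_MONTHS * synodic_month_days()

    if __name__ == "__main__":
        t = 1000.0  # days after an arbitrary epoch
        print(f"mean solar longitude: {mean_sun_longitude(t):6.1f} deg")
        print(f"mean lunar longitude: {mean_moon_longitude(t):6.1f} deg")
        print(f"mean synodic month:   {synodic_month_days():.2f} days")
        print(f"Saros period:         {saros_days():.0f} days")

Run as written, the sketch yields a mean synodic month of about 29.53 days and a Saros of about 6,585 days (roughly 18 years and 11 days), the periods the mechanism's dials are generally believed to embody.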
The interpretation of Hellenistic science varies widely. At one extreme is the view of the English classical scholar Cornford, who believed that "all the most important and original work was done in the three centuries from 600 to 300 BC". At the other is the view of the Italian physicist and mathematician Lucio Russo, who claims that the scientific method was actually born in the 3rd century BC, to be forgotten during the Roman period and only revived in the Renaissance.

Hellenistic warfare was a continuation of the military developments of Iphicrates and Philip II of Macedon, particularly his use of the Macedonian phalanx, a dense formation of pikemen, in conjunction with heavy companion cavalry. Armies of the Hellenistic period differed from those of the classical period in being largely made up of professional soldiers and also in their greater specialization and technical proficiency in siege warfare. Hellenistic armies were significantly larger than those of classical Greece, relying increasingly on Greek mercenaries (misthophoroi; men-for-pay) and also on non-Greek soldiery such as Thracians, Galatians, Egyptians and Iranians. Some ethnic groups were known for their martial skill in a particular mode of combat and were highly sought after, including Tarantine cavalry, Cretan archers, Rhodian slingers and Thracian peltasts. This period also saw the adoption of new weapons and troop types such as the thureophoroi and the thorakitai, who used the oval thureos shield and fought with javelins and the machaira sword. The use of heavily armored cataphracts and also horse archers was adopted by the Seleucids, Greco-Bactrians, Armenians and Pontus.

The use of war elephants also became common. Seleucus received Indian war elephants from the Mauryan empire and used them to good effect at the Battle of Ipsus. He kept a core of 500 of them at Apameia. The Ptolemies used the smaller African elephant.

Hellenistic military equipment was generally characterized by an increase in size. Hellenistic-era warships grew from the trireme to include more banks of oars and larger numbers of rowers and soldiers, as in the quadrireme and quinquereme. The Ptolemaic Tessarakonteres was the largest ship constructed in antiquity. New siege engines were developed during this period. An unknown engineer developed the torsion-spring catapult (c. 360 BC), and Dionysios of Alexandria designed a repeating ballista, the Polybolos. Preserved examples of ball projectiles range from 4.4 kg to 78 kg (or over 170 lbs). Demetrius Poliorcetes was notorious for the large siege engines employed in his campaigns, especially during the 12-month siege of Rhodes, when he had Epimachos of Athens build a massive 160-ton siege tower named Helepolis, filled with artillery.

See main article: Hellenistic art.

The term Hellenistic is a modern invention; the Hellenistic world covered not only a huge area, encompassing the whole of the Aegean rather than the poleis of Athens and Sparta on which classical Greece was focused, but also a huge span of time. In artistic terms this means that there is huge variety, which is often put under the heading of "Hellenistic art" for convenience. Hellenistic art saw a turn from the idealistic, perfected, calm and composed figures of classical Greek art to a style dominated by realism and the depiction of emotion (pathos) and character (ethos). The motif of deceptively realistic naturalism in art (aletheia) is reflected in stories such as that of the painter Zeuxis, who was said to have painted grapes that seemed so real that birds came and pecked at them.
The female nude also became more popular, as epitomized by the Aphrodite of Cnidos of Praxiteles, and art in general became more erotic (e.g., Leda and the Swan and Scopas's Pothos). The dominant ideals of Hellenistic art were those of sensuality and passion. People of all ages and social statuses were depicted in the art of the Hellenistic age. Artists such as Peiraikos chose mundane and lower-class subjects for their paintings. According to Pliny, "He painted barbers' shops, cobblers' stalls, asses, eatables and similar subjects, earning for himself the name of rhyparographos [painter of dirt/low things]. In these subjects he could give consummate pleasure, selling them for more than other artists received for their large pictures" (Natural History, Book XXXV.112). Even barbarians, such as the Galatians, were depicted in heroic form, prefiguring the artistic theme of the noble savage. The image of Alexander the Great was also an important artistic theme, and all of the diadochi had themselves depicted imitating Alexander's youthful look. A number of the best-known works of Greek sculpture belong to the Hellenistic period, including Laocoön and his Sons, the Venus de Milo, and the Winged Victory of Samothrace. Developments in painting included experiments in chiaroscuro by Zeuxis and the development of landscape painting and still life painting. Greek temples built during the Hellenistic period were generally larger than classical ones, such as the temple of Artemis at Ephesus, the temple of Artemis at Sardis, and the temple of Apollo at Didyma (rebuilt by Seleucus in 300 BC). The royal palace (basileion) also came into its own during the Hellenistic period, the first extant example being the massive 4th-century villa of Cassander at Vergina. There has been a trend in writing the history of this period to depict Hellenistic art as a decadent style, following the Golden Age of Classical Athens. Pliny the Elder, after having described the sculpture of the classical period, says: Cessavit deinde ars ("then art disappeared"). The 18th-century terms Baroque and Rococo have sometimes been applied to the art of this complex and individual period. The renewal of the historiographical approach, as well as some recent discoveries such as the tombs of Vergina, allows a better appreciation of this period's artistic richness. The focus on the Hellenistic period over the course of the 19th century has led scholars and historians to an issue common to the study of historical periods: historians tend to see the period of focus as a mirror of the period in which they themselves are living. Many 19th-century scholars contended that the Hellenistic period represented a cultural decline from the brilliance of classical Greece. Though this comparison is now seen as unfair and meaningless, it has been noted that even commentators of the time saw the end of a cultural era which could not be matched again. This may be inextricably linked with the nature of government. Herodotus noted that after the establishment of the Athenian democracy: ...the Athenians found themselves suddenly a great power.
Not just in one field, but in everything they set their minds to... As subjects of a tyrant, what had they accomplished?... Held down like slaves they had shirked and slacked; once they had won their freedom, not a citizen but could feel that he was labouring for himself. Thus, with the decline of the Greek polis and the establishment of monarchical states, the environment and social freedom in which to excel may have been reduced. A parallel can be drawn with the productivity of the city states of Italy during the Renaissance and their subsequent decline under autocratic rulers. William Woodthorpe Tarn, writing between World War I and World War II, during the heyday of the League of Nations, focused on the issues of racial and cultural confrontation and the nature of colonial rule. Michael Rostovtzeff, who fled the Russian Revolution, concentrated predominantly on the rise of the capitalist bourgeoisie in areas of Greek rule. Arnaldo Momigliano, an Italian Jew who wrote before and after the Second World War, studied the problem of mutual understanding between races in the conquered areas. Moses Hadas portrayed an optimistic picture of synthesis of culture from the perspective of the 1950s, while Frank William Walbank in the 1960s and 1970s took a materialistic approach to the Hellenistic period, focusing mainly on class relations. More recently, the papyrologist Claire Préaux has concentrated predominantly on the economic system, the interactions between kings and cities, and provides a generally pessimistic view of the period. Peter Green, on the other hand, writes from the point of view of late-20th-century liberalism, his focus being on individualism, the breakdown of convention, experiments, and a postmodern disillusionment with all institutions and political processes. Reference: Lucio Russo, The Forgotten Revolution: How Science Was Born in 300 BC and Why It Had to Be Reborn (Berlin: Springer, 2004), ISBN 3-540-20396-6. But see the critical reviews by Mott Greene, Nature, vol. 430, no. 7000 (5 Aug 2004): 614, http://www.nature.com/nature/journal/v430/n7000/full/430614a.html, and Michael Rowan-Robinson, Physics World, vol. 17, no. 4 (April 2004), http://physicsweb.org/articles/review/17/4/1/1.
Part of this lecture is about soundings and the structure of the atmosphere. I have an activity (further down below) where students plot data from a weather balloon to "discover" the troposphere and stratosphere. I have several data sets to choose from, all from the same weather balloon but at different resolutions (different amounts of data to plot). When I do this activity with my students, we have not talked about the structure of the atmosphere at all yet. After they plot it, we annotate it and talk about the atmospheric layers. If you watch the lecture, this data is the same set that I am using on screen, so in a way the lecture kind of gives away the answer. I also have some activities that are centered around the interpretation of different soundings. In this lecture I do not emphasize the soundings too much, but in previous classes I did, so you may or may not be able to answer all of the questions. This activity forces you to really understand dew point and relative humidity to make sense of what is going on: sounding_examples_II Overview: This project is a cross-cutting activity that asks students to first create scatter plots / line plots of data from a weather balloon and then interpret the physical meaning of their graph. They discover how temperature changes with altitude and how the lower atmosphere is structured. The students then fill in their graph with features of the atmosphere. More advanced students can continue on to Part II of this project. The video above will walk you through this project. Weather balloon data: Your students will need one of the files below. Each has the same data from the weather balloon, but they differ in that I have highlighted (shaded) different alternate rows for the students to plot. I only have my students plot the highlighted rows, not the entire set. It would be ridiculous to have them plot all of the data by hand, and the overall pattern of their graph will be more or less the same no matter which subset of points you have them plot. You can decide how many data points you want your students to plot… I have indicated how many total points will need to be graphed for each file. If you don't know which one to use, start with the easy one first (11 points). Answer keys: An Excel-generated plot for each of the above datasets. Please realize that the aspect of your students' graphs will depend on the scale you use. Pick the answer key that matches the dataset you are using. July-4th-soundings-all-points-graph [PDF] for all data that the weather balloon generated. After the students have made their plots, I show them this graph to illustrate what their graph would have looked like if they had plotted ALL of the data. The general shape is the same as theirs, but there is more detail, with more variations in temperature.
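For teachers who want to show an automated version after the hand-plotting, here is a minimal sketch in Python with matplotlib. The altitude and temperature values below are made-up placeholders, not the actual July 4th sounding data from the files above; substitute your own file's numbers.

import matplotlib.pyplot as plt

# Hypothetical sounding values, for illustration only.
altitude_km = [0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
temp_c = [25, 12, 0, -13, -26, -40, -55, -60, -58, -55, -52]

plt.plot(temp_c, altitude_km, marker="o")
plt.xlabel("Temperature (°C)")
plt.ylabel("Altitude (km)")
plt.title("Weather balloon sounding")
# The kink where temperature stops falling marks the tropopause:
# below it lies the troposphere, above it the stratosphere.
plt.show()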
Eighth graders explore the characteristics of a circle and the formulas to find circumference. They use a bicycle wheel to determine the circumference around a wheel. They utilize a worksheet embedded in this plan which has a variety of formulas. (A quick computational check of the circumference formula appears after the resource list below.) Wheel of Theodorus Extra Credit Project Think it's possible to make a wheel out of triangles? According to this geometry project it is. Using their knowledge of the Pythagorean Theorem, young mathematicians carefully calculate and draw a series of right triangles that form... 7th - 8th Math CCSS: Adaptable Module 6: Congruence, Construction, and Proof Trace the links between a variety of math concepts in this far-reaching unit. Ideas that seem very different at the outset (like the distance formula and rigid transformations) come together in very natural and logical ways. This unit... 8th - 10th Math CCSS: Designed Authentic Activities for Connecting Mathematics to the Real World Check out the handout from a presentation that contains a collection of high school algebra projects connecting a variety of mathematics to the real world. Activities range from simple probabilities to calculus. The activities can be... 6th - 12th Math CCSS: Adaptable Area and Circumference of Circles Don't go around and around; help your class determine amounts around and in a circle with a video that connects circumference to the perimeter, or the distance around, an object. The resource includes 14 questions dealing with circles and... 9 mins 8th - 10th Math CCSS: Adaptable
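As a quick check on the bicycle-wheel measurement, the formula C = pi × d can be computed directly; this is a minimal sketch, and the diameter value is an assumed example rather than one taken from the lesson.

import math

diameter_m = 0.66  # assumed: roughly a 26-inch bicycle wheel, in meters
circumference = math.pi * diameter_m  # C = pi * d = 2 * pi * r
print(f"One wheel revolution covers about {circumference:.2f} m")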
In today's post, we're going to find out what a simplified fraction is, how to simplify fractions, and what equivalent fractions and irreducible fractions are. We need to start by understanding that there are fractions that look different but represent the same amount. For example, let's take the case of this fraction, four-eighths. When we see this fraction, we can tell just by looking at it that four-eighths is the same as one-half. But four-eighths is equivalent to more fractions, too. For example, the following… All fractions that represent the same quantity, even though they are written with different numerators and denominators, are equivalent fractions. In this case, look closely at the two fractions on the right. They have something in common: they represent the same amount as four-eighths, but with a smaller number of parts. The simplified fraction of another fraction, as in the previous example, represents the same amount as the original fraction but with fewer parts. Now we're going to learn how to simplify a fraction mathematically. We have to divide the numerator (4) and the denominator (8) of the original fraction by the same number. In this case, we divide them both by 2. The fraction that we get as a result is a simplified version of the previous one. We can keep simplifying our original fraction to try and find an even simpler one. So, if we divide by a bigger number (4), the result is one-half. If we put these fractions in a sequence, we can see that one-half is also a simplified fraction of two-quarters. Finally, what happens when we find ourselves with a fraction that we can't simplify? For example, one-half? Fractions that can't be simplified are called irreducible fractions. I hope you've enjoyed learning how to simplify fractions! If you want to keep practicing and learning about equivalent fractions and simplifying fractions, log in to Smartick and try our online math learning method for free.
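The divide-by-the-same-number idea can be sketched in a few lines of Python: dividing the numerator and the denominator by their greatest common divisor reaches the irreducible fraction in a single step.

from math import gcd

def simplify(numerator, denominator):
    # Divide top and bottom by their greatest common divisor
    # to obtain the irreducible (fully simplified) fraction.
    g = gcd(numerator, denominator)
    return numerator // g, denominator // g

print(simplify(4, 8))  # (1, 2): four-eighths simplifies to one-half
print(simplify(2, 4))  # (1, 2): two-quarters also simplifies to one-half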
We have a rectangular sheet with length = 12 and width = 8. The rectangle is rolled along the long side to form a cylinder. Then the height of the cylinder will be the length of the rectangle, 12. ==> The height (h) = 12. Then, the width of the rectangle will form the circumference of the base. ==> The circumference C = 8. Now we will calculate the volume. We know that the volume V = r^2 * pi * h, where r is the radius of the base. But we have C = 8 ==> C = 2*pi*r = 8 ==> r = 8/(2*pi) = 4/pi = 1.27 (approx.) Now we will substitute into the volume. ==> V = r^2 * pi * h ==> V = (4/pi)^2 * pi * 12 = (16/pi) * 12 = 192/pi = 61.12 (approx.) Then, the volume of the cylinder is 61.12 cubic units. Alternatively, if you fold the 8*12 sheet along the longer side and join the two parallel widths, it becomes a cylinder of circumference 12 and height 8. So the radius r of the cylinder whose circumference is 12 is given by: r = circumference/(2*pi) = 12/(2*pi) = 6/pi. The volume V of the cylinder whose radius is r is given by: V = (pi*r^2)*h, where h is the height. Here, r = 6/pi and h = 8. Therefore V = pi * (6/pi)^2 * 8 = 288/pi = 91.67 (approx.) cubic units. Therefore the volume of the cylinder rolled this way is 91.67 cubic units.
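Both answers can be verified with a short Python sketch of the same computation: one side of the sheet becomes the base circumference and the other becomes the height.

import math

def rolled_cylinder_volume(circumference, height):
    # Radius follows from C = 2*pi*r; volume is pi * r^2 * h.
    r = circumference / (2 * math.pi)
    return math.pi * r**2 * height

# The 12-by-8 sheet, rolled both ways:
print(rolled_cylinder_volume(8, 12))   # ~61.12  (width forms the base)
print(rolled_cylinder_volume(12, 8))   # ~91.67  (length forms the base)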
Shortly after its commissioning, the James Webb Space Telescope found "unusually bright" galaxies from the early days of the universe that baffled the research community. The Space Telescope Science Institute, which is responsible for the instrument's science operations, announced this, citing two peer-reviewed research papers. The research teams write that the brightness and shape of the galaxies as they were 350 and 450 million years after the Big Bang, respectively, indicate that the first stars formed as early as 100 million years after the Big Bang. The cosmic "Dark Ages" would then have been much shorter than assumed. Final confirmation pending It was known that the James Webb Space Telescope discovered a remarkably large number of distant galaxies immediately after it began its work, although there were also doubts about some of the claimed discoveries. The discoveries reported now, however, have passed peer review. They concern objects with the designations GLASS-z12 and GLASS-z10, that is, galaxies with redshifts of z = 12.5 and z = 10.5. The first would therefore be the most distant galaxy found so far, although the distances still have to be finally confirmed. The research team explains that both galaxies were turning gas into stars very quickly. They are spherical or disk-shaped and only a small fraction of the size of our Milky Way. The "quiet, orderly disks" of the galaxies found challenge our understanding of how the first galaxies formed in the "crowded, chaotic early universe," says Erica Nelson, who co-authored one of the studies. In addition, it had been assumed that the search for such early galaxies would take much longer. A very large number of relatively low-mass stars could be responsible for the unexpectedly high brightness. But it is also possible that they contain many particularly bright stars of the so-called Population III, the first generation of stars ever, which have never been directly observed. The first data hint at this, but only more detailed analyses can provide evidence. The distance estimates for the two galaxies are still based on infrared photometry. For independent confirmation, spectra must be taken and the actual redshift measured, the teams write. On its way to us, the expansion of space stretches the light toward the red and eventually into the infrared, which is why the redshift is also a measure of the age of a cosmic object. The discoveries are indeed remarkable, and a whole new chapter opens in astronomy, says astronomer Paola Santini: "It's like an archaeological dig where you suddenly find a lost city or something unknown. It's amazing." The compact and extremely bright galaxies are very different from the Milky Way and its neighbors, adds research leader Tommaso Treu. The James Webb Space Telescope is operated by the space agencies NASA, ESA, and CSA and was launched on December 25, 2021. After a complex self-deployment procedure, it reached the L2 Lagrange point a month later. There it looks away from the Sun, Earth, and Moon into space, so that their heat radiation does not interfere with the infrared telescope; a huge sunshield blocks it. Since the scientific work began at the beginning of July, the quality of the data has astonished not only the research community. New observations are being released continually, the goal being for the scientific community to learn as quickly as possible how to use the new observatory and its instruments. The study of the two very early galaxies appears in The Astrophysical Journal.
High School Mathematics Extensions/Primes/Full |Problems & Projects| |Problem Set Solutions| A prime number (or prime for short) is a natural number that has exactly two divisors: itself and the number 1. Since 1 has only one divisor — itself — we do not consider it to be a prime number but a unit. So, 2 is the first prime, 3 is the next prime, but 4 is not a prime because 4 divided by 2 equals 2 without a remainder. We've proved 4 has three divisors: 1, 2, and 4. Numbers with more than two divisors are called composite numbers. The first 20 primes are 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, and 71. Primes are an endless source of fascination for mathematicians. Some of the problems concerning primes are so difficult that even decades of work by some of the most brilliant mathematicians have failed to solve them. One such problem is Goldbach's conjecture, which proposes that every even number greater than 3 can be expressed as the sum of two primes. No one has been able to prove it true or false. This chapter will introduce some of the elementary properties of primes and their connection to an area of mathematics called modular arithmetic. Geometric meaning of primes Given 12 pieces of square floor tiles, can we assemble them into a rectangular shape in more than one way? Of course we can; this is due to the fact that 12 = 1 × 12 = 2 × 6 = 3 × 4. We do not distinguish between 2×6 and 6×2 because they are essentially equivalent arrangements. But what about the number 7? Can you arrange 7 square floor tiles into rectangular shapes in more than one way? The answer is no, because 7 is a prime number. Fundamental Theorem of Arithmetic A theorem is a non-obvious mathematical fact. A theorem must be proven; a proposition that is generally believed to be true, but without a proof, is called a conjecture or a hypothesis. With those definitions out of the way, the fundamental theorem of arithmetic simply states that: - Any natural number (except for 1) can be expressed as the product of primes in one and only one way. For example, 12 = 2 × 2 × 3. Rearranging the multiplication order is not considered a different representation of the number, so there is no other way of expressing 12 as the product of primes. A few more examples: 18 = 2 × 3 × 3 and 105 = 3 × 5 × 7. It can be easily seen that a composite number has more than one prime factor (counting recurring prime factors multiple times). Bearing in mind the definition of the fundamental theorem of arithmetic, why isn't the number 1 considered a prime? We know from the fundamental theorem of arithmetic that any integer can be expressed as the product of primes. The million dollar question is: given a number x, is there an easy way to find all prime factors of x? If x is a small number then it is easy. For example 90 = 2 × 3 × 3 × 5. But what if x is large? For example x = 4539? Most people can't factorize 4539 into primes in their heads. But can a computer do it? Yes, the computer can factorize 4539 fairly quickly. We have 4539 = 3 × 17 × 89. Since computers are very good at doing arithmetic, we can work out all the factors of x by simply instructing the computer to divide x by 2 then 3 then 5 then 7 ... and so on. So there is an easy way to factorize a number into prime factors: just apply the method described above. However, that method is too slow for large numbers: trying to factorize a number with thousands of digits would take more time than the current age of the universe. But is there a fast way? Or more precisely, is there an efficient way? There may be, but no one has found one yet.
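Here is a minimal Python sketch of the dividing method just described; the function name prime_factors is my own, not from the chapter.

def prime_factors(x):
    # Trial division: divide out 2, then 3, 4, 5, ...
    # Easy, but far too slow for numbers with thousands of digits.
    factors = []
    d = 2
    while d * d <= x:
        while x % d == 0:
            factors.append(d)
            x //= d
        d += 1
    if x > 1:
        factors.append(x)  # whatever remains is itself prime
    return factors

print(prime_factors(90))    # [2, 3, 3, 5]
print(prime_factors(4539))  # [3, 17, 89]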
Some of the most widely used encryption schemes today (such as RSA) make use of the fact that we can't factorize large numbers into prime factors quickly. If such a method is found, a lot of internet transactions will be rendered unsafe. Consider the following three examples of the dividing method in action. - 21 / 2 = 10.5 is not a whole number, so 2 is not a factor of 21 - 21 / 3 = 7, hence 3 and 7 are the factors of 21. - 153 / 2 = 76.5, hence 2 is not a factor of 153 - 153 / 3 = 51, hence 3 and 51 are factors of 153 - 51 / 3 = 17, hence 3 and 17 are factors of 153 It is clear that 3, 9, 17 and 51 are the factors of 153. The prime factors of 153 are 3, 3 and 17 (3×3×17 = 153) - 2057 / 11 = 187 and 187 / 11 = 17, hence 11, 11 and 17 are the prime factors of 2057. Fun Fact — Is this prime? Interestingly, there is an efficient way of determining whether a number is prime with 100% accuracy with the help of a computer. 2, 5 and 3 The primes 2, 5, and 3 hold a special place in factorization. Firstly, all even numbers have 2 as one of their prime factors. Secondly, all numbers whose last digit is 0 or 5 can be divided wholly by 5. The third case, where 3 is a prime factor, is the focus of this section. The underlying question is: is there a simple way to decide whether a number has 3 as one of its prime factors? Theorem — Divisibility by 3 A number is divisible by 3 if and only if the sum of its digits is divisible by 3. For example, 272 is not divisible by 3, because 2 + 7 + 2 = 11, which is not divisible by 3. 945 is divisible by 3, because 9 + 4 + 5 = 18, and 18 is divisible by 3. In fact 945 / 3 = 315. Is 123456789 divisible by 3? - 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 = (1 + 9) × 9 / 2 = 45 - 4 + 5 = 9 Nine is divisible by 3, therefore 45 is divisible by 3, therefore 123456789 is divisible by 3! The beauty of the theorem lies in its recursive nature. A number is divisible by 3 if and only if the sum of its digits is divisible by 3. How do we know whether the sum of its digits is divisible by 3? Apply the theorem again! info — Recursion A prominent computer scientist once said "To iterate is human, to recurse, divine." But what does it mean to recurse? Before that, what is to iterate? "To iterate" simply means doing the same thing over and over again, and computers are very good at that. An example of iteration in mathematics is the exponential operation, e.g. x^n means multiplying x by itself n times. That is an example of iteration. Thinking about iteration economically (in terms of mental resources), by defining a problem in terms of itself, is "to recurse". To recursively represent x^n, we write: - x^n = 1 if n equals 0. - x^n = x · x^(n-1) if n > 0 What is 9^9? It is 9 times 9^8. But 9^8 is 9 times 9^7. Repeating this way is an example of recursion. The prime sieve is a relatively efficient method for finding all primes less than or equal to a specified number. To find all primes less than or equal to 50, we do the following: First, we write out the numbers 1 to 50 in a table as below. Cross out 1, because it's not a prime. Now 2 is the smallest number not crossed out in the table. We mark 2 as a prime and cross out all multiples of 2, i.e. 4, 6, 8, 10 ... Now 3 is the smallest number not marked in any way. We mark 3 as a prime and cross out all multiples of 3, i.e. 6, 9, 12, 15 ... Continue this way to find all the primes. When do you know you have found all the primes under 50? This algorithm is called the Sieve of Eratosthenes. - The prime sieve has been applied to the table above. Notice that every number situated directly below 2 and 5 is crossed out.
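Both ideas above translate directly into short Python sketches: the recursive digit-sum test for divisibility by 3, and the Sieve of Eratosthenes for primes up to 50. The function names are my own.

def divisible_by_3(n):
    # Recursive digit-sum test: n is divisible by 3
    # iff the sum of its digits is divisible by 3.
    if n < 10:
        return n in (0, 3, 6, 9)
    return divisible_by_3(sum(int(d) for d in str(n)))

def sieve(limit):
    # Sieve of Eratosthenes: cross out the multiples of each prime.
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False  # 0 and 1 are not prime
    for p in range(2, int(limit**0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n in range(2, limit + 1) if is_prime[n]]

print(divisible_by_3(123456789))  # True
print(sieve(50))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]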
Construct a rectangular grid of numbers running from 1 to 60 so that after the prime sieve has been performed on it, all numbers situated directly below 2 and 5 are crossed out. What is the width of the grid? - Find all primes below 200. - Find the numbers which are divisible by 3 below 200. Did you change the width of your grid? Infinitely many primes To answer the question what is the largest prime number? let us first answer what is the largest natural number? If somebody tells you that some number n is the largest natural number, you can immediately prove them wrong by telling them that n + 1 is a larger natural number. You can substitute any other natural number for n and your argument will still work. This means that there is no such thing as the largest natural number. (Some of you might be tempted to say that infinity is the largest natural number. However, infinity is not a natural number but just a mathematical concept.) The ancient Greek mathematician Euclid had the following proof of the infinitude of primes. Proof of infinitude of primes Let us first assume that - there are a finite number of primes - there must then be one prime that is greater than all others; let this prime be referred to as n. We now proceed to show that the two assumptions made above lead to a contradiction, and thus there are infinitely many primes. Take the product of all prime numbers to yield a number x: x = 2 × 3 × 5 × ... × n. Then, let y equal one more than x: y = x + 1. One may now conclude that y is not divisible by any of the primes up to n, since y differs from a multiple of each such prime by exactly 1. Since y is not divisible by any prime number, y must either be prime, or its prime factors must all be greater than n, a contradiction of the original assumption that n is the largest prime! Therefore, one must declare the original assumption incorrect: there exists an infinite number of primes. Modular arithmetic connects with primes in an interesting way. Modular arithmetic is a system in which only the numbers below some positive integer, n say, are used. So if you were to start counting you would go 0, 1, 2, 3, ... , n - 1, but instead of counting n you would start over at 0. And what would have been n + 1 would be 1, and what would have been n + 2 would be 2. Once 2n has been reached the number is reset to 0 again, and so on. Modular arithmetic is also called clock arithmetic, because we only use 12 numbers to tell standard time. On clocks we start at 1 instead of 0, continue to 12, and then start at 1 again: hence the name clock arithmetic. The sequence also continues into what would be the negative numbers. What would have been -1 is now n - 1. For example, consider modulo 7 arithmetic: it's just like ordinary arithmetic except the only numbers we use are 0, 1, 2, 3, 4, 5 and 6. If we see a number outside of this range we add 7 to (or subtract 7 from) it, until it lies within that range. As mentioned above, modular arithmetic is not that different from ordinary arithmetic. For example, in modulo 7 arithmetic we have 4 + 5 = 2, 6 + 6 = 5, and 3 × 4 = 5. We can even do some calculation with negative numbers. Consider 5 × -6. Since -6 does not lie in the range 0 to 6, we need to add 7 to it until it does, and -6 + 7 = 1. So in modulo 7 arithmetic, -6 = 1. In the above example we showed that 5 × -6 = -30 = 5, but 5 × 1 = 5. So we didn't do ourselves any harm by using -6 instead of 1. Why?
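Modulo reduction is easy to experiment with in Python, whose % operator already returns a representative in the range 0 to n - 1 even for negative inputs, matching the convention used here.

n = 7
print(-6 % n)        # 1: -6 and 1 are the same number mod 7
print((5 * -6) % n)  # 5: -30 reduces to 5 mod 7
print((5 * 1) % n)   # 5: same answer, as the text explains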
Note - Negatives: The preferred representation of -3 is 4, as -3 + 7 = 4, but using either -3 or 4 in a calculation will give us the same answer, as long as we convert the final answer to a number between 0 and 6 (inclusive). Find in modulo 11 - -1 × -5 - 3 × 7 3. Compute the first 10 powers of 2 - 2^1, 2^2, 2^3, ... , 2^10 What do you notice? Using the powers of 2, find - 6^1, 6^2, 6^3, ... , 6^10 What do you notice again? i.e. find, by trial and error (or otherwise), all numbers x such that x^2 = 4 (mod 11). There are two solutions; find both. i.e. find all numbers x such that x^2 = 9 (mod 11). There are two solutions; find both. Consider a number n; the inverse of n is the number that when multiplied by n gives 1. For example, if we were to solve the equation 5x = 3 (mod 7), the (mod 7) is used to make it clear that we are doing arithmetic modulo 7. We want to get rid of the 5 somehow. Multiplying it by something to turn it into a 1 would do the job. Notice that 3 multiplied by 5 gives 15, which is 1 in modulo 7 arithmetic, so we say 3 is the inverse of 5 in modulo 7. Now we multiply both sides by 3: 15x = 9 (mod 7), i.e. x = 2 (mod 7). So x = 2 modulo 7 is the required solution. - The inverse of (a number) x is a number y such that xy = 1. We denote the inverse of x by x^(-1) or 1/x. Inverse is unique From above, we know the inverse of 5 is 3, but does 5 have another inverse? The answer is no. In fact, in any reasonable number system, a number can have one and only one inverse. We can see that from the following proof. Suppose n has two inverses b and c. Then b = b × 1 = b × (nc) = (bn) × c = 1 × c = c. From the above argument, all inverses of n must be equal. As a result, if the number n has an inverse, the inverse must be unique. An interesting property of any modulo n arithmetic is that the number n - 1 has itself as an inverse. That is, (n - 1) × (n - 1) = 1 (mod n), or we can write (n - 1)^2 = (-1)^2 = 1 (mod n). The proof is left as an exercise at the end of the section. Existence of inverse Not every number has an inverse in every modulo arithmetic. For example, 3 doesn't have an inverse mod 6, i.e., we can't find a number x such that 3x = 1 mod 6 (the reader can easily check). Consider modulo 15 arithmetic and note that 15 is composite. We know the inverse of 1 is 1 and of 14 is 14. But what about 3, 6, 9, 12, 5 and 10? None of them has an inverse! Note that each of them shares a common factor with 15! As an example, we show that 3 does not have an inverse modulo 15. Suppose 3 has an inverse x; then we have 3x = 1 (mod 15). We make the jump from modular arithmetic into rational number arithmetic: if 3x = 1 in modulo 15 arithmetic, then 3x = 15k + 1 for some integer k. Now we divide both sides by 3, and we get x = 5k + 1/3. But this cannot be true, because we know that x is an integer, not a fraction. Therefore 3 doesn't have an inverse in mod 15 arithmetic. To show that 10 doesn't have an inverse is harder and is left as an exercise. We will now state the theorem regarding the existence of inverses in modular arithmetic. - If n is prime then every number (except 0) has an inverse in modulo n arithmetic. - If n is composite then every number that doesn't share a common factor with n has an inverse. It is interesting to note that division is closely related to the concept of inverses. Consider the expression 6/3 (mod 7). The conventional way to calculate this would be to find the inverse of 3 (being 5), so 6/3 = 6 × 5 = 30 = 2 (mod 7). We write the inverse of 3 as 1/3, so if we think of multiplying by 3^(-1) as dividing by 3, we get 6/3 = 2. Notice that we got the same answer! In fact, the division method will always work if the inverse exists.
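The existence statements are easy to test with a brute-force search in Python; this sketch simply tries every candidate and returns None when no inverse exists.

def inverse(a, n):
    # Brute-force search for the inverse of a modulo n.
    for x in range(1, n):
        if (a * x) % n == 1:
            return x
    return None  # no inverse exists

print(inverse(5, 7))   # 3, since 5 * 3 = 15 = 1 (mod 7)
print(inverse(3, 15))  # None: 3 shares the factor 3 with 15
print(inverse(6, 7))   # 6: n - 1 is always its own inverse mod n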
However, evaluating an expression like 6/3 in a different modulo system can fail. For example, in modulo 9 arithmetic 3^(-1) does not exist, so we can't use the division method at all. 1. Does 8 have an inverse in mod 16 arithmetic? If not, why not? 2. Find x mod 7 if x exists: 3. Calculate x in two ways: finding the inverse, and division 4. (Trick) Find x Find all inverses mod n (n ≤ 19) Coprime and greatest common divisor Two numbers are said to be coprime if their greatest common divisor (gcd) is 1. E.g. 21 and 55 are both composite, but they are coprime, as their greatest common divisor is 1. In other words, they do not share a common divisor other than 1. There is a quick and elegant way to compute the gcd of two numbers, called Euclid's algorithm. Let's illustrate with a few examples: - Find the gcd of 21 and 49. We set up a two-column table where the larger of the two numbers is on the right hand side, as follows. We now compute 49 (mod 21), which is 7, and put it in the second row in the smaller column, and put 21 into the larger column. Perform the same action on the second row to produce the third row. Whenever we see the number 0 appear in the smaller column, we know the corresponding larger number is the gcd of the two numbers we started with, i.e. 7 is the gcd of 21 and 49. This algorithm is called Euclid's algorithm. - Find the gcd of 31 and 101 - Find the gcd of 132 and 200 Important to note - The gcd need not be a prime number. - The gcd of two different primes is 1. In other words, two different primes are always coprime. 1. Determine whether the following sets of numbers are coprime - 5050 5051 - 59 78 - 111 369 - 2021 4032 2. Find the gcd of the numbers 15, 510 and 375 info -- Algorithm An algorithm is a step-by-step description of a series of actions which, when performed correctly, can accomplish a task. There are algorithms for finding primes, deciding whether 2 numbers are coprime, finding inverses and many other purposes. You'll learn how to implement some of the algorithms we have seen using a computer in the Mathematical Programming chapter. Let's look at the idea of inverse again, but from a different angle. In fact we will provide a sure-fire method to find the inverse of any number. Let's consider: - 5x = 1 (mod 7) We know x is the inverse of 5, and we can work out that it is 3 reasonably quickly. But x = 10 is also a solution, and so is x = 17, 24, 31, ..., in general 7n + 3. So there are infinitely many solutions; therefore we say 3 is equivalent to 10, 17, 24, 31 and so on. This is a crucial observation. Now let's consider 216x ≡ 1 (mod 811). A new notation is introduced here: the equal sign with three strokes instead of two. It is the "equivalent" sign; the above statement should read "216x is EQUIVALENT to 1" instead of "216x is EQUAL to 1". From now on, we will use the equivalent sign for modulo arithmetic and the equal sign for ordinary arithmetic. Back to the example: we know that x exists, as gcd(811, 216) = 1. The problem with the above question is that there is no quick way to decide the value of x! The best way we know is to multiply 216 by 1, 2, 3, 4, ... until we get the answer; there are at most 811 calculations, way too tedious for humans. But there is a better way, and we have touched on it quite a few times!
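Before moving on to that better way, here is the table method for the gcd as a minimal Python sketch; each loop iteration performs one "larger mod smaller" step.

def gcd(m, n):
    # Euclid's algorithm: replace (smaller, larger) with
    # (larger mod smaller, smaller) until the smaller number is 0.
    while m != 0:
        m, n = n % m, m
    return n

print(gcd(21, 49))    # 7
print(gcd(31, 101))   # 1: 31 and 101 are coprime
print(gcd(132, 200))  # 4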
We notice that we could make the jump just like before into rational mathematics: 216a ≡ 1 (mod 811) means 216a = 811b + 1 for some integer b, so a = (811b + 1)/216 = 3b + (163b + 1)/216, and for a to be an integer, c = (163b + 1)/216 must be an integer. We jump into rational maths again: b = (216c - 1)/163 = c + (53c - 1)/163, so d = (53c - 1)/163 must be an integer. We jump once more: c = (163d + 1)/53 = 3d + (4d + 1)/53, so e = (4d + 1)/53 must be an integer. Now the pattern is clear. Continuing so that the process is not broken: d = (53e - 1)/4 = 13e + (e - 1)/4, and letting f = (e - 1)/4 we finally get e = 4f + 1. Now all we have to do is choose a value for f and substitute it back to find a! Remember a is the inverse of 216 mod 811. We choose f = 0, therefore e = 1, d = 13, c = 40, b = 53 and finally a = 199! If f is chosen to be 1 we will get a different value for a. The very perceptive reader should have noticed that this is just Euclid's gcd algorithm in reverse. Here are a few more examples of this ingenious method in action: Example 1 Find the smallest positive value of a: Choose d = 0, therefore a = 49. Example 2 Find the smallest positive value of a: Choose e = 0, therefore a = -152 = 669. Example 3 Find the smallest positive value of a: Set i = 0, then a = -21 = 34. Why is this so slow for two numbers that are so small? What can you say about the coefficients? Example 4 Find the smallest positive value of a: Now d is not an integer, therefore 21 does not have an inverse mod 102. What we have discussed so far is the method of finding integer solutions to equations of the form: - ax + by = 1 where x and y are the unknowns and a and b are two given constants; these equations are called linear Diophantine equations. It is interesting to note that sometimes there is no solution, but if a solution exists, it implies that infinitely many solutions exist. In the Modular Arithmetic section, we stated a theorem that says if gcd(a, m) = 1 then a^(-1) (the inverse of a) exists in mod m. It is not difficult to see that if p is prime then gcd(b, p) = 1 for all b less than p; therefore we can say that in mod p, every number except 0 has an inverse. We also showed a way to find the inverse of any element mod p. In fact, finding the inverse of a number in modular arithmetic amounts to solving a type of equation called a Diophantine equation. A Diophantine equation is an equation of the form - ax + by = d where x and y are unknown. As an example, we shall try to find the inverse of 216 in mod 811. Let the inverse of 216 be x; we can write 216x ≡ 1 (mod 811), and we can rewrite the above in everyday arithmetic as 216x - 811k = 1 for some integer k, which is in the form of a Diophantine equation. Now we are going to do the inelegant method of solving the above problem, and then the elegant method (using magic tables). Both methods mentioned above use Euclid's algorithm for finding the gcd of two numbers. In fact, the gcd is closely related to the idea of an inverse. Let's apply Euclid's algorithm to the two numbers 216 and 811. This time, however, we should store more details; more specifically, we want to set up an additional column called PQ, which stands for partial quotient. The partial quotient is just a technical term for "how many n goes into m", e.g. the partial quotient of 3 and 19 is 6, the partial quotient of 4 and 21 is 5, and, one last example, the partial quotient of 7 and 49 is 7. The table says three 216s go into 811 with remainder 163, or symbolically: - 811 = 3×216 + 163. Reading off the table, we can form the following expressions - 811 = 3× 216 + 163 - 216 = 1× 163 + 53 - 163 = 3× 53 + 4 - 53 = 13× 4 + 1 Now we can work out the inverse of 216 by working the results backwards - 1 = 53 - 13×4 - 1 = 53 - 13×(163 - 3×53) - 1 = 40×53 - 13×163 - 1 = 40×(216 - 163) - 13×163 - 1 = 40×216 - 53×163 - 1 = 40×216 - 53×(811 - 3×216) - 1 = 199×216 - 53×811 Now look at the equation mod 811: we see that the inverse of 216 is 199.
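The backwards substitution above is exactly the extended Euclidean algorithm; a short recursive Python sketch reproduces the same coefficients, 199 and -53.

def extended_gcd(a, b):
    # Return (g, x, y) with a*x + b*y = g = gcd(a, b),
    # i.e. Euclid's algorithm run backwards.
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, y = extended_gcd(216, 811)
print(g, x, y)             # 1 199 -53, i.e. 199*216 - 53*811 = 1
print((216 * 199) % 811)   # 1, so 199 is the inverse of 216 mod 811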
The magic table is a more elegant way to do the above calculations; let us use the table we formed from Euclid's algorithm. Now we set up the so-called "magic table", which looks like this initially. Now we write the partial quotients on the first row. We produce the table according to the following rule: - Multiply a partial quotient by the number one space to its left in a different row, add the product to the number two spaces to the left in the same row, and put the sum in the corresponding row. It sounds more complicated than it should. Let's illustrate by producing a column: We put a 3 in the second row because 3 = 3×1 + 0. We put a 1 in the third row because 1 = 3×0 + 1. We shall now produce the whole table without disruption: We can check that - |199×216 - 811×53| = 1 In fact, if the magic table is constructed properly, and we cross-multiplied and subtracted the last two columns correctly, then we will always get 1 or -1, provided the two numbers we started with were coprime. The magic table is just a cleaner way of doing the mathematics. 1. Find the smallest positive x: 2. Find the smallest positive x: (a) Produce the magic table for 33a = 1 (mod 101) (b) Evaluate and express in the form p/q What do you notice? (a) Produce the magic table for 17a = 1 (mod 317) (b) Evaluate and express in the form p/q What do you notice? Chinese remainder theorem The Chinese remainder theorem is known in China as Han Xin Dian Bing, which in its most naive translation means Han Xin counts his soldiers. The original problem goes like this: - There exists a number x that, when divided by 3, leaves remainder 2; when divided by 5, leaves remainder 3; and when divided by 7, leaves remainder 2. Find the smallest x. We translate the question into symbolic form: x ≡ 2 (mod 3), x ≡ 3 (mod 5), x ≡ 2 (mod 7). How do we go about finding such an x? We shall use a familiar method, best illustrated by example. Looking at x ≡ 2 (mod 3), we make the jump into ordinary mathematics: x = 2 + 3a for some integer a. (1) Now we look at this equation modulo 5: 2 + 3a ≡ 3 (mod 5), so 3a ≡ 1 (mod 5), giving a ≡ 2 (mod 5), i.e. a = 2 + 5b for some integer b. Substitute into (1) to get x = 2 + 3(2 + 5b) = 8 + 15b. Now look at the above modulo 7: 8 + 15b ≡ 2 (mod 7), so 1 + b ≡ 2 (mod 7), giving b ≡ 1 (mod 7). We choose b = 1 to minimize x, therefore x = 23. A simple check (to be performed by the reader) should confirm that x = 23 is a solution. A good question to ask is: what is the next smallest x that satisfies the three congruences? The answer is x = 128, the next is 233, and the next is 338; they differ by 105, the product of 3, 5 and 7. We will illustrate the method of solving a system of congruences further with the following examples. Example 1 Find the smallest x that satisfies: now substitute back into the first equation, we get again substituting back Therefore 52 is the smallest x that satisfies the congruences. Find the smallest x that satisfies: now solve for b again, substitute back Therefore 269 is the smallest x that satisfies the congruences.
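The substitution method for the Han Xin problem can be sketched in Python; crt_pair is a hypothetical helper name, and its search loop mirrors the "jump into ordinary mathematics" step above.

from math import gcd

def crt_pair(a, m, b, n):
    # Smallest x with x = a (mod m) and x = b (mod n), for coprime m, n,
    # found by trying x = a + m*k for k = 0, 1, ..., n - 1.
    assert gcd(m, n) == 1
    for k in range(n):
        if (a + m * k) % n == b:
            return a + m * k

x = crt_pair(2, 3, 3, 5)   # x = 2 (mod 3), x = 3 (mod 5) gives 8
x = crt_pair(x, 15, 2, 7)  # fold in x = 2 (mod 7)
print(x)                   # 23, Han Xin's answer; the next is 23 + 105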
Consider another pair of congruences whose moduli are the same as in the first example (i.e. 15 and 21), but for which we can find a solution: multiplying both sides by the inverse of 5 (which is 17), we obtain the result, and obviously k = 3 is a solution. So what determines whether a system of congruences has a solution or not? Let's consider the general case: x ≡ a (mod m), x ≡ b (mod n). Essentially, the problem asks us to find k and l such that x = a + km and x = b + ln. We can approach the problem as follows: a + km = b + ln, i.e. km - ln = b - a. Now suppose m and n have gcd(m, n) = d, with m = dmo and n = dno. Dividing through by d, we have kmo - lno = (b - a)/d. If (a - b)/d is an integer, then we can read the equation mod mo: we have -lno ≡ (b - a)/d (mod mo), which can be solved for l, as mo and no are coprime! Again, the above only makes sense if (a - b)/d is integral; and if (a - b)/d is an integer, then there is a solution. In summary: for a system of two congruent equations there is a solution if and only if - d = gcd(m, n) divides (a - b) (A short computational check of this criterion appears at the end of this chapter.) And the above generalises well to more than 2 congruences. For a system of n congruences, a solution exists if and only if, whenever i ≠ j, - gcd(mi, mj) divides (ai - aj) Decide whether a solution exists for each of the congruences. Explain why. - x ≡ 7 (mod 25) - x ≡ 22 (mod 45) - x ≡ 7 (mod 23) - x ≡ 3 (mod 11) - x ≡ 3 (mod 13) - x ≡ 7 (mod 25) - x ≡ 22 (mod 45) - x ≡ 7 (mod 11) - x ≡ 4 (mod 28) - x ≡ 28 (mod 52) - x ≡ 24 (mod 32) To go further This chapter has been a gentle introduction to number theory, a profoundly beautiful branch of mathematics. It is gentle in the sense that it is mathematically light and overall quite easy. If you enjoyed the material in this chapter, you would also enjoy Further Modular Arithmetic, which is a harder and more rigorous treatment of the subject. Also, if you feel like a challenge, you may like to try out the Problem Set we have prepared for you. On the other hand, the Project asks you to take a more investigative approach and work through some of the finer implications of the Chinese Remainder Theorem. Acknowledgement: This chapter of the textbook owes much of its inspiration to Terry Gagen, Emeritus Associate Professor of Mathematics at the University of Sydney, and his lecture notes on "Number Theory and Algebra". Terry is a much loved figure among his students and is renowned for his entertaining style of teaching.
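As promised above, the solvability criterion is a one-line check in code. The first test below is the unsolvable pair from the text; the second pair is an invented illustration with the same moduli, since the text's own solvable pair was not fully preserved.

from math import gcd

def congruences_compatible(a, m, b, n):
    # x = a (mod m) and x = b (mod n) share a solution
    # iff gcd(m, n) divides (a - b).
    return (a - b) % gcd(m, n) == 0

print(congruences_compatible(5, 15, 10, 21))  # False: gcd 3 does not divide -5
print(congruences_compatible(2, 15, 5, 21))   # True: gcd 3 divides -3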
The Great Migration was the twentieth-century mass movement of African Americans from the rural South to the cities primarily of the North and West. In the view of some historians, the migration began around the turn of the twentieth century and continued through the 1950s. Others set narrower parameters, arguing that the Great Migration began with World War I and ended with World War II. All agree, however, that the shift fundamentally altered the history of African Americans and of American society as a whole. The number of persons on the move was substantial. In 1910, roughly 90 percent of America's 10 million African Americans lived in the South, a figure roughly unchanged since the formal ending of slavery in 1865. Of these 9 million, roughly eight in ten lived in rural areas. Between 1915 and 1920, somewhere between 500,000 and 1 million African Americans left the rural South for the urban North and West. A similar number made the move in the 1920s. Like many migrants, a number of these individuals returned to the South. Still, enough stayed that of the roughly 12 million African Americans in 1930, 20 percent were living in the urban North and West. Cities in the Northeast and Midwest received the bulk of the migrants. New York's black population rose from about 92,000 in 1910 (1.9 percent of the total population) to just over 150,000 in 1920 and to nearly 330,000 by 1930 (4.8 percent). Chicago's rose from about 44,000 (2.0 percent) to nearly 110,000 to more than 230,000 in the same period (6.8 percent). Detroit's black population rose from under 6,000 (1.3 percent) to almost 41,000 to more than 120,000 (7.6 percent). Los Angeles saw its black population rise from about 8,000 in 1910 (2.5 percent) to almost 40,000 in 1930 (3.2 percent). Southern cities also saw significant growth. Atlanta's black population went from just under 52,000 in 1910 (33.8 percent) to nearly 63,000 in 1920 and more than 90,000 (33.3 percent) in 1930; that of Memphis climbed from around 50,000 (38.2 percent) to about 60,000 to roughly 100,000 (39.5 percent). The population of heavily African American states in the South, particularly those without large urban areas, declined in relative terms, as black majorities in Louisiana, Mississippi, and South Carolina turned into minorities. Like most other mass migrations in modern world history, the Great Migration was caused by a host of factors, which scholars divide into two general categories: push and pull. Push factors included environmental, economic, and social conditions. During the Reconstruction period after the Civil War, African Americans in the South had made significant economic and political gains. Later, however, their right to vote was largely taken away from them by the poll tax and other restrictive laws. Their economic gains were whittled away by declines in the prices for the commodities they raised as well as high rents and interest rates charged by landholders and merchants. Also damaging was an infestation by the boll weevil, an insect pest that destroyed much of the cotton crop in the mid-1910s. And the 1896 U.S. Supreme Court decision in Plessy v. Ferguson upholding segregation greatly restricted African Americans’ place in society. The pull factors were largely economic but included some political and social elements as well. 
First and foremost was the lure of relatively high-paying jobs, particularly in heavy industry, made possible by a booming manufacturing economy in both the urban North and a few Southern cities such as Richmond and Birmingham. World War I, with its huge demand for labor, opened new opportunities for blacks in defense plants, as did passage of the Immigration Acts of 1921 and 1924, which severely restricted the number of unskilled immigrant workers from Southern and Eastern Europe. While blacks were often restricted to the lowest-paying maintenance jobs in many factories, such employment still represented an economic improvement for former sharecroppers who, as late as the early twentieth century, were still barely connected to the cash economy. Northern cities, though largely segregated, also offered greater social inclusiveness for blacks, who formed impoverished but culturally dynamic neighborhoods in virtually every major city north of the Mason-Dixon line. The Jazz Age was the decade of the “New Negro”—urbanized, literate, and assertive—and the Harlem Renaissance, a black cultural efflorescence centered in the Harlem section of New York City. Also, by moving North, blacks were able to reenter the political process, as most Northern states had no restrictions on black voting. The result was that blacks became part of the calculation of urban political machines, their needs addressed by their representatives who, though limited in number, served in city and state governments and, after 1928, in the U.S. Congress as well. Yet African Americans who moved North did not escape racism entirely. While in their own neighborhoods they were largely free of the daily indignities and humiliations of the Jim Crow South, many Northern urban whites resented their growing numbers, seeing in African Americans a threat to their own precarious economic and social position. Many white workers saw blacks as economic competitors, a situation exacerbated by employers who used African Americans as strikebreakers. In addition, many immigrants feared that their tentative rise up the social ladder was jeopardized by intermingling with a caste of people—African Americans—uniformly looked down upon by the native born. Not surprisingly, the major antiblack riots of the late 1910s—in East St. Louis and Chicago—were triggered in part by white perceptions that blacks were trying to move into their neighborhoods. Despite this backlash, the Great Migration continued, slowed only by the depressed manufacturing economy of the 1930s. With the defense-industry demands of World War II and the great postwar economic boom of the 1950s, the Great Migration—whether called by that name or not—continued. By 1960, roughly 40 percent of all African Americans lived in the North and West, and nearly 75 percent were urban. James Ciment and Sam Hitchmough Gregory, James N. The Southern Diaspora: How the Great Migrations of Black and White Southerners Transformed America. Chapel Hill: University of North Carolina Press, 2005. Grossman, James. Land of Hope: Chicago, Black Southerners, and the Great Migration. Reprint ed. Chicago: University of Chicago Press, 1991. Hahn, Steven. A Nation Under Our Feet: Black Political Struggles in the Rural South from Slavery to the Great Migration. Cambridge, MA: Belknap Press of Harvard University Press, 2003. Harrison, Alferdteen. Black Exodus: The Great Migration from the American South. Oxford: University Press of Mississippi, 1992. Lemann, Nicholas.
The Promised Land: The Great Black Migration and How It Changed America. New York: Vintage Books, 1992. Wilkerson, Isabel. The Warmth of Other Suns: The Epic Story of America's Great Migration. New York: Random House, 2010.
According to data from the 2002 National Assessment of Educational Progress (NAEP), only 28% of fourth graders, 31% of eighth graders, and 24% of twelfth graders performed at or above a proficient (i.e., competent) level of writing achievement for their respective grade level (Persky, Daane, & Jin, 2003). This Access Center resource is intended to help teachers implement writing instruction that will lead to better writing outcomes for students with and without writing difficulties. We provide research-based recommendations, activities, and materials to effectively teach writing to the wide range of students educators often find in their classrooms. There are three apparent reasons why so many children and youth find writing challenging. First, composing text is a complex and difficult undertaking that requires the deployment and coordination of multiple affective, cognitive, linguistic, and physical operations to accomplish goals associated with genre-specific conventions, audience needs, and an author’s communicative purposes. Second, the profile of the typical classroom in the United States has undergone dramatic changes in the recent past. Many more students today come from impoverished homes, speak English as a second language, and have identified or suspected disabilities (Persky, Daane, & Jin, 2003). This increasing diversity of the school-aged population has occurred within the context of the standards-based education movement and its accompanying high-stakes accountability testing. As a consequence, more demands for higher levels of writing performance and for demonstration of content mastery through writing are being made of students and their teachers, while teachers are simultaneously facing a higher proportion of students who struggle not only with composing, but also with basic writing skills. Unfortunately, many teachers feel ill-equipped to handle these competing pressures, in part because they lack the prerequisite pedagogical knowledge, instructional capabilities, and valued resources for teaching writing, and in part because writing curricula, which exert a strong influence on teachers’ writing instruction, tend to be underdeveloped and misaligned with other curricula (Troia & Maddox, 2004). Third, the quality of instruction students receive is a major determinant of their writing achievement (Graham & Harris, 2002). In some classrooms, writing instruction focuses almost exclusively on text transcription skills, such as handwriting and spelling, with few opportunities to compose meaningful, authentic text (e.g., Palinscar & Klenk, 1992). In other classrooms, frequent and varied opportunities exist to use the writing process to complete personally relevant and engaging writing tasks, but little time is devoted to teaching important writing skills and strategies, as it is assumed these can be mastered through incidental teaching and learning (e.g., Westby & Costlow, 1991). Still in other classrooms, virtually no time is devoted to writing instruction or writing activities (e.g., Christenson, Thurlow, Ysseldyke, & McVicar, 1989). In perhaps a minority of classrooms, students are taught by exemplary educators who blend process-embedded skill and strategy instruction with writing workshop elements such as mini-lessons, sustained writing, conferencing, and sharing (e.g., Bridge, Compton-Hall, & Cantrell, 1997; Troia, Lin, Cohen, & Monroe, in preparation; Wray, Medwell, Fox, & Poulson, 2000). 
Yet, for students with disabilities who tend to develop or exhibit chronic and pernicious writing difficulties, even this type of instruction may be inadequate. These students need considerably more intensive, individualized, and explicit teaching of transcription skills and composing strategies that incorporates effective adaptations to task demands, response formats, student supports, and teacher practices (Troia & Graham, 2003; Troia, Lin, Monroe, & Cohen, in preparation). The box below presents several areas of difficulty for students with writing problems. Areas of difficulty for students with writing problems Students with writing problems show: - Less awareness of what constitutes good writing and how to produce it - Restricted knowledge about genre-specific text structures (e.g., setting or plot elements in a narrative) - Poor declarative, procedural, and conditional strategy knowledge (e.g., knowing that one should set goals for writing, how to set specific goals, and when it is most beneficial to alter those goals) - Limited vocabulary - Underdeveloped knowledge of word and sentence structure (i.e., phonology, morphology, and syntax) - Impoverished, fragmented, and poorly organized topic knowledge - Difficulty accessing existing topic knowledge - Insensitivity to audience needs and perspectives, and to the functions their writing is intended to serve Students with writing problems: - Often do not plan before or during writing - Exhibit poor text transcription (e.g., spelling, handwriting, and punctuation) - Focus revision efforts (if they revise at all) on superficial aspects of writing (e.g., handwriting, spelling, and grammar) - Do not analyze or reflect on writing - Have limited ability to self-regulate thoughts, feelings, and actions throughout the writing process - Show poor attention and concentration - Have visual-motor integration weaknesses and fine motor difficulties Students with writing problems: - Often do not develop writing goals and subgoals or flexibly alter them to meet audience, task, and personal demands - Fail to balance performance goals, which relate to documenting performance and achieving success, and mastery goals, which relate to acquiring competence - Exhibit maladaptive attributions by attributing academic success to external and uncontrollable factors such as task ease or teacher assistance, but academic failure to internal yet uncontrollable factors such as limited aptitude - Have negative self-efficacy (competency) beliefs - Lack persistence - Feel helpless and poorly motivated due to repeated failure See Troia, 2002; Troia & Graham, 2003 Qualities of strong writing instruction In order for teachers to support all students’ writing ability development, certain qualities of the writing classroom must be present. Four core components of effective writing instruction constitute the foundation of any good writing program: - Students should have meaningful writing experiences and be assigned authentic writing tasks that promote personal and collective expression, reflection, inquiry, discovery, and social change. - Routines should permit students to become comfortable with the writing process and move through the process over a sustained period of time at their own rate. - Lessons should be designed to help students master craft elements (e.g., text structure, character development), writing skills (e.g., spelling, punctuation), and process strategies (e.g., planning and revising tactics).
- A common language for shared expectations and feedback regarding writing quality might include the use of traits (e.g., organization, ideas, sentence fluency, word choice, voice, and conventions). The illustration below provides a graphic representation of the core components of effective writing instruction. Putting the pieces together: the core components of effective writing instruction All of these basic components must be thoughtfully coordinated to form a comprehensive writing program for students. Of course, these are only the basic features of strong writing instruction. Additional features, such as procedural supports for carrying out the writing process, a sense of writing community, integration of writing with other academic areas, assistance in implementing a writing program, and sustained professional development to strengthen teachers’ knowledge and skills are presented in the box below. Six additional attributes of a top-notch classroom writing program - Procedural supports such as conferences, planning forms and charts, checklists for revision/editing, and computer tools for removing transcription barriers - A sense of community in which risks are supported, children and teachers are viewed as writers, personal ownership is expected, and collaboration is a cornerstone of the program - Integration of writing instruction with reading instruction and content area instruction (e.g., use of touchstone texts to guide genre study, use of common themes across the curriculum, maintaining learning notebooks in math and science classes) - A cadre of trained volunteers who respond to, encourage, coach, and celebrate children’s writing, and who help classroom teachers give more feedback and potentially individualize their instruction - Resident writers and guest authors who share their expertise, struggles, and successes so that children and teachers have positive role models and develop a broader sense of writing as craft - Opportunities for teachers to upgrade and expand their own conceptions of writing, the writing process, and how children learn to write, primarily through professional development activities but also through being an active member of a writing community (e.g., National Writing Project) See Atwell, 1998; Calkins, 1994; Culham, 2003; Elbow, 1998a, 1998b; Graves, 1994; Spandel, 2001; Troia & Graham, 2003 These characteristics of exemplary writing instruction are equally relevant for elementary and secondary teachers — regardless of content area focus — and their young writers. If students are expected to become competent writers, then writing instruction must be approached in similar ways by all teachers who expect writing performance in their classrooms and must be sustained across the grades to support students as they gradually become accomplished writers. A major step in implementing strong writing instruction is establishing routines for (a) daily writing instruction, (b) covering the whole writing curriculum, and (c) examining the valued qualities of good writing. 
A typical writing lesson will have at least four parts:
Mini-lesson (15 minutes) Teacher-directed lesson on writing skills, composition strategies, and crafting elements (e.g., writing quality traits, character development, dialogue, leads for exposition, literary devices), which are demonstrated and practiced through direct modeling of the teacher's writing or others' work (e.g., shared writing, literature, student papers); initially, mini-lessons will need to focus on establishing routines and expectations.
Check-in (5 minutes) Students indicate where they are in the writing process (i.e., planning, drafting, revising, editing, publishing). The teacher asks students to identify how they plan to use what was taught during the mini-lesson in their writing activities for that day.
Independent Writing and Conferring (30 minutes) Students are expected to be writing or revising/editing, consulting with a peer, and/or conferencing with the teacher during this time.
Sharing (10 minutes) Students identify how they used what was taught during the mini-lesson in their own writing and what challenges arose. The teacher may discuss impressions from conferring with students; students share their writing (it does not have to be a complete paper and may, in fact, only be initial ideas for writing) with the group or a partner, while others provide praise and constructive feedback. Students discuss next steps in the writing assignment.
Publishing Celebration (occasionally) Students need a variety of outlets for their writing to make it purposeful and enjoyable, such as a class anthology of stories or poems, a grade-level newspaper or school magazine, a public reading in or out of school, a Web site for student writing, a pen pal, the library, and dramatizations.
Several tools can help the teacher maintain the integrity of this lesson structure. Examples of these tools follow. First, each student should have a writing notebook for (a) recording "seed" ideas for writing, such as memories, wishes, observations, quotations, questions, illustrations, and artifacts [e.g., a letter or recipe]; (b) performing planning activities; (c) drafting writing pieces; and (d) logging writing activities and reflections [see Fletcher, 1996]. Second, writing folders in which students keep their papers should be kept in boxes that are labeled for different phases of the writing process. These folders will help organize the different versions of a piece of writing students generate, as well as the various projects students work on at a given time. Third, some means for visually displaying check-in status will help students and the teacher monitor individual and class progress in writing. Each student might, for example, put a card in the appropriate slot of a class pocket chart labeled with the stages of the writing process. Or, the student might display a cube that represents the different writing stages (the sixth side might simply be labeled "help" and would be used when teacher assistance is required). Fourth, a personal journal (which may or may not be shared with the teacher and/or other students) helps teachers encourage writing outside of the writing period (e.g., during content area instruction, independent activities, or writing homework). Journal entries may later serve as material for writing projects, and a dialogue format can yield productive interactions between the author and readers (e.g., a double-column entry journal with space for another's remarks in response to the writer's entry). Teachers should give thought to how, if at all, the journal is to be evaluated.
Additional instructional considerations Writing workshop is an instructional model in which the process of writing is emphasized more than the written product and which highly values students’ interests and autonomy. Because so many teachers use some variation of writing workshop as the fundamental structure for their writing program, the attributes of an exemplary workshop are described in Specific Characteristics of a Strong Writers’ Workshop. Some of the most important attributes include explicit modeling, regular conferencing with students and families, high expectations, encouragement, flexibility, cooperative learning arrangements, and ample opportunities for self-regulation. On occasion, teachers may wish to assign topics or provide prompts for journaling or other writing activities. A list of potential prompts appropriate for late elementary and middle school grades is given in Writing Prompts. Using titles is a unique way of having students plan and write creative narratives that conform to a particular sub-genre or that have a distinctive tone. Other ways of prompting creative narratives include pictures, story starters, and story endings (these are particularly beneficial because they require a high degree of planning). Numerous persuasive topic prompts are listed because persuasive writing often is overlooked until secondary school, and because such topics can engage students in critical thinking about relevant issues. Of course, teachers will need to supplement this list with other prompts to trigger other forms of writing (e.g., exposition, poetry); many such prompts can and should be derived from the curriculum as well as students’ personal experiences and interests (for suggestions, see Fletcher, 2002; Heard, 1989; Portalupi & Fletcher, 2001; Young, 2002). Breaking down different genres in writing A carefully orchestrated routine should also guide coverage of the writing curriculum. One type of routine includes genre study. In genre study, each instructional cycle focuses on a single genre (e.g., poetry) and one or two particular forms of that genre (e.g., cinquain and haiku). To develop a strong sense of the genre, a genre study cycle should typically last about one marking period. For primary grade students, it is advisable to begin genre study with a highly familiar genre, such as personal narrative, so that students have an opportunity to become accustomed to the activities associated with genre study. Specific recommended procedures for narrative genre study and expository genre study are presented in the associated charts (see Genre Study Routines for Narrative Text and Genre Study Routines for Expository Text). 
For these and any other genre of instructional focus, teachers need to do the following: - Develop students’ explicit understanding of the genre structure, perhaps using a graphic aid or mnemonic device (see SPACE mnemonic for narratives) - Share “touchstone” texts that exemplify the structure and valued genre traits (perhaps solicit suggestions from students) - Give students time to explore potential ideas for writing through reflection, discussion, and research (writing notebooks are helpful for this) - Provide students with graphic aids for planning their texts - Have students quickly write (flash-draft) parts of their papers to diminish their reluctance to revise - Allow enough time for students to proceed through multiple iterations of revising and editing before publishing the finished product One way of thinking about the organization of genre study is to relate it to the process of growing a prize-winning rose for entry into a garden show. The first step is to plant the seed for writing by immersing students in touchstone texts (i.e., exemplary models) of the genre targeted for instruction and discussing the key qualities of those examples to illustrate the structure and function of the genre. The next step is to grow the seed idea through careful planning and small increments of drafting (much like giving a seed just the right amount of sunlight, water, and fertilizer to help it grow). Then, as any accomplished gardener will tell you, once a rose plant begins to grow, it is often necessary to prune back dead branches and leaves, add structural supports, and perhaps even graft new plants. Likewise, once a draft has been produced, it requires multiple trimmings of unworkable portions or irrelevant information; expansions through the addition of details, examples, and even new portions of text; and attention to writing conventions for ultimate publication. Displaying one’s writing in some public forum to gain valuable feedback and accolades, much like a prized rose, is the culmination of all the hard work invested in the writing process and the written product. Building and assessing advanced writing components Students need to develop an understanding of the valued aspects or traits of good writing and the capacity to incorporate these traits into their writing. Developing a routine for communicating about specific writing qualities is essential to the success of a writing program. A number of resources are available to help teachers do this (e.g., Culham, 2003; Spandel, 2001). The most commonly taught writing traits are ideas, organization, voice, word choice, sentence fluency, and conventions. These closely resemble the dimensions on which many state-mandated accountability measures base their writing achievement assessment (i.e., content, organization, style, and conventions). An example of a scoring rubric for teachers for all of these traits is the Analytic Trait Scoring Rubric (note that voice is not included on the rubric because it is difficult to reliably distinguish it from other traits and score accordingly. However, teaching it does have instructional value). This kind of rubric is appropriate for all types of writing. Examples of genre-specific rubrics, which focus on unique aspects of a genre such as its structure, include the Story Grammar Elements Rating Scale and Guidelines for Segmenting Persuasive Papers Into Functional Elements. 
To help students develop a sense of what constitutes a strong example of a particular trait, teachers can have students listen to or read excerpts from an exemplar touchstone text (which could be a student writing sample) to (a) identify the primary trait evident in the excerpts and (b) identify concrete evidence for characterizing a piece of writing as strong on that particular trait. Teachers also might ask students to develop their own definition for the trait and/or the descriptors for different scores on a trait rubric by examining superb, average, and weak examples. It is better to limit the number of traits that receive instructional focus at any given time to one or two; the decision regarding which traits are targeted should be guided by the genre and form of writing being taught as well as students’ needs. Writing portfolios are a valuable tool for providing students with feedback regarding how well they incorporate various traits in their writing. They also give students opportunities to reflect on the writing process and their writing accomplishments, and help them make informed choices about what pieces of writing exemplify their best work (see Writing Portfolio: Student Reflection). Portfolios also can provide a mechanism for teachers to reflect on their writing instruction and to establish individualized goals for students (see Writing Portfolio: Teacher Reflection). Accommodating all students Even when a top-notch writing program is firmly established in the classroom, some students will require additional assistance in mastering the skills and strategies of effective writing. Such assistance can be provided through adaptations, which include accommodations in the learning environment, instructional materials, and teaching strategies, as well as more significant modifications to task demands and actual writing tasks. A list of such adaptations is provided in Adaptations for Struggling Writers. Spelling and handwriting strategies Of course, elementary school teachers must explicitly teach spelling and handwriting to their students (this is not to say that secondary educators do not address these skills, but they do so to a much lesser extent). Research-based suggestions for teaching spelling and handwriting to students with and without writing difficulties are summarized in Tips for Teaching Spelling and Tips for Teaching Handwriting, respectively. For students with disabilities and for other struggling writers, more extensive practice and review of spelling, vocabulary, and letter forms and the thoughtful application of other adaptations (e.g., individualized and abbreviated spelling lists, special writing paper) by the teacher will be required. 
Whether teaching spelling or handwriting, certain curriculum considerations should be addressed (see Tips for Teaching Spelling and Tips for Teaching Handwriting, including the following: - Sequencing skills or grouping elements (words or letters) in developmentally and instructionally appropriate ways - Providing students opportunities to generalize spelling and handwriting skills to text composition - Using activities that promote independence - Establishing weekly routines (see Tips for Teaching Spelling and Tips for Teaching Handwriting) - Providing spelling or handwriting instruction for 15 minutes per day - Introducing the elements at the beginning of the week - Modeling how to spell the words or write the letters correctly - Highlighting patterns and pointing out distinctive attributes (or having students “discover” these) - Giving students ample opportunity to practice with immediate corrective feedback. Students can spend time practicing and self-evaluating their performance, with the teacher frequently checking their work (error correction is critical). Depending on how well the students do, the teacher may teach additional lessons. The students might also work with each other to study/practice and evaluate each other’s work. Finally, at the end of the week, the teacher should assess how well the students have learned the elements. Teaching composing strategies Students who struggle with writing, including those with disabilities, typically require explicit and systematic instruction in specific composing strategies. Even more emphasis should be placed on strategies that support the planning and revising aspects of the writing process, which trouble these students most. Fortunately, there have been numerous studies examining the effectiveness of various planning and revising strategies for students with and without high-incidence disabilities in multiple educational contexts (i.e., whole classrooms, small group instruction, individualized tutoring). Two excellent resources that describe this research and give advice on how to teach the many available strategies are Writing Better: Effective Strategies for Teaching Students With Learning Difficulties (Graham & Harris, 2005) and Making the Writing Process Work: Strategies for Composition and Self-Regulation (Harris & Graham, 1996). For this resource, only a few research-based strategies are presented in depth to give teachers an idea of how to implement composing strategies in their particular setting. Following are two planning strategies (one for narrative writing and one for persuasive writing) and five revising/editing strategies. For all of these, the teacher should first model how to use the strategy, then give students an opportunity to cooperatively apply the strategy while producing group papers, and finally let students practice using the strategy while writing individual papers. Throughout these stages of instruction, the teacher should provide extensive feedback and encouragement, discuss how to apply the strategy in diverse contexts, solicit students’ suggestions for improvement, and directly link strategy use to writing performance. All of the strategies presented here use acronyms that encapsulate the multiple steps of the strategies. Furthermore, each strategy has an accompanying watermark illustration that serves to cue the acronym. These features help reduce memory and retrieval demands for students, particularly those with learning problems. 
This is a narrative-planning strategy (personal or fictional) that incorporates the basic structure of narrative (i.e., SPACE) and the steps for planning and writing a good story (i.e., LAUNCH). A prompt sheet identifies the strategy steps and can be copied for each student or reproduced for a poster display. A planning sheet allows students to record their story ideas, writing goals, and self-talk statements. First, the student should establish and record personalized writing goals: a quality goal and a related quantity goal. For example, a student struggling with word choice (one of the six traits described previously) might identify a goal to increase quality rating from a 3 to a 5 on a 6-point scale (see Analytic Trait Scoring Rubric). A related quantity goal to help the student reach this level of quality in word choice might be to include a minimum of 10 descriptive words in the story. Next, the student should generate ideas for a story and record single words or short phrases that capture these ideas (it is important to discourage students from writing complete sentences on a planning sheet, as this will restrain flexibility in planning and yield a rough draft rather than a true plan). Note that space is provided for multiple ideas for each basic part of a story — students should be encouraged to explore several possibilities for setting and plot elements to foster creativity and to permit evaluation of each idea’s merit. Finally, the student should record self-talk statements, which are personalized comments, exhortations, or questions to be spoken aloud (initially) or subvocalized (once memorized) while planning and writing to help the student cope with negative thoughts, feelings, and behaviors related to the writing process or the task. For example, a student who believes writing is hard might record, “This is a challenge, but I like challenges and I have my strategy to help me do well.” The last sheet is a score card, which is used by a peer to evaluate the student’s writing performance. The evaluation criteria are closely linked to the valued qualities embedded in the strategy itself (i.e., million-dollar words, sharp sentences, and lots of detail), the basic structure of a narrative, and writing mechanics. Of course, these criteria could be modified to align more with particular writing traits, and the rating scale could be adjusted to match the scale used by the teacher. At the bottom of the score card, the writer tallies the points, determines any improvement (this implies progress monitoring, a critical aspect of strategy instruction that helps students see how their efforts impact their writing), and sets goals for the next story. DARE to DEFEND This strategy for planning persuasive papers incorporates the structure of persuasion (i.e., DARE) and the steps for planning and writing a good opinion paper (i.e., DEFEND). The materials for this strategy are very similar to those provided for SPACE LAUNCH there is a prompt sheet, a planning sheet, and a score card. Note that the student is required to identify and record ideas that support the position and ideas that counter that position. In the process of doing this, the student may decide to alter the position after evaluating the importance and relevance of each idea. The student can place an asterisk next to those ideas to elaborate upon or to provide concrete supporting evidence for, which encourages further planning. 
COPS and COLA These are revising/editing strategies intended to be used as checklists by individual students during an initial round of revision and editing. COPS (Mulcahy, Marfo, Peat, & Andrews, 1986) is a limited checklist and therefore is appropriate for primary grade students, but it can be used for any genre. COLA (Singer & Bashir, 1999), on the other hand, is a comprehensive checklist and thus is more suitable for older or better writers, but it is used for exposition and persuasion rather than narration. However, the items on the checklist can be modified to make it appropriate for narratives. COPS and COLA This strategy for individual revising (De La Paz, Swanson, & Graham, 1998) involves a greater degree of self-regulation on the part of the writer than checklists and is considerably more powerful; consequently, it is very helpful for students with writing difficulties. The prompt sheet lists the three steps for strategy deployment — compare (identifying discrepancies between written text and intended meaning), diagnose (selecting a specific reason for the mismatch), and operate (fixing the problem and evaluating the effectiveness of the change). These strategy steps occur first while the student attends to each sentence in the paper, and then, during a second “cycle,” while the student attends to each paragraph in the paper. A third cycle, focusing on the whole text, could be added. A minimum of two cycles is necessary to help the student attend to local as well as more global problems in the text. The diagnostic options for making meaningful revisions vary depending on the level of text to which the student is attending. The teacher will need to develop sets of diagnostic cards, color coded for each cycle, from which the student selects. This revising/editing strategy (Ellis & Friend, 1991) employs a checklist, but it does have two unique aspects. First, the student is expected to set writing goals before even beginning to write, and when finished revising and editing a paper, to determine if the student’s goals were met. Second, the student is expected to work with a peer to double-check editing. As for the other checklists, the teacher can add additional items once the student attains mastery of those listed. SEARCH This revising strategy (Neubert & McNelis, 1986) is appropriate for a second round of revision and editing (a third round would involve conferring with the teacher) during which students work with one another. The prompt sheet indicates that a peer editor is to first read the author’s paper and mark those parts of the paper that are imaginative, unusual, interesting, and confusing. Then, the peer editor praises the author for the positive aspects and questions the author about the confusing parts. The peer makes suggestions for how the paper can be improved and gives back the original, marked copy to the author. Finally, the author addresses the confusing parts marked on the paper and, if desired, makes changes suggested by the peer editor. Whenever a student elects to not make a requested or suggested modification, the student should be expected to adequately justify that decision (this encourages ownership and responsibility). Integrating writing instruction with content area learning Teachers often feel that devoting ample time to writing instruction is problematic given the voluminous content area information that must be covered in the typical curriculum (Troia & Maddox, 2004). 
Simultaneously, they sometimes struggle to identify relevant and stimulating writing topics and assignments that will help students develop their expertise as writers. One way to resolve these dilemmas for older students or students with higher level writing skills is to integrate writing instruction with content area learning. One important aspect of content area learning is developing communicative competence for interacting with others who have shared knowledge about a discipline or area of study. Individuals within a discipline — such as literary critics, historians, economists, biologists, physicists, and mathematicians — possess a unique way of talking and writing about the theories, principles, concepts, facts, methods of inquiry, and so forth connected with that discipline. Thus, a common goal of content area instruction and writing instruction is to help students acquire proficiency in disciplinary writing. This does not mean, however, that less content-driven writing exercises are undesirable or unnecessary; the inclusion of disciplinary writing is simply one part of a strong writing program (see Ten Additional Attributes of a Top-Notch Classroom Writing Program). If teachers have students write regularly in content area classes and use content area materials as stimuli for writing workshop, it is more likely that students will develop the capacity to communicate effectively in varied disciplinary discourse communities and will write for more educationally and personally germane purposes. There are a number of very simple ways to encourage content-relevant writing on a frequent basis in a social studies, science, or mathematics class. Following are some examples: The teacher can ask students to produce a one-minute closing paper (on an index card) at the end of each lesson in which they pose a genuine question about the topic studied that day, identify the key point from the content materials reviewed, summarize a discussion, or develop a question that might be used for a class test. Journaling is another vehicle for writing across the curriculum. In science class, for example, students can be asked to describe what was done, why it was done, what happened, and why it happened. In math, students might record the specific problem-solving procedures they employed for the problems assigned, why these were effective or ineffective, and advice they would offer to other students faced with the same math problems. In social studies, students can use their accumulating knowledge of a historical character to write a first-person fictionalized account of the individual’s life. As with all other forms of writing, students will require immersion in texts related to a particular area of study (e.g., Earth science, history, politics), extensive teacher modeling, and guided practice with feedback before being asked to independently produce writing that reflects a particular disciplinary perspective. So, for instance, students should be given ample opportunity to read the diaries and essays of the historical figures they are studying before attempting to keep a fictional journal as a historical character. A number of methods for integrating content area reading with writing have been developed by researchers. Following is a brief description of four methods. The story impressions method (McGinley & Denner, 1987), similar to exchange-compare writing (Wood, 1986), the steps for which are presented in ;Story Impressions/Exchange-Compare Writing utilizes a cooperative learning framework. 
Students are assigned to a group and given roles (researcher, scribe, content editor, proofreader, and reporter) for writing a brief summary that predicts the content of a lesson or unit text based on key vocabulary provided by the teacher. Once the group has read the text, they rewrite their summary to reflect the actual content of the text and their improved understanding of the material, and discuss this revised version with the rest of the class. A Jigsaw Content Learning group (Aronson & Patnoe, 1997) is another cooperative learning strategy. It can be coupled with double entry journals (Cox, 1996) for an effective and efficient means of learning from multiple source materials on a topic. The steps for these activities are outlined below. - Students are assigned to home groups and each person in a group is given a different source text (e.g., a magazine article about exercise and cardiovascular health, a newspaper clipping about new medical procedures and drugs that can help reduce the risk of heart attacks, a consumer brochure outlining healthy eating tips for promoting cardiac health, and a textbook chapter about the human circulatory system) to read. - Then, each student completes a double-entry journal while reading the assigned source text. This is a journal in which the student records some important piece of information from the source text on the left side of the journal page (with an accompanying page number) and a response, question, or evaluative comment on the right side. After completing their double-entry journal, students disperse to an expert group, a group where everyone else has read the same source text. Members of the expert group share their journal entries and summarize the material using a graphic organizer. - Finally, students return to their home groups to teach the other members about the content information they learned from their text and discuss how this information relates to that covered by the other texts. The double-entry journal could be expanded to a triple-entry journal by having students within the expert groups respond to each others’ responses, questions, or evaluations in a third column. (Carr & Ogle, 1987; Ogle, 1986) is a time-honored method for activating background knowledge about a topic (Know), setting learning goals (Want to Learn), summarizing learning from text (Learned), and promoting continued investigation (How to Find Out More). The plus (+) portion of the method is a written summary of what was learned and what additional things students would like to learn. This method can be used as a teacher-led pre- and post-reading class exercise or as a small-group activity. Below is an example of how this activity can work for a unit on geometry. - In math, a class might be about to embark on a unit of study related to geometry. The teacher asks students to brainstorm all that they know about geometry and list these under the Know column. This student-generated information should be organized into categories either by the teacher or by the students with teacher guidance (e.g., shapes, angles, spatial orientation, and measurement) that will facilitate text comprehension. - Then, the teacher lists under the Want to Learn column those things students would like to discover about geometry (which helps motivate them to read the text). 
- After reading, the teacher records under the Learned column what the students learned through the text, with particular attention paid to information that confirmed their prior knowledge, information that was inconsistent with what was anticipated, or new information. If appropriate, new categories are added. Next, students write their summary paragraph based on the information listed in the Learned column. - Finally, students identify how they would locate missing information in the How to Find Out More column (e.g., use a Web browser to search for documents related to geometry), which can help motivate additional learning. One last method for integrating content area reading with writing is the use of Writing Frames (Nichols, 1980). Writing frames help struggling writers use appropriate text organization for summarizing content area information that adheres to a basic structure (e.g., compare-contrast). The frames prompt coherent organization by providing partially completed sentences or transition words that, over time, can be faded as students become familiar with each frame. The examples provided can easily be adjusted to fit the contents of a particular source text. All of these methods are helpful for students who struggle with writing because they activate prior knowledge about the topic of study, require text summarization, and/or encourage discussion through which students are exposed to multiple perspectives. Of course, students who have writing problems sometimes have reading problems, so adaptations may be needed to help these students read the texts assigned. Some appropriate adaptations might include: - Having the text on tape, CD, or in electronic file format for computer readout - Having the struggling reader/writer work with a partner who is a better reader - Providing the student with a modified version of the text that is written with the same essential content but at a lower grade level Likewise, students who struggle with writing may have difficulty working in cooperative learning arrangements. Three proactive measures teachers can take are: - Carefully consider with whom students are most likely to work well in a group and place them in groups accordingly - Assign roles that are well suited for students’ particular strengths (e.g., assign a student who is an accomplished speaker but a struggling writer the role or reporter) - Seek professional development opportunities that focus on cooperative and peer-mediated learning A significant number of students perform well below the proficient level of writing achievement for their grade level (Persky, Daane, & Jin, 2003). The reasons for this are varied and complex. The number of exemplary writing programs are limited, and even when available they are often not adequate to meet the needs of students with disabilities. These students require intensive, individualized, and explicit teaching of various strategies if they are to improve their writing abilities. This document provides an information base for the core components of effective writing instruction, and examples of specific strategies and supports that can be used to develop a comprehensive writing program to meet the needs of all students. About the author Gene Fowler, celebrated author, editor, and journalist, epitomized the inherent difficulty of composing with his comment, “Writing is easy; all you do is sit staring at a blank sheet of paper until the drops of blood form on your forehead.”
Elevation grade (or slope) is the steepness, or degree of inclination, of a certain area of land. It can simply be the steepness between two specific points in a given area, the average of an area's gradual change in steepness, or an erratic variation in the elevation of the ground. We usually measure the ground's elevation as its altitude above sea level. We calculate terrain grade in the same way we calculate the slope of a line, as the ratio of its "rise to run." A grade of one hundred percent means that the slope's rise is equal to its run. Aside from expressing grade using ratios, we can also determine it in terms of percentages - we only have to multiply the ratio value by 100 to get it. Aside from this, we can also express grade in terms of angles. We call an angle that goes up from the horizontal line an "angle of elevation," while those that go down are "angles of declination." Terrain grades that are greater than one (terrain grade > 1) indicate that the terrain is steep. This elevation grade calculator determines, and expresses in three different ways, the slope of an earthen surface. You can also set the grade and determine the horizontal distance required to obtain the required change in vertical distance, as well as obtaining the angle of elevation.
Several instruments can be used to measure grade in the field. A surveyor's transit has an inbuilt telescope that can be rotated laterally or vertically over a tripod; focusing it on the leveling rod (which is like a huge ruler) will give a reading. A tape measure can also be used, but we will have to pull the tape taut to get a precise measurement - not pulling it tight will give a higher value than the true one because the tape will sag due to gravity. A clinometer is like a mini telescope with a protractor attached to its side.
On this page, you can also solve math problems involving right triangles. A right triangle calculator can compute side length, angle, height, area, and perimeter of a right triangle given any 2 values. Area = a*b/2, where a is the height and b is the base of the right triangle. The angle relations are sin(A) = a/c, cos(A) = b/c, tan(A) = a/b and sin(B) = b/c, cos(B) = a/c, tan(B) = b/a. Angle C is always 90 degrees; angle 3 is either angle B or angle A, whichever is not entered. Specifying two angles of a triangle allows you to calculate the third angle only.
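As an illustrative sketch (the function name and structure are my own, not taken from the calculator page), the relations just quoted can be used to solve a right triangle from its two legs:

```python
import math

def solve_right_triangle(a: float, b: float) -> dict:
    """Solve a right triangle given its two legs a and b (angle C = 90 degrees).

    Returns the hypotenuse c, the acute angles A and B in degrees,
    the area, and the perimeter.
    """
    c = math.hypot(a, b)                 # Pythagorean theorem
    A = math.degrees(math.atan2(a, b))   # tan(A) = a/b, the angle opposite side a
    B = 90.0 - A                         # the two acute angles are complementary
    return {
        "c": c,
        "A_deg": A,
        "B_deg": B,
        "area": a * b / 2,               # Area = a*b/2, as quoted above
        "perimeter": a + b + c,
    }

print(solve_right_triangle(3.0, 4.0))
# c = 5.0, A is about 36.87 degrees, B about 53.13 degrees, area 6.0, perimeter 12.0
```

Extending it to other pairs of known values (a leg and the hypotenuse, or a side and an angle) follows the same trigonometric relations.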
Again, this right triangle calculator works when you fill in 2 fields among the triangle's angles or sides. How do we calculate the elevation grade and angle of elevation? The relationship between rise and run gives us the popular equation that you might have already heard before: "slope is equal to rise over run." The grade may be given in terms of the angle of elevation from the horizontal plane, in terms of the percentage of deviation from the horizontal line, or simply in terms of the ratio of its "rise to run." Terrain grades that are less than 1 (terrain grade < 1) mean that the slope is gentle. To measure grade in the field, we can use a tool like a clinometer, which can directly provide the angle of elevation between two points, or a surveyor's transit, whose reading we can then translate into the difference in elevation between the leveling rod and the transit. Knowing the grade also matters in practice: building structures on a sloping area, for example, may require some additional foundation to avoid failing.
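Putting the three ways of expressing grade together, here is a minimal Python sketch (the helper name and structure are assumptions for illustration, not from any particular calculator):

```python
import math

def elevation_grade(rise: float, run: float) -> dict:
    """Express the slope between two points as a ratio, a percentage, and an angle.

    rise -- vertical change between the two points (same unit as run)
    run  -- horizontal distance between the two points
    """
    ratio = rise / run                                # grade as "rise over run"
    percent = ratio * 100                             # grade as a percentage
    angle_deg = math.degrees(math.atan2(rise, run))   # angle of elevation (negative: declination)
    return {"ratio": ratio, "percent": percent, "angle_deg": angle_deg}

# A 100% grade: the rise equals the run, giving a 45-degree angle of elevation.
print(elevation_grade(rise=10.0, run=10.0))

# A gentle terrain grade (< 1) versus a steep one (> 1).
print(elevation_grade(rise=2.0, run=25.0))    # about an 8% grade, roughly 4.6 degrees
print(elevation_grade(rise=30.0, run=20.0))   # a 150% grade, roughly 56.3 degrees
```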
Triangle given three sides (SSS) Geometry construction using a compass and straightedge This page shows how to construct a triangle given the length of all three sides, with compass and straightedge or ruler. It works by first copying one of the line segments to form one side of the triangle. Then it finds the third vertex where two arcs, drawn at the given distances from each end of that segment, intersect. Multiple triangles possible It is possible to draw more than one triangle that has three sides with the given lengths. For example, given the base AB, you can draw four triangles that meet the requirements. All four are correct in that they satisfy the requirements, and are congruent to each other. Note: This construction is not always possible If two sides add to less than the third, no triangle is possible. The construction can be justified step by step:
- Line segment LM is congruent to AB, because it is drawn with the same compass width (see Copying a line segment).
- The third vertex N of the triangle must lie somewhere on arc P: all points on arc P are distance AC from L, since the arc was drawn with the compass width set to AC.
- The third vertex N of the triangle must also lie somewhere on arc Q: all points on arc Q are distance BC from M, since the arc was drawn with the compass width set to BC.
- The third vertex N must therefore lie where the two arcs intersect, the only point that satisfies both of the previous conditions.
- Triangle LMN satisfies the three side lengths given: LM is congruent to AB, LN is congruent to AC, and MN is congruent to BC.
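The compass-and-straightedge construction can also be mirrored numerically: check the triangle inequality, then place the third vertex at the intersection of the two arcs (a circle of radius AC about one endpoint of the base and a circle of radius BC about the other). This is an illustrative sketch of that idea, not part of the original construction page:

```python
import math

def third_vertex(ab: float, ac: float, bc: float):
    """Place a triangle with the given side lengths, or return None if impossible.

    The base AB is laid along the x-axis from A = (0, 0) to B = (ab, 0); the third
    vertex C is the intersection of a circle of radius ac about A and a circle of
    radius bc about B, i.e. the two arcs of the construction.
    """
    # No triangle is possible if two sides add to less than (or exactly) the third.
    if ab + ac <= bc or ab + bc <= ac or ac + bc <= ab:
        return None
    x = (ab**2 + ac**2 - bc**2) / (2 * ab)   # from intersecting the two circles
    y = math.sqrt(ac**2 - x**2)              # take the intersection above the base
    return (0.0, 0.0), (ab, 0.0), (x, y)

print(third_vertex(4.0, 3.0, 2.0))   # a valid triangle
print(third_vertex(1.0, 2.0, 5.0))   # None: 1 + 2 < 5, so no triangle is possible
```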
“By a continuing process of inflation, governments can confiscate, secretly and unobserved, an important part of the wealth of their citizens. By this method they not only confiscate, but they confiscate arbitrarily,” John Maynard Keynes, “The economic consequences of the peace” Inflation tax is an implicit tax on nominal assets, such as cash, bonds and saving accounts. Inflation reduces the value of money and therefore reduces the real income of households. When governments create inflation by printing money, they usually benefit from the inflation as they get more nominal revenue and can reduce the real value of the government debt. Inflation can have the effect of improving government finances without actually increasing tax rates. The political advantage of an inflation tax is that it is easier to disguise than increasing tax rates. “Inflation is the one form of taxation that can be imposed without legislation.” In simple terms, suppose investors bought government bonds at 5% interest rate (expecting inflation of 3%). But, then inflation rose to 8%. Bondholders would lose out and governments would gain from fall in real value of debt. In this case, inflation causes a redistribution of wealth from savers (bondholders) to lenders (government) Seignorage and inflation tax Seignorage occurs when government print money. Seignorage refers to the difference between the value of money and the cost of producing it. For example, a US $100 bill costs 19.6 cents per note. Therefore from printing a $100 bill, there is seignorage (profit) of $99.804 However, this printing of money can cause inflation. Inflation is like a tax on people who hold money. Bracket creep – inflation One way governments can benefit from inflation is allowing income tax thresholds to be frozen so more workers pay higher tax rates. In March 2022, The UK Office for Budget Responsibility stated that the government’s decision to freeze income tax allowances will raise, by 2026/67, £18.8bn annually because higher inflation reduces the real value of the allowances by a greater amount. (Inflation tax at FT) Who pays the inflation tax? The costs of inflation will be paid by those who hold nominal money and are unable to get interest rates greater than the inflation rate. For example, suppose an investor bought a government bond at a fixed 3% interest rate. (Perhaps because they expected inflation of 2%). If inflation then increases to 7%, then the value of the bond will fall by 4% in real terms each year. The government benefits because it will be easier to repay the bond at the end of the term, because inflation is reducing its value. If the government increase benefits and public sector wages less than the inflation rate, then these benefit recipients and public sector workers will be worse off in real terms. The purchasing power of their income will fall. Workers who find themselves in a higher tax bracket. Suppose there is a higher tax bracket of 40% on income over £50,000. Inflation will lead to a rise in nominal wages and so more workers will see their nominal wage rise over £50,000. Therefore, workers who used to earn just under £50,000, will now start to pay a marginal tax rate of 40%, whereas before they didn’t. Savers. Suppose you have $11,000 savings in a checking account, but interest rates on current account are close to 0%. Inflation of 6.2% will lead to a reduction in the real value of these savings. 
The inflation will mean that consumers have to spend extra money and if this extra money comes from their savings, they will get fewer goods for the same cash. Bloomberg run this story from March 2022 – U.S. Households Face $5,200 Inflation Tax This Year They state “Inflation will mean the average U.S. household has to spend an extra $5,200 this year ($433 per month) compared to last year for the same consumption basket” However, I find this slightly misleading, because the inflation tax of $5,200 assumes nominal wages are unchanged. However, wages in the US are actually growing quite strongly, so workers financing spending from wages will be better off in real terms. However, if they were financing spending from cash savings, it would be correct. Is there an optimal inflation tax for a government? Seignorage and the inflationary benefits of increasing the money supply can be tempting for governments in a tight fiscal position. However, there is a great danger that the government will get carried away and cause inflation to increase out of control. When inflation starts to rise, it will cause various costs for the economy – menu costs, uncertainty, and discouraging investment. High rates of inflation can lead to poorer economic performance in the long-term, and this will hurt government finances in the long term. Countries that experience high inflation will put off investors from buying bonds and so it can become more expensive to finance debt in the long-term.
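A short numerical sketch of the examples above; the real-return formula is the standard Fisher relation, and the figures (a 5% bond bought expecting 3% inflation but facing 8%, $11,000 of savings at roughly 0% interest with 6.2% inflation, and $5,200 of extra spending per year) are the ones quoted in the text:

```python
def real_return(nominal_rate: float, inflation: float) -> float:
    """Real rate of return implied by a nominal rate and an inflation rate (Fisher relation)."""
    return (1 + nominal_rate) / (1 + inflation) - 1

# Bondholder example: a 5% bond bought expecting 3% inflation, but inflation turns out to be 8%.
print(f"expected real return: {real_return(0.05, 0.03):.2%}")   # about +1.94%
print(f"actual real return:   {real_return(0.05, 0.08):.2%}")   # about -2.78%

# Saver example: $11,000 in an account earning roughly 0% while prices rise 6.2%.
savings = 11_000
print(f"purchasing power after one year: ${savings / 1.062:,.2f}")   # about $10,357.82

# Household example: $5,200 of extra spending per year is roughly $433 per month.
print(f"${5_200 / 12:,.2f} per month")
```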
Vapor pressure or equilibrium vapor pressure is defined as the pressure exerted by a vapor in thermodynamic equilibrium with its condensed phases at a given temperature in a closed system. The equilibrium vapor pressure is an indication of a liquid's evaporation rate, it relates to the tendency of particles to escape from the liquid. A substance with a high vapor pressure at normal temperatures is referred to as volatile; the pressure exhibited by vapor present above a liquid surface is known as vapor pressure. As the temperature of a liquid increases, the kinetic energy of its molecules increases; as the kinetic energy of the molecules increases, the number of molecules transitioning into a vapor increases, thereby increasing the vapor pressure. The vapor pressure of any substance increases non-linearly with temperature according to the Clausius–Clapeyron relation; the atmospheric pressure boiling point of a liquid is the temperature at which the vapor pressure equals the ambient atmospheric pressure. With any incremental increase in that temperature, the vapor pressure becomes sufficient to overcome atmospheric pressure and lift the liquid to form vapor bubbles inside the bulk of the substance. Bubble formation deeper in the liquid requires a higher temperature due to the higher fluid pressure, because fluid pressure increases above the atmospheric pressure as the depth increases. More important at shallow depths is the higher temperature required to start bubble formation; the surface tension of the bubble wall leads to an overpressure in the small, initial bubbles. Thus, thermometer calibration should not rely on the temperature in boiling water; the vapor pressure that a single component in a mixture contributes to the total pressure in the system is called partial pressure. For example, air at sea level, saturated with water vapor at 20 °C, has partial pressures of about 2.3 kPa of water, 78 kPa of nitrogen, 21 kPa of oxygen and 0.9 kPa of argon, totaling 102.2 kPa, making the basis for standard atmospheric pressure. Vapor pressure is measured in the standard units of pressure. The International System of Units recognizes pressure as a derived unit with the dimension of force per area and designates the pascal as its standard unit. One pascal is one newton per square meter. Experimental measurement of vapor pressure is a simple procedure for common pressures between 1 and 200 kPa. Most accurate results are obtained near the boiling point of substances and large errors result for measurements smaller than 1kPa. Procedures consist of purifying the test substance, isolating it in a container, evacuating any foreign gas measuring the equilibrium pressure of the gaseous phase of the substance in the container at different temperatures. Better accuracy is achieved when care is taken to ensure that the entire substance and its vapor are at the prescribed temperature; this is done, as with the use of an isoteniscope, by submerging the containment area in a liquid bath. Low vapor pressures of solids can be measured using the Knudsen effusion cell method. In a medical context, vapor pressure is sometimes expressed in other units millimeters of mercury. This is important for volatile anesthetics, most of which are liquids at body temperature, but with a high vapor pressure. 
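Since the text notes that vapor pressure rises non-linearly with temperature according to the Clausius–Clapeyron relation, here is a minimal sketch of that relation; the enthalpy of vaporization used for water (about 40.7 kJ/mol) is an assumed, approximate constant, so the result is only an estimate:

```python
import math

R = 8.314  # J/(mol*K), gas constant

def vapor_pressure_kpa(p1_kpa: float, t1_k: float, t2_k: float, dh_vap_j_mol: float) -> float:
    """Estimate the vapor pressure at temperature t2_k from a known point (p1_kpa at t1_k).

    Uses the integrated Clausius-Clapeyron relation
        ln(P2 / P1) = -(dH_vap / R) * (1/T2 - 1/T1),
    which assumes the enthalpy of vaporization is constant over the range.
    """
    return p1_kpa * math.exp(-dh_vap_j_mol / R * (1.0 / t2_k - 1.0 / t1_k))

# Water boils at 373.15 K under 101.325 kPa; estimate its vapor pressure at 25 C (298.15 K).
p_25c = vapor_pressure_kpa(101.325, 373.15, 298.15, dh_vap_j_mol=40_700)
print(f"~{p_25c:.1f} kPa at 25 C")   # roughly 3-4 kPa; the measured value is about 3.2 kPa
```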
Anesthetics with a higher vapor pressure at body temperature will be excreted more readily as they are exhaled from the lungs. The Antoine equation is a mathematical expression of the relation between the vapor pressure and the temperature of pure liquid or solid substances. The basic form of the equation is log P = A − B / (C + T), and it can be transformed into this temperature-explicit form: T = B / (A − log P) − C, where P is the absolute vapor pressure of a substance, T is the temperature of the substance, A, B and C are substance-specific coefficients, and log is either log10 or loge. A simpler form of the equation with only two coefficients is sometimes used: log P = A − B / T, which can be transformed to T = B / (A − log P). Sublimations and vaporizations of the same substance have separate sets of Antoine coefficients, as do components in mixtures. Each parameter set for a specific compound is only applicable over a specified temperature range; temperature ranges are chosen to maintain the equation's accuracy to within a few percent, up to 8-10 percent. For many volatile substances, several different sets of parameters are available and used for different temperature ranges; the Antoine equation has poor accuracy with any single parameter set when used from a compound's melting point to its critical temperature. Accuracy is also usually poor when vapor pressure is under 10 Torr because of the limitations of the apparatus used to establish the Antoine parameter values. The Wagner equation is a more elaborate alternative expression for vapor pressure.
In the physical sciences, a partition coefficient or distribution coefficient is the ratio of concentrations of a compound in a mixture of two immiscible phases at equilibrium. This ratio is therefore a measure of the difference in solubility of the compound in these two phases; the partition coefficient refers to the concentration ratio of un-ionized species of a compound, whereas the distribution coefficient refers to the concentration ratio of all species of the compound. In the chemical and pharmaceutical sciences, both phases are usually solvents. Most often, one of the solvents is water, while the second is hydrophobic, such as 1-octanol. Hence the partition coefficient measures how hydrophobic a chemical substance is. Partition coefficients are useful in estimating the distribution of drugs within the body. Hydrophobic drugs with high octanol/water partition coefficients are distributed preferentially to hydrophobic areas such as the lipid bilayers of cells. Conversely, hydrophilic drugs are found preferentially in aqueous regions such as blood serum. If one of the solvents is a gas and the other a liquid, a gas/liquid partition coefficient can be determined. For example, the blood/gas partition coefficient of a general anesthetic measures how easily the anesthetic passes from gas to blood. Partition coefficients can also be defined when one of the phases is solid, for instance, when one phase is a molten metal and the second is a solid metal, or when both phases are solids; the partitioning of a substance into a solid results in a solid solution. Partition coefficients can be measured experimentally in various ways or estimated by calculation based on a variety of methods. Despite formal recommendation to the contrary, the term partition coefficient remains the predominantly used term in the scientific literature. The IUPAC, in contrast, recommends that the title term no longer be used, and that it be replaced with more specific terms.
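The Antoine equation above translates directly into code. The coefficients used below for water (A = 8.07131, B = 1730.63, C = 233.426, with P in mmHg and T in °C, roughly valid from 1 to 100 °C) are commonly quoted values and should be treated as an assumption rather than authoritative data:

```python
import math

def antoine_pressure(t_c: float, a: float, b: float, c: float) -> float:
    """Vapor pressure from the Antoine equation: log10(P) = A - B / (C + T)."""
    return 10 ** (a - b / (c + t_c))

def antoine_temperature(p: float, a: float, b: float, c: float) -> float:
    """Temperature-explicit form of the same equation: T = B / (A - log10(P)) - C."""
    return b / (a - math.log10(p)) - c

# Commonly quoted coefficients for water (P in mmHg, T in degrees C, roughly 1-100 C);
# remember that each coefficient set is valid only over its stated temperature range.
A, B, C = 8.07131, 1730.63, 233.426

print(antoine_pressure(100.0, A, B, C))     # ~760 mmHg: water boils near 100 C at 1 atm
print(antoine_temperature(760.0, A, B, C))  # ~100 C, recovering the normal boiling point
```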
For example, the partition constant is defined as KD = [A]org / [A]aq, where KD is the process equilibrium constant, [A] represents the concentration of solute A being tested, and "org" and "aq" refer to the organic and aqueous phases respectively. The IUPAC further recommends "partition ratio" for cases where transfer activity coefficients can be determined, and "distribution ratio" for the ratio of total analytical concentrations of a solute between phases, regardless of chemical form. The partition coefficient, abbreviated P, is defined as a particular ratio of the concentrations of a solute between the two solvents for un-ionized solutes; the logarithm of the ratio is thus log P. When one of the solvents is water and the other is a non-polar solvent, the log P value is a measure of lipophilicity or hydrophobicity. The defined precedent is for the lipophilic and hydrophilic phase types to always be in the numerator and denominator respectively. To a first approximation, the non-polar phase in such experiments is dominated by the un-ionized form of the solute, which is electrically neutral, though this may not be true for the aqueous phase. To measure the partition coefficient of ionizable solutes, the pH of the aqueous phase is adjusted such that the predominant form of the compound in solution is the un-ionized one; measurement at another pH of interest requires consideration of all species, un-ionized and ionized. A corresponding partition coefficient for ionizable compounds, abbreviated log P(I), is derived for cases where there are dominant ionized forms of the molecule, such that one must consider the partition of all forms, ionized and un-ionized, between the two phases; M is used to indicate the number of ionized forms. For instance, for an octanol–water partition it is log P(I)oct/wat = log ( Σ[Ai]octanol / Σ[Ai]water ), where the sums run over the un-ionized form and the M ionized forms of the solute. To distinguish this from the standard, un-ionized, partition coefficient, the un-ionized coefficient is assigned the symbol log P0, such that the indexed log P(I)oct/wat expression for ionized solutes becomes an extension of it into the range of values I > 0.
Bernthsen acridine synthesis The Bernthsen acridine synthesis is the chemical reaction of a diarylamine heated with a carboxylic acid and zinc chloride to form a 9-substituted acridine. Using zinc chloride, one must heat the reaction to 200-270 °C for 24 hours; the use of polyphosphoric acid will give acridine products at a lower temperature, but with decreased yields.
Potassium dichromate, K2Cr2O7, is a common inorganic chemical reagent, most often used as an oxidizing agent in various laboratory and industrial applications. As with all hexavalent chromium compounds, it is chronically harmful to health. It is a crystalline ionic solid with a bright, red-orange color. The salt is popular in the laboratory because it is not deliquescent, in contrast to the more industrially relevant salt sodium dichromate. Potassium dichromate is prepared by the reaction of potassium chloride on sodium dichromate. Alternatively, it can be obtained from potassium chromate, by roasting chromite ore with potassium hydroxide. It is soluble in water, and in the dissolution process it ionizes: K2Cr2O7 → 2 K+ + Cr2O72−, and the dichromate ion is in equilibrium with chromate: Cr2O72− + H2O ⇌ 2 CrO42− + 2 H+. Potassium dichromate is an oxidising agent in organic chemistry, and is milder than potassium permanganate. It is used to oxidize alcohols: it converts primary alcohols into aldehydes and, under more forcing conditions, into carboxylic acids. In contrast, potassium permanganate tends to give carboxylic acids as the sole products.
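Referring back to the partition-coefficient definitions above (not to the dichromate chemistry that follows), here is a minimal sketch of computing log P from measured concentrations; the concentration figures are invented purely for illustration:

```python
import math

def log_p(conc_organic: float, conc_aqueous: float) -> float:
    """log10 partition coefficient of the un-ionized solute.

    conc_organic -- concentration of the un-ionized solute in the lipophilic phase
                    (e.g., 1-octanol); by convention this goes in the numerator
    conc_aqueous -- concentration of the un-ionized solute in the aqueous phase
    """
    return math.log10(conc_organic / conc_aqueous)

def log_p_all_forms(conc_org_forms, conc_aq_forms) -> float:
    """log10 ratio of summed concentrations of all forms (un-ionized plus ionized),
    in the spirit of the log P(I) expression reconstructed above."""
    return math.log10(sum(conc_org_forms) / sum(conc_aq_forms))

# Hypothetical measurements (mmol/L) for a fairly hydrophobic compound:
print(log_p(31.6, 1.0))                           # about 1.5
print(log_p_all_forms([30.0, 0.5], [0.8, 4.2]))   # counts every species in both phases
```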
Secondary alcohols are converted into ketones. For example, menthone may be prepared by oxidation of menthol with acidified dichromate. Tertiary alcohols cannot be oxidized. In aqueous solution, the color change exhibited can be used to distinguish aldehydes from ketones. Aldehydes reduce dichromate from the +6 to the +3 oxidation state, changing the color from orange to the green of chromium(III). A ketone will show no such change because it cannot be oxidized further, so the solution will remain orange. When heated, potassium dichromate decomposes with the evolution of oxygen:

4 K2Cr2O7 → 4 K2CrO4 + 2 Cr2O3 + 3 O2

When an alkali is added to an orange-red solution containing dichromate ions, a yellow solution is obtained due to the formation of chromate ions. For example, potassium chromate is produced industrially using potash:

K2Cr2O7 + K2CO3 → 2 K2CrO4 + CO2

The reaction is reversible. Treatment with cold sulphuric acid gives red crystals of chromic anhydride:

K2Cr2O7 + 2 H2SO4 → 2 CrO3 + 2 KHSO4 + H2O

On heating with concentrated acid, oxygen is evolved:

2 K2Cr2O7 + 8 H2SO4 → 2 K2SO4 + 2 Cr2(SO4)3 + 8 H2O + 3 O2

Potassium dichromate has few major applications, as the sodium salt is dominant industrially. The main use is as a precursor to potassium chrome alum, used in leather tanning. Like other chromium compounds, potassium dichromate has been used to prepare "chromic acid" for cleaning glassware and etching materials; because of safety concerns associated with hexavalent chromium, this practice has been discontinued. It is used as an ingredient in cement, in which it retards the setting of the mixture and improves its density and texture; this usage causes contact dermatitis in construction workers. Potassium dichromate also has uses in photography and in photographic screen printing, where it is used as an oxidizing agent together with a strong mineral acid. In 1839, Mungo Ponton discovered that paper treated with a solution of potassium dichromate was visibly tanned by exposure to sunlight, the discoloration remaining after the potassium dichromate had been rinsed out. In 1852, Henry Fox Talbot discovered that exposure to ultraviolet light in the presence of potassium dichromate hardened organic colloids such as gelatin and gum arabic, making them less soluble. These discoveries soon led to the carbon print, gum bichromate, and other photographic printing processes based on differential hardening. After exposure, the unhardened portion was rinsed away with warm water, leaving a thin relief that either contained a pigment included during manufacture or was subsequently stained with a dye. Some processes depended on the hardening only, in combination with the differential absorption of certain dyes by the hardened or unhardened areas. Because some of these processes allowed the use of stable dyes and pigments, such as carbon black, prints with a high degree of archival permanence and resistance to fading from prolonged exposure to light could be produced. Dichromated colloids were also used as photoresists in various industrial applications, most notably in the creation of metal printing plates for use in photomechanical printing processes. Chromium intensification, or Photochromos, uses potassium dichromate together with equal parts of concentrated hydrochloric acid, diluted down to 10% v/v, to treat weak and thin black-and-white negatives. This solution reconverts the elemental silver particles in the film to silver chloride.
After thorough washing and exposure to actinic light, the film can be redeveloped to its end-point, yielding a stronger negative able to produce a more satisfactory print. A potassium dichromate solution in sulfuric acid can also be used to produce a reversal negative. This is effected by developing a black and white film but allowing the development to proceed more or less to the end point. The development is stopped by copious washing, and the film is then treated in the acid dichromate solution; this converts the silver metal to silver sulfate, a compound that is insensitive to light. After thorough washing and exposure to actinic light, the film is developed again, allowing the unexposed silver halide to be reduced to silver metal. The results obtained can be unpredictable, but sometimes excellent results are obtained, producing images that would otherwise be unobtainable. This process can be coupled with solarisation so that the end product resembles a negative and is suitable for printing in the normal way. CrVI compounds have the property of tanning animal proteins when exposed to strong light.

The boiling point of a substance is the temperature at which the vapor pressure of a liquid equals the pressure surrounding the liquid and the liquid changes into a vapor. The boiling point of a liquid varies depending upon the surrounding environmental pressure. A liquid in a partial vacuum has a lower boiling point than when that liquid is at atmospheric pressure. A liquid at high pressure has a higher boiling point than when that liquid is at atmospheric pressure. For example, water boils at 93.4 °C at 1,905 metres altitude. For a given pressure, different liquids will boil at different temperatures. The normal boiling point of a liquid is the special case in which the vapor pressure of the liquid equals the defined atmospheric pressure at sea level, 1 atmosphere. At that temperature, the vapor pressure of the liquid becomes sufficient to overcome atmospheric pressure and allow bubbles of vapor to form inside the bulk of the liquid. The standard boiling point has been defined by IUPAC since 1982 as the temperature at which boiling occurs under a pressure of 1 bar. The heat of vaporization is the energy required to transform a given quantity of a substance from a liquid into a gas at a given pressure. Liquids may change to a vapor at temperatures below their boiling points through the process of evaporation. Evaporation is a surface phenomenon in which molecules located near the liquid's edge, not contained by enough liquid pressure on that side, escape into the surroundings as vapor. On the other hand, boiling is a process in which molecules anywhere in the liquid escape, resulting in the formation of vapor bubbles within the liquid. A saturated liquid contains as much thermal energy as it can without boiling. Saturation temperature means boiling point: the saturation temperature is the temperature for a corresponding saturation pressure at which a liquid boils into its vapor phase. The liquid can be said to be saturated with thermal energy; any addition of thermal energy results in a phase transition. If the pressure in a system remains constant, a vapor at saturation temperature will begin to condense into its liquid phase as thermal energy is removed. A liquid at saturation temperature and pressure will boil into its vapor phase as additional thermal energy is applied. The boiling point corresponds to the temperature at which the vapor pressure of the liquid equals the surrounding environmental pressure. Thus, the boiling point is dependent on the pressure.
Boiling points may be published with respect to the NIST, USA standard pressure of 101.325 kPa, or the IUPAC standard pressure of 100.000 kPa. At higher elevations, where the atmospheric pressure is much lower, the boiling point is also lower. The boiling point increases with increased pressure up to the critical point, where the gas and liquid properties become identical; the boiling point cannot be increased beyond the critical point. Likewise, the boiling point decreases with decreasing pressure until the triple point is reached; the boiling point cannot be reduced below the triple point. If the heat of vaporization and the vapor pressure of a liquid at a certain temperature are known, the boiling point can be calculated by using the Clausius–Clapeyron equation, thus:

T_B = ( 1/T_0 − R·ln(P/P_0) / ΔH_vap )^−1

where: T_B is the boiling point at the pressure of interest, R is the ideal gas constant, P is the vapor pressure of the liquid at the pressure of interest, P_0 is some reference pressure at which the corresponding boiling temperature T_0 is known, ΔH_vap is the heat of vaporization of the liquid, and ln is the natural logarithm.

Saturation pressure is the pressure for a corresponding saturation temperature at which a liquid boils into its vapor phase. Saturation pressure and saturation temperature have a direct relationship: as saturation pressure is increased, so is saturation temperature. If the temperature in a system remains constant, vapor at saturation pressure and temperature will begin to condense into its liquid phase as the system pressure is increased. A liquid at saturation pressure and temperature will tend to flash into its vapor phase as system pressure is decreased. There are two conventions regarding the standard boiling point of water: the normal boiling point is 99.97 °C at a pressure of 1 atm, while the IUPAC recommended standard boiling point of water at a standard pressure of 100 kPa is 99.61 °C. For comparison, on top of Mount Everest, at 8,848 m elevation, the pressure is about 34 kPa and the boiling point of water is 71 °C. The Celsius temperature scale was defined until 1954 by two points: 0 °C being defined by the water freezing point and 100 °C by the water boiling point at standard atmospheric pressure.

Carbon tetrachloride, known by many other names, is an organic compound with the chemical formula CCl4. It is a colourless liquid with a "sweet" smell, and it is practically non-flammable at lower temperatures. It was widely used in fire extinguishers, as a precursor to refrigerants, and as a cleaning agent, but has since been phased out because of toxicity and safety concerns. Exposure to high concentrations of carbon tetrachloride can affect the central nervous system and degenerate the liver and kidneys. Prolonged exposure can be fatal. Carbon tetrachloride was first synthesized by the French chemist Henri Victor Regnault in 1839 by the reaction of chloroform with chlorine, but now it is produced mainly from methane:

CH4 + 4 Cl2 → CCl4 + 4 HCl

The production utilizes by-products of other chlorination reactions, such as from the syntheses of dichloromethane and chloroform. Higher chlorocarbons are subjected to "chlorinolysis":

C2Cl6 + Cl2 → 2 CCl4

Prior to the 1950s, carbon tetrachloride was manufactured by the chlorination of carbon disulfide at 105 to 130 °C:

CS2 + 3 Cl2 → CCl4 + S2Cl2

The production of carbon tetrachloride has steeply declined since the 1980s due to environmental concerns and the decreased demand for CFCs, which were derived from carbon tetrachloride. In 1992, production in the U.S./Europe/Japan was estimated at 720,000 tonnes.
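Returning to the Clausius–Clapeyron expression above, here is a minimal Python sketch (my own addition, using commonly tabulated values for water as illustrative inputs) that estimates a boiling point at reduced pressure from a known reference point:

```python
import math

R = 8.314  # ideal gas constant, J/(mol*K)

def boiling_point(p: float, p0: float, t0: float, dh_vap: float) -> float:
    """Clausius-Clapeyron estimate: T_B = (1/T0 - R*ln(P/P0)/dHvap)^-1.

    p, p0 may be in any (identical) pressure unit; t0 in kelvin;
    dh_vap in J/mol.
    """
    return 1.0 / (1.0 / t0 - R * math.log(p / p0) / dh_vap)

# Water: T0 = 373.15 K at P0 = 101.325 kPa; dHvap ~ 40660 J/mol.
# At ~34 kPa (roughly the Mount Everest pressure quoted above):
print(boiling_point(p=34.0, p0=101.325, t0=373.15, dh_vap=40660.0))
# ~344 K, i.e. ~71 degrees C, consistent with the Everest figure in the text.
```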
In the carbon tetrachloride molecule, four chlorine atoms are positioned symmetrically as corners in a tetrahedral configuration, joined to a central carbon atom by single covalent bonds; because of this symmetrical geometry, CCl4 is non-polar. Methane gas has the same structure, making carbon tetrachloride a halomethane. As a solvent, it is well suited to dissolving other non-polar compounds and oils. It can also dissolve iodine. It is somewhat volatile, giving off vapors with a smell characteristic of other chlorinated solvents, somewhat similar to the tetrachloroethylene smell reminiscent of dry cleaners' shops. Solid tetrachloromethane has two polymorphs: crystalline II below −47.5 °C and crystalline I above −47.5 °C. At −47.3 °C it has a monoclinic crystal structure with space group C2/c and lattice constants a = 20.3, b = 11.6, c = 19.9, β = 111°. With a specific gravity greater than 1, carbon tetrachloride will be present as a dense nonaqueous phase liquid if sufficient quantities are spilled in the environment. In organic chemistry, carbon tetrachloride serves as a source of chlorine in the Appel reaction. One specialty use of carbon tetrachloride is in stamp collecting, to reveal watermarks on postage stamps without damaging them. A small amount of the liquid was placed on the back of a stamp, sitting in a black glass or obsidian tray; the letters or design of the watermark could then be seen. Carbon tetrachloride was used as a dry cleaning solvent, as a refrigerant, and in lava lamps. In the case of the latter, carbon tetrachloride is a key ingredient that adds weight to the otherwise buoyant wax. It was once a popular solvent in organic chemistry, but, because of its adverse health effects, it is rarely used today. It is sometimes useful as a solvent for infrared spectroscopy, because there are no significant absorption bands above 1600 cm−1. Because carbon tetrachloride does not have any hydrogen atoms, it was historically used in proton NMR spectroscopy; however, in addition to being toxic, its dissolving power is low, and its use has been superseded by deuterated solvents. Use of carbon tetrachloride in the determination of oil has been replaced by various other solvents, such as tetrachloroethylene. Because it has no C–H bonds, carbon tetrachloride does not easily undergo free-radical reactions; it is therefore a useful solvent for halogenations either by the elemental halogen or by a halogenation reagent such as N-bromosuccinimide. In 1910, the Pyrene Manufacturing Company of Delaware filed a patent to use carbon tetrachloride to extinguish fires. The liquid was vaporized by the heat of combustion and extinguished flames, an early form of gaseous fire suppression. At the time it was believed the gas simply displaced oxygen in the area near the fire, but later research found that the gas actually inhibits the chemical chain reaction of the combustion process. In 1911, Pyrene patented a portable extinguisher that used the chemical. The extinguisher consisted of a brass bottle with an integrated handpump, used to expel a jet of liquid toward the fire. As the container was unpressurized, it could be refilled after use. Carbon tetrachloride was suitable for liquid and electrical fires, and the extinguishers were often carried on aircraft or motor vehicles. In the first half of the 20th century, another common fire extinguisher was a single-use, sealed glass globe known as a "fire grenade," filled with either carbon tetrachloride or salt water; the bulb could be thrown at the base of the flames to quench the fire.
The carbon tetrachloride type could be installed in a spring-loaded wall fixture with a solder-based restraint; when the solder melted from high heat, the spring would either break the globe or launch it out of the bracket, allowing the extinguishing agent to be automatically dispersed into the fire. A well-known brand was the "Red Comet", variously manufactured with other fire-fighting equipment in the Denver, Colorado area by the Red Comet Manufacturing Company from its founding in 1919 until manufacturing operations were closed in the early 1980s. Prior to the Montreal Protocol, large quantities of carbon tetrachloride were used to produce chlorofluorocarbon refrigerants.

Pyridine is a basic heterocyclic organic compound with the chemical formula C5H5N. It is structurally related to benzene, with one methine group replaced by a nitrogen atom. It is a flammable, weakly alkaline, water-soluble liquid with a distinctive, unpleasant fish-like smell. Pyridine is colorless. The pyridine ring occurs in many important compounds, including agrochemicals and vitamins. Pyridine was historically produced from coal tar. Today it is synthesized on the scale of about 20,000 tonnes per year worldwide. The molecular electric dipole moment is 2.2 debyes. Pyridine is diamagnetic and has a diamagnetic susceptibility of −48.7 × 10−6 cm3·mol−1. The standard enthalpy of formation is 100.2 kJ·mol−1 in the liquid phase and 140.4 kJ·mol−1 in the gas phase. At 25 °C pyridine has a viscosity of 0.88 mPa·s and a thermal conductivity of 0.166 W·m−1·K−1. The enthalpy of vaporization is 35.09 kJ·mol−1 at normal pressure, and the enthalpy of fusion is 8.28 kJ·mol−1 at the melting point. The critical parameters of pyridine are pressure 6.70 MPa, temperature 620 K and volume 229 cm3·mol−1. In the temperature range 340–426 K its vapor pressure p can be described with the Antoine equation

log10 p = A − B / (C + T)

where T is the temperature, A = 4.16272, B = 1371.358 K and C = −58.496 K. Akin to benzene, the pyridine ring forms a C5N hexagon. Electron localization in pyridine is reflected in the shorter C–N ring bond, whereas the carbon–carbon bonds in the pyridine ring have the same 139 pm length as in benzene. These bond lengths lie between the values for single and double bonds and are typical of aromatic compounds. Pyridine crystallizes in an orthorhombic crystal system with space group Pna21 and lattice parameters a = 1752 pm, b = 897 pm, c = 1135 pm, with 16 formula units per unit cell. For comparison, crystalline benzene is also orthorhombic, with space group Pbca, a = 729.2 pm, b = 947.1 pm, c = 674.2 pm, but the number of molecules per cell is only 4. This difference is related to the lower symmetry of the individual pyridine molecule. A trihydrate is also known. The optical absorption spectrum of pyridine in hexane contains three bands at the wavelengths of 195 nm, 251 nm and 270 nm. The 1H nuclear magnetic resonance spectrum of pyridine contains three signals with the integral intensity ratio of 2:1:2 that correspond to the three chemically different protons in the molecule. These signals originate from the α-protons, the γ-proton and the β-protons. The carbon analog of pyridine, benzene, has only one proton signal, at 7.27 ppm.
The larger chemical shifts of the α- and γ-protons in comparison to benzene result from the lower electron density in the α- and γ-positions, which can be derived from the resonance structures. The situation is rather similar for the 13C NMR spectra of pyridine and benzene: pyridine shows three signals, at δ = 150 ppm, δ = 124 ppm and δ = 136 ppm, whereas benzene has a single line at 129 ppm. All shifts are quoted for the solvent-free substances. Pyridine is conventionally detected by mass spectrometry methods. Because of the electronegative nitrogen in the pyridine ring, the molecule is electron deficient. It therefore enters less readily into electrophilic aromatic substitution reactions than benzene derivatives. Correspondingly, pyridine is more prone to nucleophilic substitution, as evidenced by the ease of metalation by strong organometallic bases. The reactivity of pyridine can be distinguished for three chemical groups. With electrophiles, electrophilic substitution takes place, where pyridine expresses aromatic properties. With nucleophiles, pyridine reacts at positions 2 and 4 and thus behaves similarly to imines and carbonyls. The reaction with many Lewis acids results in addition to the nitrogen atom of pyridine, similar to the reactivity of tertiary amines. The ability of pyridine and its derivatives to oxidize, forming amine oxides, is also a feature of tertiary amines. The nitrogen center of pyridine features a basic lone pair of electrons. Because this lone pair does not overlap with the aromatic π-system of the ring, pyridine is basic, having chemical properties similar to those of tertiary amines. Protonation gives pyridinium, C5H5NH+. The pKa of the conjugate acid is 5.25. The structures of pyridine and pyridinium are nearly identical, and the pyridinium cation is isoelectronic with benzene. Pyridinium p-toluenesulfonate is an illustrative pyridinium salt. In addition to protonation, pyridine undergoes N-centered alkylation and N-oxidation. Pyridine has a conjugated system of six π electrons; the molecule is planar and thus follows the Hückel criteria for aromatic systems. In contrast to benzene, the electron density is not evenly distributed over the ring, reflecting the negative inductive effect of the nitrogen atom. For this reason, pyridine has a dipole moment and a weaker resonance stabilization than benzene.
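Since the pyridine paragraph above quotes a complete Antoine parameter set, the equation is easy to evaluate numerically. The Python sketch below is my own addition; it assumes the coefficients return pressure in bar, an inference from the fact that the equation then gives roughly 1 atm at pyridine's normal boiling point near 388 K:

```python
import math

# Antoine coefficients for pyridine quoted in the text above
# (valid roughly 340-426 K; output pressure assumed to be in bar).
A, B, C = 4.16272, 1371.358, -58.496

def vapor_pressure(t_kelvin: float) -> float:
    """Antoine equation: log10(p) = A - B / (C + T)."""
    return 10 ** (A - B / (C + t_kelvin))

def boiling_temperature(p_bar: float) -> float:
    """Temperature-explicit form: T = B / (A - log10(p)) - C."""
    return B / (A - math.log10(p_bar)) - C

print(vapor_pressure(388.4))       # ~1.01 bar near the normal boiling point
print(boiling_temperature(1.013))  # ~388 K, i.e. ~115 degrees C
```

The round trip between the two forms is a useful sanity check: each function should invert the other within the parameter set's stated temperature range.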
When Simone Di Matteo first saw the patterns in his data, it seemed too good to be true. "It's too perfect!" Di Matteo, a space physics Ph.D. student at the University of L'Aquila in Italy, recalled thinking. "It can't be real." And it wasn't, he'd soon find out. Di Matteo was looking for long trains of massive blobs—like a lava lamp's otherworldly bubbles, but anywhere from 50 to 500 times the size of Earth—in the solar wind. The solar wind, whose origins aren't yet fully understood, is the stream of charged particles that blows constantly from the Sun. Earth's magnetic field, called the magnetosphere, shields our planet from the brunt of its radiation. But when giant blobs of solar wind collide with the magnetosphere, they can trigger disturbances there that interfere with satellites and everyday communications signals. In his search, Di Matteo was re-examining archival data from the two German-NASA Helios spacecraft, which launched in 1974 and 1976 to study the Sun. But this was 45-year-old data he'd never worked with before. The flawless, wave-like patterns he initially found hinted that something was leading him astray. It wasn't until uncovering and removing those false patterns that Di Matteo found exactly what he was looking for: dotted trails of blobs that oozed from the Sun every 90 minutes or so. The scientists published their findings in JGR Space Physics on Feb. 21, 2019. They think the blobs could shed light on the solar wind's beginnings. Whatever process sends the solar wind out from the Sun must leave signatures on the blobs themselves.

Making Way for New Science

Di Matteo's research was the start of a project NASA scientists undertook in anticipation of the first data from NASA's Parker Solar Probe mission, which launched in 2018. Over the next seven years, Parker will fly through unexplored territory, soaring as close as 4 million miles from the Sun. Before Parker, the Helios 2 satellite held the record for the closest approach to the Sun at 27 million miles, and scientists thought it might give them an idea of what to expect. "When a mission like Parker is going to see things no one has seen before, just a hint of what could be observed is really helpful," Di Matteo said. The problem with studying the solar wind from Earth is distance. In the time it takes the solar wind to race across the 93 million miles between us and the Sun, important clues to the wind's origins—like temperature and density—fade. "You're constantly asking yourself, 'How much of what I'm seeing here is because of evolution over four days in transit, and how much came straight from the Sun?'" said solar scientist Nicholeen Viall, who advised Di Matteo during his research at NASA's Goddard Space Flight Center in Greenbelt, Maryland. Helios data—some of which was collected at just one-third the distance between the Sun and Earth—could help them begin to answer these questions. The first step was tracing Helios' measurements of the blobs to their source on the Sun. "You can look at spacecraft data all you want, but if you can connect it back to where it came from on the Sun, it tells a more complete story," said Samantha Wallace, one of the study collaborators and a physics Ph.D. student at the University of New Mexico in Albuquerque. Wallace used an advanced solar wind model to link magnetic maps of the solar surface to Helios' observations, a tricky task since computer languages and data conventions have changed greatly since Helios' days.
Now, the researchers could see what sorts of regions on the Sun were likely to bud into blobs of solar wind.

Sifting the Evidence

Then, Di Matteo searched the data for specific wave patterns. They expected conditions to alternate—hot and dense, then cold and tenuous—as individual blobs engulfed the spacecraft and moved on, in a long line. The picture-perfect patterns Di Matteo first found worried him. "That was a red flag," Viall said. "The actual solar wind doesn't have such precise, clean periodicities. Usually when you get such a precise frequency, it means some instrument effect is going on." Maybe there was some element of the instrument design they weren't considering, and it was imparting effects that had to be separated from true solar wind patterns. Di Matteo needed more information on the Helios instruments. But most researchers who worked on the mission have long since retired. He did what anyone else would do, and turned to the internet. Many Google searches and a weekend of online translators later, Di Matteo unearthed a German instruction manual that describes the instruments dedicated to the mission's solar wind experiment. Decades ago, when Helios was merely a blueprint and before anyone ever launched a spacecraft to the Sun, scientists didn't know how best to measure the solar wind. To prepare themselves for different scenarios, Di Matteo learned, they equipped the probes with two different instruments that would each measure certain solar wind properties in their own way. This was the culprit responsible for Di Matteo's perfect waves: the spacecraft itself, as it alternated between two instruments. After they removed segments of data taken during routine instrument-switching, the researchers looked again for the blobs. This time, they found them. The team describes five instances that Helios happened to catch trains of blobs. While scientists have spotted these blobs from Earth before, this is the first time they've studied them this close to the Sun, and with this level of detail. They outline the first conclusive evidence that the blobs are hotter and denser than the typical solar wind.

The Return of the Blobs

Whether blob trains bubble in 90-minute intervals continuously or in spurts, and how much they vary between themselves, is still a mystery. "This is one of those studies that brought up more questions than we answered, but that's perfect for Parker Solar Probe," Viall said. Parker Solar Probe aims to study the Sun up close, seeking answers to basic questions about the solar wind. "This is going to be very helpful," said Aleida Higginson, the mission's deputy project scientist at Johns Hopkins University Applied Physics Laboratory in Laurel, Maryland. "If you want to even begin to understand things you've never seen before, you need to know what we've measured before and have a solid scientific interpretation for it." Parker Solar Probe performs its second solar flyby on April 4, which brings it 15 million miles from the Sun—already cutting Helios 2's record distance in half. The researchers are eager to see if blobs show up in Parker's observations. Eventually, the spacecraft will get so close it could catch blobs right after they've formed, fresh out of the Sun.
The story of the first American settlers is a remarkable chapter in the history of the United States. These intrepid pioneers laid the foundation for a nation that would become a beacon of freedom and opportunity. Their journeys across treacherous seas, struggles against the harsh wilderness, and interactions with indigenous peoples shaped the course of American history. In this comprehensive article, we will delve into the fascinating history of the first American settlers, exploring who they were, their motivations, and the enduring impact they had on the nation.

The Indigenous Peoples of America: First Inhabitants

Before the arrival of European settlers, the Americas were already home to a diverse array of indigenous peoples, each with their own distinct cultures, languages, and traditions. These Native Americans had been living on the continent for thousands of years, establishing thriving societies and civilizations. While the focus of this article is on European settlers, it is important to acknowledge that Native Americans were the first inhabitants of the land, and their history is integral to the broader narrative of America's settlement.

Early European Exploration: Precursors to Settlement

The arrival of European explorers in the late 15th and early 16th centuries marked the beginning of sustained contact between the Old World and the New World. Christopher Columbus's voyages in 1492 opened the door to further exploration, as European powers sought to establish trade routes and expand their influence in the Americas. These early explorations set the stage for the eventual settlement of North America.

The English Settlers: Jamestown and Plymouth Rock

The first English settlers to establish a permanent presence in what is now the United States arrived in the early 17th century. In 1607, a group of English colonists founded Jamestown in Virginia, marking the birth of the first permanent English settlement in North America. The settlement faced numerous challenges, including conflicts with Native Americans and harsh living conditions, but it ultimately survived and laid the groundwork for future English colonies. Another iconic group of English settlers, the Pilgrims, arrived in 1620 aboard the Mayflower and established Plymouth Colony in present-day Massachusetts. Unlike the Jamestown settlers, the Pilgrims sought religious freedom and were driven by a strong sense of religious identity. Their arrival is celebrated today as Thanksgiving, a holiday that commemorates their early interactions with the Wampanoag people.

The Dutch in New Netherland: A Lesser-Known Legacy

While the English settlements in Jamestown and Plymouth are widely recognized, the Dutch also played a significant role in the early colonization of America. In the early 17th century, the Dutch West India Company established the colony of New Netherland, which included parts of present-day New York, New Jersey, Delaware, and Connecticut. New Amsterdam, located on the southern tip of Manhattan Island, served as the capital of New Netherland. The Dutch brought with them a legacy of trade and commerce, and New Amsterdam quickly became a bustling trading hub. However, the English sought to expand their territory in North America, leading to conflicts with the Dutch. In 1664, the English captured New Amsterdam, renaming it New York in honor of the Duke of York. While the Dutch presence in North America was relatively short-lived, their contributions to the region's cultural and economic development are still evident today.
The Spanish Influence: Florida and the Southwest

The Spanish were among the earliest European explorers to set foot in what is now the United States. In 1513, Juan Ponce de León landed on the eastern coast of Florida, claiming the territory for Spain. This marked the beginning of Spanish exploration and colonization in the southeastern part of the continent. In the Southwest, Spanish explorers and missionaries established a series of missions and presidios (military outposts) in present-day Arizona, New Mexico, and Texas. These efforts were part of a larger mission to convert Native Americans to Christianity and expand Spanish influence in the region. The Spanish legacy in the Southwest is still evident in the architecture, culture, and place names of the region.

The French in North America: Explorers and Traders

The French also played a significant role in the early exploration and settlement of North America. French explorers such as Jacques Cartier and Samuel de Champlain ventured into North America in the 16th and early 17th centuries. They established fur trading posts and formed alliances with Native American tribes, particularly in the Great Lakes region and the Mississippi River Valley. One of the most notable French settlements in North America was Quebec, founded by Samuel de Champlain in 1608. Quebec became the capital of New France, a vast French colonial territory that stretched from present-day Canada to Louisiana. The French and their Native American allies engaged in a series of conflicts with the English, known as the French and Indian Wars, which had a profound impact on the history of North America.

The Swedish and Finnish Settlers: New Sweden on the Delaware

In the early 17th century, Swedish and Finnish settlers established a colony known as New Sweden along the Delaware River. The settlement was characterized by its diverse population, including Swedes, Finns, Dutch, and Native Americans. New Sweden was founded primarily as a trading post, and it thrived as a center for fur trading and agriculture. However, like the Dutch, the Swedish settlers found themselves in competition with the English. In 1655, the Dutch, with English support, captured New Sweden, bringing it under English control. Despite its relatively short existence, New Sweden left a lasting legacy, with many place names and cultural influences in the Delaware Valley bearing witness to its presence.

Motivations of the First American Settlers

The motivations of the first American settlers were diverse and multifaceted. They were driven by a combination of economic, religious, political, and personal factors. Understanding these motivations provides valuable insights into the complex tapestry of early American history. For many of the early settlers, economic opportunity was a primary motivator. The English colonists who established Jamestown were sponsored by the Virginia Company of London, which hoped to profit from the exploration and exploitation of the New World. The settlers were encouraged to search for valuable resources, such as gold and silver, and establish profitable agricultural enterprises. Similarly, the Dutch settlers in New Amsterdam were motivated by economic prospects. The Dutch West India Company sought to establish a profitable fur trade in the region and encouraged settlers to engage in commerce and agriculture.
New Amsterdam's strategic location along the Hudson River made it an ideal trading post, and the Dutch settlers played a crucial role in the development of the fur trade in North America.

Religious freedom was a driving force behind the settlement of some early American colonies. The Pilgrims, who arrived on the Mayflower and founded Plymouth Colony, sought to escape religious persecution in England. They were Separatists who believed in the autonomy of individual congregations and sought a place where they could practice their faith without interference. Similarly, the Quakers, who settled in Pennsylvania, were motivated by religious freedom. Pennsylvania was founded by William Penn, a Quaker, as a haven for religious dissenters. The Quakers believed in equality, pacifism, and religious tolerance, and they sought to establish a colony where these principles could be put into practice.

Political and Strategic Interests

The French and Spanish settlers in North America were often driven by political and strategic interests. The French sought to expand their colonial empire and secure control over valuable fur trading routes. French exploration and settlement in the Mississippi River Valley were driven by the desire to control the interior of the continent. Similarly, the Spanish established colonies in the Southwest to secure their claims to the territory and to extend their influence among indigenous populations. The Spanish missions served both religious and political purposes, as they sought to convert Native Americans to Christianity while also establishing a foothold in the region.

Personal Ambition and Adventure

Many early American settlers were motivated by personal ambition and a sense of adventure. Explorers like Christopher Columbus, Hernán Cortés, and Hernando de Soto were driven by the prospect of discovery and the allure of new lands. They were willing to risk their lives and resources in pursuit of fame and fortune. Entrepreneurs and traders, such as those in New Amsterdam and New Sweden, saw the New World as a land of opportunity. They were willing to venture into the unknown in search of profit and economic success. These settlers played a crucial role in establishing the economic foundations of the colonies.

Challenges Faced by the First American Settlers

The first American settlers faced a myriad of challenges as they attempted to establish colonies in a new and unfamiliar land. These challenges tested their resilience, adaptability, and determination. Understanding the difficulties they encountered provides valuable context for appreciating their accomplishments.

Survival in a Harsh Environment: Jamestown

The settlers in Jamestown faced numerous hardships, including food shortages, disease, and conflicts with Native American tribes. The swampy terrain and humid climate of Virginia were unfamiliar and unforgiving. The lack of suitable drinking water and the prevalence of waterborne diseases, such as dysentery, posed significant threats to the colony's survival. Additionally, the settlers' lack of agricultural knowledge and their initial focus on searching for gold and other precious resources contributed to food shortages. The "starving time" of 1609–1610 was a particularly challenging period, during which many colonists perished from hunger and disease.

Interaction with Native Americans: Plymouth and Jamestown

Interactions with Native Americans were a central aspect of the early settlers' experience.
In Plymouth, the Pilgrims established peaceful relations with the Wampanoag people, thanks in part to the assistance of the Native American named Squanto, who acted as a translator and mediator. In contrast, the settlers in Jamestown had a more tumultuous relationship with the Powhatan Confederacy, a powerful alliance of Native American tribes. The Powhatan chief, Powhatan, initially sought to establish peaceful trade relations with the English but grew increasingly wary of their intentions. The Jamestown colony faced periodic attacks and hostilities from the Powhatan Confederacy, leading to a tense and often violent relationship.

Conflicts and Competing Interests: New Netherland and New Sweden

The Dutch settlers in New Netherland faced challenges related to competing colonial interests. The English, who had established colonies in neighboring regions, sought to expand their territory and influence. This competition ultimately led to the capture of New Amsterdam by the English in 1664. Similarly, the Swedish and Finnish settlers in New Sweden encountered challenges related to territorial disputes with the Dutch and English. The desire to control valuable trade routes and resources in the Delaware Valley fueled conflicts and power struggles among European colonists.

Environmental Adaptation: Spanish Colonies in the Southwest

In the arid Southwest, Spanish settlers had to adapt to a challenging environment. They introduced new agricultural techniques, such as the construction of irrigation systems (acequias), to cultivate crops in the desert landscape. These innovations allowed them to establish successful agricultural communities and sustain their colonies. The Spanish also faced the challenge of managing relations with the indigenous populations of the region. The missions, with their dual roles as religious centers and economic enterprises, played a key role in these interactions. Spanish settlers sought to convert Native Americans to Christianity and incorporate them into Spanish colonial society.

Legacy of the First American Settlers

The legacy of the first American settlers is profound and far-reaching. Their actions and decisions have shaped the course of American history and continue to influence the nation's identity and character. The following sections explore the enduring impact of these early settlers in various aspects of American life.

Cultural and Linguistic Legacy: English, Dutch, and Swedish Influences

The early English, Dutch, and Swedish settlers left a lasting imprint on American culture and language. The English language, customs, and legal traditions introduced by the Jamestown and Plymouth colonists remain fundamental to American identity. The Dutch influence can be seen in place names (such as New York and Brooklyn), architectural styles (Dutch Colonial architecture), and culinary traditions (such as pretzels and doughnuts). The Dutch legacy in America is a testament to the enduring impact of their brief colonial presence. The Swedish and Finnish settlers also contributed to the cultural mosaic of America. Their presence in the Delaware Valley is reflected in place names, such as Swedesboro and Wilmington, and in the continued celebration of traditions like Midsummer festivals.

Religious Pluralism: The Pilgrims and the Quakers

The legacy of religious freedom championed by the Pilgrims and the Quakers has played a central role in shaping the American experience.
The principles of religious tolerance and the separation of church and state laid the groundwork for the First Amendment to the United States Constitution, which guarantees freedom of religion. The idea that individuals should have the freedom to practice their faith without fear of persecution or discrimination remains a cornerstone of American democracy. The diverse religious landscape of the United States today is a testament to the enduring legacy of religious pluralism established by these early settlers.

Economic Foundations: New Amsterdam and New Sweden

The economic foundations laid by the Dutch in New Amsterdam and the Swedish and Finnish settlers in New Sweden contributed to the development of trade and commerce in the United States. The bustling trading port of New Amsterdam set the stage for New York City's future economic prominence. The fur trade established by the Dutch had a lasting impact on the economic history of North America. It fostered relationships between European settlers and Native American tribes and contributed to the growth of a transatlantic trading network.

Territorial Expansion and Conflict: Spanish Colonies and the French Legacy

The territorial expansion and conflicts initiated by the Spanish and French settlers had a profound impact on the development of the United States. The Mississippi River Valley, once explored and claimed by the French, later became a critical transportation route and a focus of westward expansion. The Spanish legacy in the Southwest is evident in the cultural and architectural influences of the region. Mission architecture, with its distinctive adobe construction and bell towers, remains an iconic symbol of the American Southwest.

Conclusion: Honoring the First American Settlers

The first American settlers were a diverse group of individuals who embarked on journeys of exploration, ambition, and perseverance. Their motivations and challenges were as varied as the landscapes they encountered. Through their determination and resilience, they established the foundations of a nation that would become a beacon of freedom, opportunity, and cultural diversity. As we reflect on the history of the first American settlers, it is essential to acknowledge the complexities of their experiences and interactions with Native Americans. The story of America's settlement is not without its dark chapters, including instances of conflict, displacement, and injustice. Recognizing these aspects of history is an important step toward a more complete understanding of our nation's past. Today, we honor the first American settlers by preserving their legacies, celebrating their contributions, and continuing the ongoing dialogue about the impact of colonization on Native American communities. The history of America's settlement is a testament to the resilience of the human spirit, the pursuit of freedom, and the enduring quest for a better future. In commemorating the first American settlers, we pay tribute to their courage and determination in the face of adversity, and we are reminded that the story of America is a tapestry woven from the threads of countless journeys, aspirations, and dreams. It is a story that continues to evolve, shaped by the enduring legacy of those who took the first steps on the path to a new world.
Typically, the population is very large, making a census or a complete enumeration of all the values in the population impractical or impossible. The sample usually represents a subset of manageable size. Samples are collected and statistics are calculated from the samples so that one can make inferences or extrapolations from the sample to the population. This process of collecting information from a sample is referred to as sampling. The data sample may be drawn from a population without replacement, in which case it is a subset of the population; or with replacement, in which case it is a multisubset.

Kinds of samples

A complete sample is a set of objects from a parent population that includes ALL such objects that satisfy a set of well-defined selection criteria. For example, a complete sample of Australian men taller than 2 m would consist of a list of every Australian male taller than 2 m. But it wouldn't include German males, or tall Australian females, or people shorter than 2 m. So to compile such a complete sample requires a complete list of the parent population, including data on height, gender, and nationality for each member of that parent population. In the case of human populations, such a complete list is unlikely to exist, but such complete samples are often available in other disciplines, such as complete magnitude-limited samples of astronomical objects.

An unbiased (representative) sample is a set of objects chosen from a complete sample using a selection process that does not depend on the properties of the objects. For example, an unbiased sample of Australian men taller than 2 m might consist of a randomly sampled subset of 1% of Australian males taller than 2 m. But one chosen from the electoral register might not be unbiased since, for example, males aged under 18 will not be on the electoral register. In an astronomical context, an unbiased sample might consist of that fraction of a complete sample for which data are available, provided the data availability is not biased by individual source properties.

The best way to avoid a biased or unrepresentative sample is to select a random sample, also known as a probability sample. A random sample is defined as a sample where each individual member of the population has a known, non-zero chance of being selected as part of the sample. Several types of random samples are simple random samples, systematic samples, stratified random samples, and cluster random samples. A sample that is not random is called a non-random sample, drawn by non-probability sampling. Some examples of non-random samples are convenience samples, judgment samples, purposive samples, quota samples, snowball samples, and quadrature nodes in quasi-Monte Carlo methods. Statistical samples are used in many different situations.

Mathematical description of a random sample

In mathematical terms, given a random variable X with distribution F, a random sample of length n (where n may be any of 1, 2, 3, ...) is a set of n independent, identically distributed (iid) random variables with distribution F. A sample concretely represents n experiments in which the same quantity is measured. For example, if X represents the height of an individual and n individuals are measured, X_i will be the height of the i-th individual. Note that a sample of random variables (i.e. a set of measurable functions) must not be confused with the realizations of these variables (which are the values that these random variables take, formally called random variates).
In other words, X_i is a function representing the measurement at the i-th experiment, and x_i is the value actually obtained when making the measurement. The concept of a sample thus includes the process of how the data are obtained (that is, the random variables). This is necessary so that mathematical statements can be made about the sample and statistics computed from it, such as the sample mean and covariance.
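To make the definition concrete, here is a minimal Python sketch (my own, not part of the source article): it draws an iid sample of length n from an assumed population distribution F, here a normal distribution of heights with made-up parameters, and computes sample statistics that estimate the population's parameters.

```python
import random
import statistics

random.seed(42)                    # for reproducibility
n = 1000                           # sample length
POP_MEAN, POP_SD = 170.0, 10.0     # hypothetical population parameters (cm)

# Each draw is one realization x_i of the random variable X_i;
# the draws are independent and identically distributed.
sample = [random.gauss(POP_MEAN, POP_SD) for _ in range(n)]

print(statistics.mean(sample))     # sample mean, close to 170
print(statistics.stdev(sample))    # sample standard deviation, close to 10
```

As the section notes, the inference runs from sample to population: the sample mean and standard deviation are estimates of the unknown population values, and they get better as n grows.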
Understanding Logic and Reason

Reason and logic are two closely related forms of thinking involving the comparison of concepts. Both can be studied in terms of mathematics or philosophy, and can be considered together as well as apart. Before we get started, consider the following example, as it speaks to the foundation of logic and reason (and thus also illustrates the key differences between the two):

- Concepts/terms are the comparison of attributes (Conception). Ex. 1, 2, 3, =; or, blue ball, red chair.
- Judgements/propositions are the comparison of concepts/terms (Logic). Ex. 1+2=3; or, x+2=3; or, the blue ball is on the red chair.
- Inferences are the comparison of judgements/propositions (Reason). Ex. Since 1+2=3, and since x+2=3, therefore we can infer x=1; or, since the blue ball is on the red chair, and the blue ball can only be in one place at a time, therefore I can reason that the blue ball is necessarily not on the ground.

Meanwhile, each step above, from conception, to forming logical judgements as propositions, to then drawing reasoned inferences, follows a logical rule-set and requires increasingly complex degrees of reasoning. This is one way to illustrate the core of how logic and reason differ and relate. Below we explain the details.

TIP: The example above is hierarchical. Reason deals with both logic and concepts, logic deals with concepts, and concepts are essentially just terms. Thus, while logic and reason are different, they are also closely related in that almost all types of reasoning utilize logic (and both logic and reasoning involve comparing concepts). With that noted, a term can exist without logic or reason, for example "blue ball"; logic can exist without reason but generally must use concepts, for example "1+2=3"; and in most cases reasoning will involve using both logic and concepts, for example "since the blue ball is on the chair, it is necessarily not on the ground."

The Basics of Logic and Reason

In simple terms, logic describes comparing concepts/terms like 1, 2, or 3 using formal rule-sets like x+y=z, and reason describes drawing inferences from the comparison of logical rule-sets and terms, like: since 1+2=3, and since x+2=3, therefore x is probably 1. With that said, both logic and reason speak to very specific parts of the human process of thought. From our conception of rational and empirical data, to the related art/science of working with our thoughts about these things in language form, we can call this whole… concept… logic and reason, just "reason," very loosely just "logic," or logical reasoning (which by any name is the art/science of comparing terms using logic and reason). Consider the different ways we can use these terms, all of which are fully correct:

- One can use the terms logic and reason as synonyms to describe the method of logical reasoning as a whole (where deductive logic and deductive reasoning mean essentially the same exact thing, for example).
- One can use the terms in context, where logic describes specific rule-sets that produce definite outcomes and reason describes the art of drawing inferences.
- One can describe logic and reason as two steps within logical reasoning, where the first step is conceptualizing a term, for example, "look, there is an A and a B (each defined by their attributes)." Then logic describes judgements that compare concepts/terms, for example, "A=B and B=C." Then reason describes conclusions inferred from that logic, such as "since A=B and B=C, therefore A=C."
- One can say logic describes that sort of "if… then…" logic that computers and calculators use, and reason describes the sort of critical thinking humans use to create computer programs and to make clever use of their outputs.
- In fact, one can say a lot of different things, each time using terms like "inference," "deduction," "concept," "logic," and "reason" in maddeningly different ways… but let's not go down that rabbit hole yet.

So then, in general, logic describes rule-sets and reason describes inferences; but, that aside, both are parts of logic and reason, and thus the difference depends partly on context and is a bit semantic and interchangeable (especially when we use the terms in modern language). To better illustrate the difference between logic and reason, it will help to understand the three basic parts of thought which we alluded to above. Or let me rephrase that: with the introduction covered, the next section explains the most useful and correct way to understand logic and reason as a mode of thinking, as a science/art, and as a formal study.

Terms, Logic, Reason, and Skepticism

With that covered, let's return to the three-aspects-of-thought example we started the page with (as that really is the key here). Speaking loosely, there are three basic parts of thought (which bring us from conception/concept, to judgement/proposition, to drawing inferences), and therefore there are three natural parts of logical reasoning. They work like this:

- There are terms or concepts we conceptualize by rationalizing or observing (by comparing attributes); like Socrates, men, or mortality.
- There are logical judgements (propositions) we get by comparing terms; like Socrates is a man, and all men are mortal. <— This is Logic
- Then there are reasoned inferences we get by comparing judgements and propositions; like since Socrates is a man and since all men are mortal, therefore Socrates is mortal. <— This is Reason

Then one could reason further and apply skepticism by asking, "how do we know all men are mortal?" In fact, there are many different types of reasoning, just like there are many ways to compare terms in logical propositions (AKA statements)… and thus we can do much more than just contradict and be skeptical like Socrates. For example, we could reason further by creating the grounds for a hypothesis by pondering, "since energy can't be created or destroyed, perhaps men aren't truly mortal? After all, all men are made of energy… maybe this is a virtual simulation?" Meanwhile, one can apply logical rule-sets to that skepticism and hypothesis, just like the scientific method does. Ok, now we have a hypothesis; it's time to look for empirical data and apply formal rule-sets so we can get published! Perhaps we want to collect facts to prove/disprove our hypothesis. The art of knowing which facts to collect is a thing of reason… but the structure of those facts will follow some rules of formal logic. We will make reasoned arguments with those facts, but the structure of our arguments and our propositions will be logical.
From all this we can… infer… that logic is a judgement (statement) with an expected outcome, like a rule-set (A+B=C), and reason is the act of critical thinking: combining, associating, questioning, and drawing inferences from judgements. Meanwhile, terms (themselves a collection of attributes) are what is being worked with in logic (and thus in reason as well), and skepticism is one example of the art of questioning inferences and propositions. Together, these parts of logic and reason, which can be denoted as term, proposition, and inference, are three increasingly more complex and often probabilistic modes of thought (modes of conceptualizing and comparing terms, judgements, and inferences).

Are logic and reason arts or sciences? The answer is that they are both. There is both an art (an action refined by practice) and a science (a type of knowledge refined by study) to reason and logic. Or, there is a certain art to reasoning with the science of logic. Or is there?!

TIP: Logical reasoning can be divided into three main types: deduction, induction, and abduction (three methods of logical reasoning); meanwhile, arts like rhetoric and acts like being critical, being skeptical, and thinking "outside of the box" are aspects of using our reason.

TIP: As you may have gathered, it is near impossible to reason without logic (as logic precedes reasoning).

Terms, Logic, and Reason With a Syllogism

The simplest example of using both logic and reason is the syllogism already alluded to above. One could describe the syllogism as a thing of logic (its bare bones are really just a logical rule-set), but let's discuss it as a thing of logic and reason (where our conclusion is our inference, and we see the syllogism as a logical rule-set for making reasoned judgements). A syllogism is a logical argument that applies deductive reasoning (AKA deductive logic) to arrive at a logically certain conclusion based on the comparison of two or more propositions (statements, premises, judgements; two or more logical conclusions based on conceptions). Consider the classic example of a deductive argument (a logical argument):

- All men are mortal. (judgement; we reasonably assume all men are mortal)
- Socrates is a man. (judgement; we look and see he is a man)
- Therefore, Socrates is mortal. (inference; we draw the logical conclusion Socrates is mortal)

There is logic to the above line of reasoning (since all men are mortal, and since Socrates is a man, Socrates is mortal); it is a consistent rule-set (so it is logic in that sense), but it is reasoning because one is deducing inferences to draw conclusions from judgements.

Reason as a Synonym for Applying Logic

In the above section on the syllogism, all we really did was work with logic, yet the term "reasoning" was used a lot (even though we didn't do any complex reasoning, like compare syllogisms or approach our conclusions in a skeptical manner). This is, as noted in the introduction, generally explained by the fact that we use our language loosely (and often use logic and reason as synonyms). Synonyms and syllogisms aside (sorry), logic and reason are two very different parts of the same puzzle (AKA the process of thought). So let's look closer at how they are different by expanding on the concepts we have already introduced.

TIP: As you may have already noticed, most of the time terms, judgements, and inferences are bumping up against each other (making them hard to discuss alone).
Reasoning always involves terms and logic, logic always involves terms but can involve little to no reasoning, and terms only require the most basic forms of logic and reason to conceptualize. Pure logic (only logic) like mathematics requires almost no reasoning; a computer can do pure logic based on terms and make judgements. Meanwhile, advanced AI aside, comparing judgements and employing reason is a very human thing. A Definition of Logic and Reason At this point we can define our terms again as: - A Term is a name for any concept we have conceptualized. Conception (process of thought) -> Concept (product of thought) -> Term (name we use in language). - Logic tends to seek absolute truth via a series of judgements using specific rule-sets (like 1+1 = the judgement of 2, or Socrates + the features of a man = the judgement Socrates is a man). Judgement (the process form) -> Judgement (the product form) -> Proposition (language form). - Reason compares judgements and draws inferences associating terms and logic to seek probable truths and deeper understanding via a mix of formal and informal rule-sets (a sort of critical thinking that uses logic, skepticism, justified beliefs, philosophy, hypotheses, and many other modes of thought to draw inferences from judgements, terms, and reasoned arguments; reasoning is the process of deduction, induction, and abduction). Confusingly, an inference is always called an inference in any form (there is that pesky English language being confusing again!). How Logic and Reason are Different – Consider Terms, Judgements, and Inferences To view this another way, let’s look at an excerpt from a very simple and insightful resource on logic, the mostly forgotten (but free online) Deductive Logic by St. George William Joseph Stock: There are three processes of thought that all relate to each other. The reason we want to phrase these three different ways is because there are at least three parts to thought. § 32. There are three processes of thought (what is happening when we think): - Conception. - Judgement. - Inference. § 36. Corresponding to these three processes there are three products of thought (once we have thought we get): - The Concept. - The Judgement. - The Inference. § 38. When the three products of thought are expressed in language (when we express our thoughts, they are): - The Term. - The Proposition. - The Inference. In other words: There are three categories of logic: - Conception -> Concept -> Term. This category can be expressed as Terms / Concepts. - Judgement (the process form) -> Judgement (the product form) -> Proposition (language form). This category is Logic / Propositions. - Inference (the process form) -> Inference (the product form) -> Inference (the language form). This category is Reason / Conclusions / Inferences. TIP: So really there are three basic things to deal with here that share names and get many names, but have specific meanings depending on context. It is a simple concept, but kind of tricky to master. Don’t worry about mastering it; just get the three basics: terms, logic (propositions), and reason (inferences). Those are the key elements of a deductive argument (plus, as we’ll discuss in a second and alluded to above, “the relation”). Then this relates to the idea that: - The concept is the result of comparing attributes. - The judgement is the result of comparing concepts. - The inference is the result of comparing judgements. And likewise (to phrase the same thing in different words): - The term is the result of comparing attributes.
- The proposition is the result of comparing terms. - The inference is the result of comparing propositions. The Laws of Thought Compare all of that to the idea that the laws of thought are all reducible to the three following axioms, which are known as The Three Fundamental Laws of Thought: - The Law of Identity: Whatever is, is; or, in a more precise form, Every A is A. - The Law of Contradiction: Nothing can both be and not be; Nothing can be A and not A. - The Law of Excluded Middle: Everything must either be or not be; Everything is either A or not A. TIP: And here I would also note, especially if we were discussing induction, that we must also consider the “laws of probability” (a thing can be in a state of superposition or can be “likely A” or “likely B.”) And we have all the tools we need to use to understand and employ logic and reason. The rule-sets and the judgements made from comparing terms are logic, and the inferences made from comparing propositions is reason, meanwhile terms are the names given to concepts. Or, in simple logic: - Observe concepts. ex. men, mortals, Socrates, Plato. - Make judgements about concepts (logic). ex. All men are Mortal, Socrates is mortal, Plato is mortal. - Compare judgements and make inferences (reason). ex. If Socrates and Plato fight to the death, there can be only one left alive, after-all, all men are mortal, even the great Socrates. TIP: In other words, reason deals with probabilities and logic deals with absolutes (and therefore deduction is most logic and induction and abduction more things of reason). Logic seeks A + B = C judgements, and reason works with those judgements. Both are aspects of the art and science of comparing terms (where a term is A or B itself). TIP: these modes of thought are all different, they have the same general end, which is the approaching of truth and understanding. Examples of Logic and Reason Logic is what makes a computer’s brain work, and reason is the skill one uses to fact-check using a search engine (comparing articles, being skeptical, applying logic, spotting false information based on experience, etc). Logic is solving mathematic equations, reason is thinking of new ways to apply, combine, and refine those equations and the art of drawing inferences from them (the art of using deductive logic). Moving On and Other Definitions With the above said, given the close relation of logic and reason, and the sea of definitions from 300’s BC to today, I’m not going to offer a single specific answer as to exactly how to define logic and reason. Rather, as I’ve already done above, I’m going to continue to offer many examples of the ways in which logic and reason relate and differ, offering my own opinions, and resources like Wikipedia definitions and insight into classical texts on logic. So then, before we move on, here are the Wikipedia definitions: - Logic is generally held to consist of the systematic study of the form of arguments. A valid argument is one where there is a specific relation of logical support between the assumptions of the argument and its conclusion. (In ordinary discourse, the conclusion of such an argument may be signified by words like therefore, hence, ergo and so on.) - Reason is the capacity for consciously making sense of things, applying logic, establishing and verifying facts, and changing or justifying practices, institutions, and beliefs based on new or existing information. 
It is closely associated with such characteristically human activities as philosophy, science, language, mathematics, and art and is normally considered to be a definitive characteristic of human nature. Reason, or an aspect of it, is sometimes referred to as rationality. TIP: In words, Reason is what makes us human, logic is what we often use to reason, both are modes of human thinking, but (cognitive AI aside) only logic can be mimicked by a modern machine. I keep using the computer analogy here, sorry. Logic vs. Reason With the above definitions in mind, the first thing to know about logic and reason is… that I’ve never seen the terms logic and reason defined perfectly (I’ve seen them defined very well many times, but never “perfectly”). With that said, certainly, logic is the A+B=C science one (it is a formal system with clear rule-sets), and reason is the more broad and loose “art of critical thinking” that uses logic as one of its tools (it is a more informal system of induction, deduction, and associating that mixes in beliefs, opinions, and facts and rationalizes toward many ends). Orators and philosophers use reason liberally, while the insurance adjuster generally sticks to logic. Still, outside of very specific cases, it is hard to do any sort of thinking that doesn’t employ concepts, logic, and reason. There are correct ways to reason (valid vs. invalid, sound and unsound), but the system has somewhat loose bounds beyond this… meanwhile logic demands rules so exact that a calculator could follow them flawlessly. Logic is more a thing of the empirical, reason more theoretical. Logic is more a natural science, reason well suited for moral philosophy. A skeptic uses reason to deduce a range of probable answers, logic is binary. Reason uses logic, but logic doesn’t have much need for reason after the rule-set has been formulated. Reason is more like the human brain, pulling from experience, logic, ethics, morals, and tastes, considering many complex layers, associating and combing ideas, and logic is more like the cold and hard mathematics of a calculator. With that in mind, Reason is more a thing of philosophy and critical thinking that moves one toward understanding of any sort, it can follow logical rule-sets, but can also use beliefs and opinions, it seeks truth and understanding over consistent answers (and can even be used to sway opinions, such as in oration and rhetoric). A debater reasons, a lawyer reasons, and a person reasons with their friend to get them to share their cake. Those who reason almost always use an assortment of different types of logic in their reasoning. Logic is more a science that involves a series of judgments that includes formal logic like that used in mathematics and computing, and deductive logic (where conclusions are drawn from premisses). A computer uses logic, a statistician uses logic, and a person reasoning often uses logic in their reasoning. So, one would use logic to program a computer, but one might use reason to come up with easier ways to program the computer. It makes sense to use logic in your reasoning, but logic itself doesn’t always require the use of reason. In this sense we can say reason is a broad category of thinking, where logic is the aspect of thinking that can be translated to actionable and consistent rule-sets. To frame this another way: Both seek understanding, but logic is what makes a computer run, while reason is what made Jobs and Woz decide to build computers. 
One doesn’t use reason to do their math homework; they reason with their mother to stay home from school. One might use logic in a debate, but the art of rhetoric can sometimes involve using reason and not logic. - Thus, Reason generally uses logic, although it doesn’t have to (one can use specific rule-sets in their reasoning such as because A is true, therefore B is also true; but they can also say, “knowing B is true, how can I use this to convince a person of C”). - Meanwhile, logic doesn’t generally require reasoning (as finding “X” in the equation 1+X=2 requires nothing more than a rule-set). ON LOGIC: A master of logic was Lewis Carroll (the guy who wrote Alice in Wonderland). Alice is actually a story poking fun at theoretical mathematics and almost all of his works are about logic. If you like your brain, you’ll love having it destroyed by Carroll’s “so far from Alice it stops being funny pretty quickly, and then becomes fun again” Symbolic Logic. The more logic you know, the better you’ll be able to reason, so if Carroll’s name isn’t reason enough… I mean, logically speaking. Ok, maybe that didn’t sell you, but if that is overwhelming try A System of Logic, Ratiocinative and Inductive by John Stuart Mill (it is even more burly and will make you appreciate Carroll). Sorry, bad joke. Do read those, but start with Carroll’s Game of Logic (there is cake!) If you understand logic, you understand reason: reason is logic and then everything left over pertaining to critical thinking. How to Argue – Philosophical Reasoning: Crash Course Philosophy #2 Reason can be used to seek any truth or understanding, but its inputs aren’t limited to facts and rule-sets. One can reason using emotions, opinions, or beliefs, and can arrive at illogical answers. Reason is a process of critical thinking, but the result doesn’t define it. For example, I can say, “I believe in Santa, and I want Santa to get me a unicorn, and I have only enough ingredients to make chocolate chip or blueberry cookies, since I don’t think Santa’s reindeer like chocolate, I’ll make the blueberry cookies.” That line of reasoning is actually valid, even though nothing in the story is fact-based. We can say the line of reasoning was “logical,” as it followed a rule-set… but I certainly wouldn’t describe it as “logic of the highest order”. Meanwhile, if I say, “I believe in Santa, and I want Santa to get me a unicorn, so I’m going to go steal stuff…” that reasoning is not very good. Santa doesn’t get bad kids presents, so stealing stuff is not (logically speaking) going to net one a Unicorn. In other words, my logic was off in my reasoning, so my reasoning was not good. Logic and reason are both things of “pure reason”, but where logic deals with formal rule-sets most often applied to the natural sciences, reason can pull from anything and be applied to anything. So, while the distinctions are somewhat semantic in everyday language, there is a world of difference between the formal science of logic (which can make a computer run), and the more ethereal art of reasoning (which is what Google tries to get its search engines to do with endless lines of coded logic when you ask it “what is the difference between reason and logic”). All the Enlightenment founders used reason, but Newton’s mathematics are a thing of logic. We can apply reason even when we don’t know all the facts logically. To end, I’d say this: - Logic is the science of following a rule-set that produces consistent results.
- Reason is the application of “pure logic,” empirical evidence, experiment, and skepticism to find truths, facts, and theories (AKA “critical thinking”). - Enlightenment is simply the natural conclusions to which reason leads. In other words, if the goal is enlightenment, the foundation must start with logic, and to do logic, we must properly define our terms. TIP: Logic and reason are also music programs. Great ones actually. In music, learning the intervals and chords and scales is a thing of logic, but improvising with those rule-sets is a thing of reason. 😀
The map shows the inner part of the Milky Way has two prominent, symmetric spiral arms, which extend into the outer galaxy where they branch into four spiral arms. “For the first time these arms are mapped over the entire Milky Way,” said Pohl, an Iowa State associate professor of physics and astronomy. “The branching of two of the arms may explain why previous studies – using mainly the inner or mainly the outer galaxy – have found conflicting numbers of spiral arms.” The new map was developed by Pohl, Peter Englmaier of the University of Zurich in Switzerland and Nicolai Bissantz of Ruhr-University in Bochum, Germany. As the sun and other stars revolve around the center of the Milky Way, researchers cannot see the spiral arms directly, but have to rely on indirect evidence to find them. In visible light, the Milky Way appears as an irregular, densely populated strip of stars. Dark clouds of dust obscure the galaxy’s central region so it cannot be observed in visible light. The National Aeronautics and Space Administration’s Cosmic Background Explorer satellite was able to map the Milky Way in infrared light using an instrument called the Diffuse IR Background Experiment. The infrared light makes the dust clouds almost fully transparent. Englmaier and Bissantz used the infrared data from the satellite to develop a kinematic model of gas flow in the inner galaxy. Pohl used the model to reconstruct the distribution of molecular gas in the galaxy. And that led to the researchers’ map of the galaxy’s spiral arms. The Milky Way is the best studied galaxy in the universe because other galaxies are too far away for detailed observations. And so studies of the galaxy are an important reference point for the interpretation of other galaxies. Astrophysicists know that the stars in the Milky Way are distributed as a disk with a central bulge dominated by a long bar-shaped arrangement of stars. Outside this central area, stars are located along spiral arms. In addition to the two main spiral arms in the inner galaxy, two weaker arms exist. These arms end about 10,000 light-years from the galaxy’s center. (The sun is located about 25,000 light-years from the galactic center.) One of these arms has been known for a long time, but has always been a mystery because of its large deviation from circular motion. The new model explains the deviation as a result of alterations to its orbit caused by the bar’s gravitational pull. The other, symmetric arm on the far side of the galaxy was recently found in gas data. The discovery of this second arm was a great relief for Englmaier: “Finally it is clear that our model assumption of symmetry was correct and the inner galaxy is indeed quite symmetric in structure.” Other scientific groups are already interested in using the new map for their research. A group from France, for example, hopes to use it in their search for dark matter. Mike Krapfl | Newswise Science News
argument (Or "arg") A value or reference passed to a function, procedure, subroutine, command or program by the caller. For example, in the function definition square(x) = x * x, x is the formal argument or "parameter", and in the call y = square(3+4), 3+4 is the actual argument. This will execute the function square with x having the value 7 and return the result 49. There are many different conventions for passing arguments to functions and procedures, including call-by-value, call-by-name, call-by-reference, and call-by-need. These affect whether the value of the argument is computed by the caller or the callee (the function) and whether the callee can modify the value of the argument as seen by the caller (if it is a variable). Arguments to functions are usually, following mathematical notation, written in parentheses after the function name, separated by commas (but see curried function). Arguments to a program are usually given after the command name, separated by spaces, e.g.: cat myfile yourfile hisfile Here "cat" is the command and "myfile", "yourfile", and "hisfile" are the arguments.
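To make the formal/actual distinction and the passing conventions concrete, here is a minimal Python sketch (not part of the original entry). Python's own convention is call-by-object-sharing: mutating an object the caller passed in is visible to the caller, while rebinding the parameter name is not. The function and variable names are illustrative only.

```python
def square(x):            # x is the formal argument (parameter)
    return x * x

result = square(3 + 4)    # 3 + 4 is the actual argument; the caller evaluates it to 7
print(result)             # 49

def append_item(items, value):
    items.append(value)   # mutating a shared object: the caller sees the change
    value = value + 1     # rebinding a local name: invisible to the caller

data = [1, 2]
n = 10
append_item(data, n)
print(data, n)            # [1, 2, 10] 10
```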
In microeconomic theory, the opportunity cost, also known as alternative cost, of making a particular choice is the value of the most valuable choice not taken. When an option is chosen from two mutually exclusive alternatives, the opportunity cost is the "cost" incurred by not enjoying the benefit associated with the alternative choice. The New Oxford American Dictionary defines it as "the loss of potential gain from other alternatives when one alternative is chosen." Opportunity cost is a key concept in economics, and has been described as expressing "the basic relationship between scarcity and choice." The notion of opportunity cost plays a crucial part in attempts to ensure that scarce resources are used efficiently. Opportunity costs are not restricted to monetary or financial costs: the real cost of output forgone, lost time, pleasure or any other benefit that provides utility should also be considered an opportunity cost. The term was first used in 1914 by Austrian economist Friedrich von Wieser in his book Theorie der gesellschaftlichen Wirtschaft (Theory of Social Economy). The idea had been anticipated by previous writers including Benjamin Franklin and Frédéric Bastiat. Franklin coined the phrase "Time is Money", and spelt out the associated opportunity cost reasoning in his “Advice to a Young Tradesman” (1746): “Remember that Time is Money. He that can earn Ten Shillings a Day by his Labour, and goes abroad, or sits idle one half of that Day, tho’ he spends but Sixpence during his Diversion or Idleness, ought not to reckon That the only Expence; he has really spent or rather thrown away Five Shillings besides.” Bastiat's 1848 essay "What Is Seen and What Is Not Seen" used opportunity cost reasoning in his critique of the broken window fallacy, and of what he saw as spurious arguments for public expenditure. Opportunity costs in production Explicit costs are opportunity costs that involve direct monetary payment by producers. The explicit opportunity cost of the factors of production not already owned by a producer is the price that the producer has to pay for them. For instance, if a firm spends $100 on electrical power consumed, its explicit opportunity cost is $100. This cash expenditure represents a lost opportunity to purchase something else with the $100. Implicit costs (also called implied, imputed or notional costs) are the opportunity costs that are not reflected in cash outflow but are implied by the choice of the firm not to allocate its existing (owned) resources, or factors of production, to the best alternative use. For example: a manufacturer has previously purchased 1000 tons of steel and the machinery to produce a widget. The implicit part of the opportunity cost of producing the widget is the revenue lost by not selling the steel and not renting out the machinery instead of using it for production. One example of opportunity cost is in the evaluation of "foreign" (to the US) buyers and their allocation of cash assets in real estate or other types of investment vehicles. During the downturn in the Chinese stock market around June and July 2015, more and more Chinese investors from Hong Kong and Taiwan turned to the United States as an alternative vessel for their investment dollars; the opportunity cost of leaving their money in the Chinese stock market or Chinese real estate market is the yield available in the US real estate market.
Opportunity cost is not the sum of the available alternatives when those alternatives are, in turn, mutually exclusive to each other. It is the highest value option forgone. The opportunity cost of a city's decision to build the hospital on its vacant land is the loss of net income from using the land for a sporting center, or the loss of net income from using the land for a parking lot, or the money the city could have made by selling the land, whichever is greatest. Use for any one of those purposes precludes all the others. If someone loses the opportunity to earn money, that is part of the opportunity cost. If someone chooses to spend money, that money could be used to purchase other goods and services so the spent money is part of the opportunity cost as well. Add the value of the next best alternative and you have the total opportunity cost. If you miss work to go to a concert, your opportunity cost is the money you would have earned if you had gone to work plus the cost of the concert. - Suppose that you have a free ticket to a concert by Band X. The ticket has no resale value. On the night of the concert your next-best alternative entertainment is a performance by Band Y for which the tickets cost $40. You like Band Y and would usually be willing to pay $50 for a ticket to see them. What is the opportunity cost of using your free ticket and seeing Band X instead of Band Y? - The benefit you forgo (that is, the value to you) is $10: the $50 benefit of seeing Band Y minus the ticket cost of $40.
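As a small illustration of the "best single alternative forgone" idea, here is a Python sketch; the land-use figures are invented for the example, and only the $50/$40 concert numbers come from the exercise above.

```python
# A minimal sketch (assumed values, not from the article): the opportunity cost of a
# choice is the net benefit of the single best alternative forgone, not the sum of them.
def opportunity_cost(chosen, net_benefits):
    forgone = [value for option, value in net_benefits.items() if option != chosen]
    return max(forgone)

# Hypothetical net incomes for the city's vacant land (in $ millions).
land_uses = {"hospital": 0.0, "sport center": 1.2, "parking lot": 0.8, "sell land": 2.0}
print(opportunity_cost("hospital", land_uses))   # 2.0 -- best forgone option, not the 4.0 total

# The concert example: the $50 benefit of Band Y minus its $40 ticket price.
print(50 - 40)                                   # 10 -- cost of using the free Band X ticket
```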
- Surface tension: For the work of fiction, see Surface Tension (short story). Surface tension is a property of the surface of a liquid that causes it to behave as an elastic sheet. It allows insects, such as the water strider (pond skater, UK), to walk on water. It allows small objects, even metal ones such as needles, razor blades, or foil fragments, to float on the surface of water, and it is the cause of capillary action. An everyday observation of surface tension is the formation of water droplets on various surfaces or raindrops. The physical and chemical behavior of liquids cannot be understood without taking surface tension into account. It governs the shape that small masses of liquid can assume and the degree of contact a liquid can make with another substance. Applying Newtonian physics to the forces that arise due to surface tension accurately predicts many liquid behaviors that are so commonplace that most people take them for granted. Applying thermodynamics to those same forces further predicts other more subtle liquid behaviors. Surface tension has the dimension of force per unit length, or of energy per unit area. The two are equivalent — but when referring to energy per unit of area people use the term surface energy, which is a more general term in the sense that it applies also to solids and not just liquids. Surface tension is caused by the attraction between the liquid's molecules by various intermolecular forces. In the bulk of the liquid, each molecule is pulled equally in all directions by neighbouring liquid molecules, resulting in a net force of zero. At the surface of the liquid, the molecules are pulled inwards by other molecules deeper inside the liquid and are not attracted as intensely by the molecules in the neighbouring medium (be it vacuum, air or another liquid). Therefore, all of the molecules at the surface are subject to an inward force of molecular attraction which is balanced only by the liquid's resistance to compression, meaning there is no net inward force. However, there is a driving force to diminish the surface area, and in this respect a liquid surface resembles a stretched elastic membrane. Thus the liquid squeezes itself together until it has the locally lowest surface area possible. Another way to view it is that a molecule in contact with a neighbour is in a lower state of energy than if it wasn't in contact with a neighbour. The interior molecules all have as many neighbours as they can possibly have. But the boundary molecules have fewer neighbours than interior molecules and are therefore in a higher state of energy. For the liquid to minimize its energy state, it must minimize its number of boundary molecules and must therefore minimize its surface area (White, Harvey E., Modern College Physics, van Nostrand, 1948). As a result of surface area minimization, a surface will assume the smoothest shape it can (mathematical proof that "smooth" shapes minimize surface area relies on use of the Euler–Lagrange equation). Since any curvature in the surface shape results in greater area, a higher energy will also result. Consequently the surface will push back against any curvature in much the same way as a ball pushed uphill will push back to minimize its gravitational potential energy. Effects in everyday life Where the two surfaces meet, they form a contact angle, θ, which is the angle the tangent to the surface makes with the solid surface. The diagram to the right shows two examples.
Tension forces are shown for the liquid-air interface, the liquid-solid interface, and the solid-air interface. The example on the left is where the difference between the liquid-solid and solid-air surface tension, γ_ls − γ_sa, is less than the liquid-air surface tension, γ_la, but is nevertheless positive, that is 0 < γ_ls − γ_sa < γ_la. The force balance along the solid surface at the contact line gives γ_ls − γ_sa = −γ_la cos θ, where: * γ_ls is the liquid-solid surface tension, * γ_la is the liquid-air surface tension, * γ_sa is the solid-air surface tension, * θ is the contact angle, where a concave meniscus has contact angle less than 90° and a convex meniscus has contact angle of greater than 90° (Sears, Francis Weston; Zemansky, Mark W., University Physics, 2nd ed., Addison Wesley, 1955). This means that although the difference between the liquid-solid and solid-air surface tension, γ_ls − γ_sa, is difficult to measure directly, it can be inferred from the easily measured contact angle, θ, if the liquid-air surface tension, γ_la, is known. This same relationship exists in the diagram on the right. But in this case we see that because the contact angle is less than 90°, the liquid-solid/solid-air surface tension difference must be negative: γ_ls − γ_sa = −γ_la cos θ < 0. Special contact angles Observe that in the special case of a water-silver interface where the contact angle is equal to 90°, the liquid-solid/solid-air surface tension difference is exactly zero. Another special case is where the contact angle is exactly 180°. Water with specially prepared Teflon approaches this. A contact angle of 180° occurs when the liquid-solid/solid-air surface tension difference is exactly equal to the liquid-air surface tension. Methods of measurement Because surface tension manifests itself in various effects, it offers a number of paths to its measurement. Which method is optimum depends upon the nature of the liquid being measured, the conditions under which its tension is to be measured, and the stability of its surface when it is deformed. * Du Noüy Ring method: The traditional method used to measure surface or interfacial tension. Wetting properties of the surface or interface have little influence on this measuring technique. Maximum pull exerted on the ring by the surface is measured. * A miniaturized version of the Du Noüy method uses a small diameter metal needle instead of a ring, in combination with a high sensitivity microbalance to record maximum pull. The advantage of this method is that very small sample volumes (down to a few tens of microliters) can be measured with very high precision, without the need to correct for buoyancy (for a needle or rather, rod, with proper geometry). Further, the measurement can be performed very quickly, minimally in about 20 seconds. First commercial multichannel tensiometers [CMCeeker] were recently built based on this principle. * Wilhelmy plate method: A universal method especially suited to check surface tension over long time intervals. A vertical plate of known perimeter is attached to a balance, and the force due to wetting is measured. * Spinning drop method: This technique is ideal for measuring low interfacial tensions. The diameter of a drop within a heavy phase is measured while both are rotated. * Pendant drop method: Surface and interfacial tension can be measured by this technique, even at elevated temperatures and pressures. Geometry of a drop is analyzed optically. For details, see Drop.
* Bubble pressure method (Jaeger's method): A measurement technique for determining surface tension at short surface ages. Maximum pressure of each bubble is measured. * Drop volume method: A method for determining interfacial tension as a function of interface age. Liquid of one density is pumped into a second liquid of a different density and the time between drops produced is measured. * Capillary rise method: The end of a capillary is immersed into the solution. The height at which the solution reaches inside the capillary is related to the surface tension by the equation discussed below. * Stalagmometric method: A method of weighing and reading a drop of liquid. * Sessile drop method: A method for determining surface tension and density by placing a drop on a substrate and measuring the contact angle (see Sessile drop technique). Liquid in a vertical tube An old style mercury barometer consists of a vertical glass tube about 1 cm in diameter partially filled with mercury, and with a vacuum (called Torricelli's vacuum) in the unfilled volume (see diagram to the right). Notice that the mercury level at the center of the tube is higher than at the edges, making the upper surface of the mercury dome-shaped. The center of mass of the entire column of mercury would be slightly lower if the top surface of the mercury were flat over the entire cross-section of the tube. But the dome-shaped top gives slightly less surface area to the entire mass of mercury. Again the two effects combine to minimize the total potential energy. Such a surface shape is known as a convex meniscus. The reason we consider the surface area of the entire mass of mercury, including the part of the surface that is in contact with the glass, is because mercury does not adhere at all to glass. So the surface tension of the mercury acts over its entire surface area, including where it is in contact with the glass. If instead of glass, the tube were made out of copper, the situation would be very different. Mercury aggressively adheres to copper. So in a copper tube, the level of mercury at the center of the tube will be lower rather than higher than at the edges (that is, it would be a concave meniscus). In a situation where the liquid adheres to the walls of its container, we consider the part of the fluid's surface area that is in contact with the container to have "negative" surface tension. The fluid then works to maximize the contact surface area. So in this case increasing the area in contact with the container decreases rather than increases the potential energy. That decrease is enough to compensate for the increased potential energy associated with lifting the fluid near the walls of the container. If a tube is sufficiently narrow and the liquid adhesion to its walls is sufficiently strong, surface tension can draw liquid up the tube in a phenomenon known as capillary action.
The height the column is lifted to is given by h = 2 γ_la cos θ / (ρ g r), where: * h is the height the liquid is lifted, * γ_la is the liquid-air surface tension, * ρ is the density of the liquid, * r is the radius of the capillary, * g is the acceleration due to gravity, * θ is the angle of contact described above. Note that if θ is greater than 90°, as with mercury in a glass container, the liquid will be depressed rather than lifted. Puddles on a surface [Figure: profile curve of the edge of a puddle where the contact angle is 180°.] Pouring mercury onto a horizontal flat sheet of glass results in a puddle that has a perceptible thickness. (Do not try this except under a fume hood. Mercury vapor is a toxic hazard.) The puddle will spread out only to the point where it is a little under half a centimeter thick, and no thinner. Again this is due to the action of mercury's strong surface tension. The liquid mass flattens out because that brings as much of the mercury to as low a level as possible. But the surface tension, at the same time, is acting to reduce the total surface area. The result is the compromise of a puddle of a nearly fixed thickness. The same surface tension demonstration can be done with water, but only on a surface made of a substance that the water does not adhere to. Wax is such a substance. Water poured onto a smooth, flat, horizontal wax surface, say a waxed sheet of glass, will behave similarly to the mercury poured onto glass. The thickness of a puddle of liquid on a surface whose contact angle is 180° is given by h = 2 √(γ_la / (ρ g)) (Pierre-Gilles de Gennes, Françoise Brochard-Wyart, David Quéré, Capillary and Wetting Phenomena – Drops, Bubbles, Pearls, Waves, Springer, 2002). Capillary wave – short waves on a water surface, governed by surface tension and inertia. Cheerio effect – the tendency for small wettable floating objects to attract one another. Dortmund Data Bank – contains experimental temperature-dependent surface tensions. Eötvös rule – a rule for predicting surface tension dependent on temperature. Hydrostatic equilibrium – the effect of gravity pulling matter into a round shape. Meniscus – surface curvature formed by a liquid in a container. Mercury beating heart – a consequence of inhomogeneous surface tension. Specific surface energy – same as surface tension in isotropic materials. Surface tension values. Sessile drop technique. Surfactants – substances which reduce surface tension. Tears of wine – the surface tension induced phenomenon seen on the sides of glasses containing alcoholic beverages. Tolman length – leading term in correcting the surface tension for curved surfaces. James Blish, author of the short story "Surface Tension" (1957). * [http://www.ramehart.com/goniometers/surface_tension.htm Concise overview of surface tension] * [http://hyperphysics.phy-astr.gsu.edu/hbase/surten.html On surface tension and interesting real-world cases] * [http://web.mit.edu/1.63/www/Lec-notes/Surfacetension/ MIT Lecture Notes on Surface Tension] * [http://www.kruss.info/techniques/surface_tension_e.html Theory of surface tension measurements] * [http://www.kayelaby.npl.co.uk/general_physics/2_2/2_2_5.html Surface Tensions of Various Liquids] * [http://www.scientistlive.com/elab/20061201/analyticallab-equipment/2.1.282.286/16974/understanding-the-interaction-between-gases-and-liquids.thtml Understanding the interaction between gases and liquids] Scientist Live
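As a quick numerical illustration of the capillary-rise relation given above, here is a short Python sketch. The values used are typical handbook figures for water in a clean glass capillary at room temperature, assumed for this example rather than taken from the article.

```python
import math

# Capillary rise: h = 2 * gamma * cos(theta) / (rho * g * r).
# Typical values for clean water in glass at ~20 C (assumed for illustration).
gamma = 0.0728            # liquid-air surface tension of water, N/m
rho = 1000.0              # density of water, kg/m^3
g = 9.81                  # acceleration due to gravity, m/s^2
r = 0.0005                # capillary radius, m (0.5 mm)
theta = math.radians(0)   # contact angle; close to 0 when water wets clean glass

h = 2 * gamma * math.cos(theta) / (rho * g * r)
print(f"rise height: {h * 1000:.1f} mm")   # roughly 30 mm for these values
```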
INTRODUCTION TO FREQUENCY DISTRIBUTIONS A frequency distribution is a tabulation which shows the number of times (i.e. the frequency) each different value occurs. Refer back to Study Unit 2 and make sure you understand the difference between “attributes” (or qualitative variables) and “variables” (or quantitative variables); the term “frequency distribution” is usually confined to the case of variables. PREPARATION OF FREQUENCY Simple Frequency Distribution A useful way of preparing a frequency distribution from raw data is to go through the records as they stand and mark off the items by the “tally mark” or “five-bar gate” method. First look at the figures to see the highest and lowest values so as to decide the range to be covered and then prepare a blank table. Grouped Frequency Distribution Sometimes the data is so extensive that a simple frequency distribution is too cumbersome and, perhaps, uninformative. Then we make use of a “grouped frequency distribution”. Choice of Class Interval When compiling a frequency distribution you should, if possible, make the length of the class interval equal for all classes so that fair comparison can be made between one class and another. Sometimes, however, this rule has to be broken (official publications often lump together the last few classes into one so as to save paper and printing costs) and then, before we use the information, it is as well to make the classes comparable by calculating a column showing “frequency per interval of so much”, as in this example for some wage statistics: RELATIVE FREQUENCY DISTRIBUTIONS All the frequency distributions which we have looked at so far in this study unit have had their class frequencies expressed simply as numbers of items. However, remember that proportions or percentages are useful secondary statistics. When the frequency in each class of a frequency distribution is given as a proportion or percentage of the total frequency, the result is known as a “relative frequency distribution” and the separate proportions or percentages are the “relative frequencies”. The total relative frequency is, of course, always 1.0 (or 100%). Cumulative relative frequency distributions may be compiled in the same way as ordinary cumulative frequency distributions GRAPHICAL REPRESENTATION OF FREQUENCY DISTRIBUTIONS Tabulated frequency distributions are sometimes more readily understood if represented by a diagram. Graphs and charts are normally much superior to tables (especially lengthy complex tables) for showing general states and trends, but they cannot usually be used for accurate analysis of data. The methods of presenting frequency distributions graphically are as follows: - Frequency dot diagram - Frequency bar chart - Frequency polygon – Histogram – We will now examine each of these in turn. Frequency Dot Diagram This is a simple form of graphical representation for the frequency distribution of a discrete variate. A horizontal scale is used for the variate and a vertical scale for the frequency. Above each value on the variate scale we mark a dot for each occasion on which that value occurs. Thus, a frequency dot diagram of the distribution of times taken to complete a given task, which we have used in this study unit, would look like Figure 4.1. Frequency Bar Chart We can avoid the business of marking every dot in such a diagram by drawing instead a vertical line the length of which represents the number of dots which should be there. 
The frequency dot diagram in Figure 4.1 now becomes a frequency bar chart, as in Figure 4.2. Instead of drawing vertical bars as we do for a frequency bar chart, we could merely mark the position of the top end of each bar and then join up these points with straight lines. When we do this, the result is a frequency polygon, as in Figure 4.3. Note that we have added two fictitious classes at each end of the distribution, i.e. we have marked in groups with zero frequency at 3.3 and 4.0. This is done to ensure that the area enclosed by the polygon and the horizontal axis is the same as the area under the corresponding histogram which we shall consider in the next section. These three kinds of diagram are all commonly used as a means of making frequency distributions more readily comprehensible. They are mostly used in those cases where the variate is discrete and where the values are not grouped. Sometimes frequency bar charts and polygons are used with grouped data by drawing the vertical line (or marking its top end) at the centre point of the group. This is the best way of graphing a grouped frequency distribution. It is of great practical importance and is also a favourite topic among examiners. Refer back now to the grouped distribution given earlier in Table 4.4 (ages of office workers) and then study Figure 4.5. We call this kind of diagram a “histogram”. The frequency in each group is represented by a rectangle and – this is a very important point – it is the AREA of the rectangle, not its height, which represents the frequency. When the lengths of the class intervals are all equal, then the heights of the rectangles represent the frequencies in the same way as do the areas (this is why the vertical scale has been marked in this diagram); if, however, the lengths of the class intervals are not all equal, you must remember that the heights of the rectangles have to be adjusted to give the correct areas. Do not stop at this point if you have not quite grasped the idea, because it will become clearer as you read on. Look once again at the histogram of ages given in Figure 4.5 and note particularly how it illustrates the fact that the frequency falls off towards the higher age groups – any form of graph which did not reveal this fact would be misleading. Now let us imagine that the original table had NOT used equal class intervals but, for some reason or other, had given the last few groups as: The last two groups have been lumped together as one. A WRONG form of histogram, using heights instead of areas, would look like Figure 4.6. Now, this clearly gives an entirely wrong impression of the distribution with respect to the higher age groups. In the correct form of the histogram, the height of the last group (50-60) would be halved because the class interval is double all the other class intervals. The histogram in Figure 4.7 gives the right impression of the falling off of frequency in the higher age groups. I have labelled the vertical axis “Frequency density per 5-year interval” as five years is the “standard” interval on which we have based the heights of our rectangles. Often it happens, in published statistics, that the last group in a frequency table is not completely specified. The last few groups may look as in Table 4.9: How do we draw the last group on the histogram? If the last group has a very small frequency compared with the total frequency (say, less than about 1% or 2%) then nothing much is lost by leaving it off the histogram altogether. 
If the last group has a larger frequency than about 1% or 2%, then you should try to judge from the general shape of the histogram how many class intervals to spread the last frequency over in order not to create a false impression of the extent of the distribution. In the example given, you would probably spread the last 30 people over two or three class intervals but it is often simpler to assume that an open-ended class has the same length as its neighbour. Whatever procedure you adopt, the important thing in an examination paper is to state clearly what you have done and why. A distribution of the kind we have just discussed is called an “openended” distribution. This is the name given to the graph of the cumulative frequency. It can be drawn in either the “less than” or the “or more” form, but the “less than” form is the usual one. Ogives for two of the distributions already considered in this study unit are now given as examples; Figure 4.8 is for ungrouped data and Figure 4.9 is for grouped data. Study these two diagrams so that you are quite sure that you know how to draw them. There is only one point which you might be tempted to overlook in the case of the grouped distribution – the points are plotted at the ends of the class intervals and NOT at the centre point. Look at the example and see how the 168,000 is plotted against the upper end of the 56-60 group and not against the mid-point, 58. If we had been plotting an “or more” ogive, the plotting would have to have been against the lower end of the group. This is the simplest method of presenting information visually. These diagrams are variously called “pictograms”, “ideograms”, “picturegrams” or “isotypes” – the words all refer to the same thing. Their use is confined to the simplified presentation of statistical data for the general public. Pictograms consist of simple pictures which represent quantities These diagrams, known also as circular diagrams, are used to show the manner in which various components add up to a total. Like pictograms, they are only used to display very simple information to non-expert readers. They are popular in computer graphics. We have already met one kind of bar chart in the course of our studies of frequency distributions, namely the frequency bar chart. A “bar” is simply another name for a thick line. In a frequency bar chart the bars represent, by their length, the frequencies of different values of the variate. The idea of a bar chart can, however, be extended beyond the field of frequency distributions, and we will now illustrate a number of the types of bar chart in common use. I say “illustrate” because there are no rigid and fixed types, but only general ideas which are best studied by means of examples. You can supplement the examples in this study unit by looking at the commercial pages of newspapers and magazines. Note that the lengths of the components represent the amounts, and that the components are drawn in the same order so as to facilitate comparison. These bar charts are preferable to circular diagrams because: - They are easily read, even when there are many components. - They are more easily drawn. - It is easier to compare several bars side by side than several circles. Horizontal Bar Chart A typical case of presentation by a horizontal bar chart is shown in Figure 4.17. Note how a loss is shown by drawing the bar on the other side of the zero line. Pie charts and bar charts are especially useful for “categorical” variables as well as for numerical variables. 
The example in Figure 4.17 shows a categorical variable, i.e. the different branches form the different categories, whereas in Figure 4.15 we have a numerical variable, namely, time. Figure 4.17 is also an example of a multiple or compound bar chart as there is more than one bar for each category. GENERAL RULES FOR GRAPHICAL PRESENTATION There are a number of general rules which must be borne in mind when planning and using graphical methods: - Graphs and charts must be given clear but brief titles. - The axes of graphs must be clearly labelled, and the scales of values clearly marked. - Diagrams should be accompanied by the original data, or at least by a reference to the source of the data. - Avoid excessive detail, as this defeats the object of diagrams. - Wherever necessary, guidelines should be inserted to facilitate reading. - Try to include the origins of scales. Obeying this rule sometimes leads to rather a waste of paper space. In such a case the graph could be “broken” as shown in Figure 4.18, but take care not to distort the graph by over-emphasising small variations. THE LORENZ CURVE One of the problems which frequently confronts the statistician working in economics or industry is that of CONCENTRATION Although usually used to show the concentration of wealth (incomes, property ownership, etc.), Lorenz curves can also be employed to show concentration of any other feature. For example, the largest proportion of a country’s output of a particular commodity may be produced by only a small proportion of the total number of factories, and this fact can be illustrated by a Lorenz curve. Concentration of wealth or productivity, etc. may become more or less as time goes on. A series of Lorenz curves on one graph will show up such a state of affairs. In some countries, in recent years, there has been a tendency for incomes to be more equally distributed. A Lorenz curve reveals this because the curves for successive years lie nearer to the straight diagonal.
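As a rough sketch of how the cumulative shares behind a Lorenz curve are computed, consider the short Python example below; the income figures are invented for illustration, not taken from this study unit.

```python
# Lorenz-curve idea: plot the cumulative share of total income earned against the
# cumulative share of earners (incomes sorted from poorest to richest).
incomes = [12, 15, 18, 22, 30, 45, 60, 90, 150, 400]   # hypothetical incomes, ascending

total = sum(incomes)
cum_pop = [(i + 1) / len(incomes) for i in range(len(incomes))]
cum_income = []
running = 0
for x in incomes:
    running += x
    cum_income.append(running / total)

# Perfect equality would make cum_income equal to cum_pop (the straight diagonal);
# the further the curve sags below the diagonal, the greater the concentration.
for p, q in zip(cum_pop, cum_income):
    print(f"{p:5.0%} of people earn {q:5.0%} of total income")
```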
Fill in the blank. (4 marks) (1) There are _____ main types of motion. (2) An accelerated body is one whose velocity is _____ . (3) The unit cm s⁻¹ is the unit of _____ . (4) If a body starts from rest, its initial velocity is _____ . (5) The units of speed and velocity are the _____ . (6) If the starting point and the end point are the same, the displacement is _____ . (7) A body with decreasing positive velocity has a _____ acceleration. (8) _____ is the magnitude of the velocity. (9) If a body is moving with constant velocity, its acceleration is _____ . (10) _____ is the rate of change of velocity. (11) If the velocity of a body is decreasing, it has _____ acceleration. (12) The equation v̄ = (v₀ + v)/2 can be used when the body is moving with _____ acceleration. (13) Displacement does not depend on the _____ travelled. (14) Speed is a _____ quantity. (15) Velocity has both _____ and direction. (16) The rate of change of distance travelled is called _____ . (17) Uniform motion is motion with _____ velocity. (18) Any change in velocity gives rise to _____ . (19) Motion with changing velocity is called _____ motion. (20) The change in position along a certain direction is _____ . (21) The slope of the distance-time graph gives _____ . (22) The greater the slope of a straight-line distance-time graph, the _____ is the speed. (23) The slope of the displacement-time graph gives _____ . (24) The slope of the velocity-time graph gives _____ . (25) The area under the speed-time graph gives _____ . 1. two 2. changing 3. acceleration 4. zero 5. same 6. zero 7. negative 8. speed 9. zero 10. acceleration 11. negative 12. constant 13. path 14. scalar 15. magnitude 16. speed 17. constant 18. acceleration 19. accelerated 20. displacement 21. speed 22. greater 23. velocity 24. acceleration 25. distance Are the following statements True (or) False? (1) There are two types of motion. (2) Uniform motion is motion at constant speed. (3) Average velocity is a vector quantity. (4) Acceleration is a scalar quantity. (5) Displacement is the rate of change of velocity. (6) Acceleration has only magnitude. (7) If the speed changes, there will be acceleration. (8) The motion at constant acceleration is the uniform motion. (9) The SI unit of acceleration is cm s⁻¹. (10) Motion along a straight line with constant acceleration is linear motion. (11) When a body is in motion there is a change in position. (12) The change in position along a certain direction is called distance. (13) The velocity gives the speed and the direction of the body. (14) Displacement has only magnitude. (15) Speed is a vector quantity. (16) Average velocity has only direction. (17) Acceleration is the rate of change of speed. (18) Average speed and average velocity have the same unit. (19) Speed may not be measured in m s⁻². (20) Although speed changes, there is no acceleration. (21) The speed of a body tells us how far it travels during every unit of time. (22) The slope of the displacement-time graph gives the acceleration of the body. (23) If the speed changes, the velocity also changes. (24) The units of sp
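As a worked illustration of the constant-acceleration relations used in the exercises above, here is a small Python check; the numbers are made up for the example.

```python
# Constant-acceleration relations: v_bar = (v0 + v) / 2, a = (v - v0) / t, s = v_bar * t.
v0 = 2.0    # initial velocity, m/s
v = 10.0    # final velocity, m/s
t = 4.0     # time taken, s

v_bar = (v0 + v) / 2    # average velocity for uniform acceleration
a = (v - v0) / t        # acceleration: rate of change of velocity
s = v_bar * t           # displacement: area under the velocity-time graph

print(v_bar, a, s)      # 6.0 m/s, 2.0 m/s^2, 24.0 m
```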
In mathematics, a rate is the ratio between two related quantities. If the denominator of the ratio is expressed as a single unit of one of these quantities, and if it is assumed that this quantity can be changed systematically (i.e., is an independent variable), then the numerator of the ratio expresses the corresponding rate of change in the other (dependent) variable. The most common type of rate is "per unit of time", such as speed, heart rate and flux. Ratios that have a non-time denominator include exchange rates, literacy rates and electric field (in volts/meter). In describing the units of a rate, the word "per" is used to separate the units of the two measurements used to calculate the rate (for example, a heart rate is expressed in "beats per minute"). A rate defined using two numbers of the same units (such as tax rates) or counts (such as literacy rate) will result in a dimensionless quantity, which can be expressed as a percentage (for example, the global literacy rate in 1998 was 80%), as a fraction, or as a multiple.

Rates and ratios often vary with time, location, particular element (or subset) of a set of objects, etc. Thus they are often mathematical functions. For example, velocity v (distance traveled per unit time) of a transportation vehicle on a certain trip may be represented as a function of x (the distance traveled from the start of the trip) as v(x). Alternatively, one could express velocity as a function of time t from the start of the trip as v(t). Another representation of velocity on a trip is to partition the trip route into N segments and let v_i be the constant velocity on segment i (v is a function of the index i). Here each segment i of the trip is a subset of the trip route. A rate (or ratio) may often be thought of as an output-input ratio or benefit-cost ratio, all considered in the broad sense. For example, miles per hour in transportation is the output (or benefit) in terms of miles of travel, which one gets from spending an hour (a cost in time) of traveling (at this velocity). A set of sequential indices i may be used to enumerate elements (or subsets) of a set of ratios under study. For example, in finance, one could define i by assigning consecutive integers to companies, to political subdivisions (such as states), to different investments, etc. The reason for using indices i is so that a set of ratios (i = 0, ..., N) can be used in an equation to calculate a function of the rates, such as an average of a set of ratios; for example, the average velocity found from the set of v_i's mentioned above. Finding averages may involve using weighted averages and possibly using the harmonic mean.

Rate of change
Consider the case where the numerator f(a) of a rate is a function of a, where a happens to be the denominator of the rate f(a)/a. A rate of change of f with respect to a (where a is incremented by h) can be formally defined in two ways:

\[ \text{average rate of change} = \frac{f(a+h) - f(a)}{h} \]

\[ \text{instantaneous rate of change} = \lim_{h \to 0} \frac{f(a+h) - f(a)}{h} \]

where f(x) is the value of the function at x, taken over the interval from a to a + h. An instantaneous rate of change is equivalent to a derivative. An example contrasting the average and instantaneous definitions is the speed of a car, which can be calculated in two ways:
- An average rate can be calculated using the total distance travelled between a and b, divided by the travel time.
- An instantaneous rate can be determined by viewing a speedometer.
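A minimal numerical sketch of the two definitions (the position function here is invented for illustration): the average rate over an interval is a difference quotient, and shrinking the interval approaches the instantaneous rate, i.e. the derivative.

```python
# Average vs. instantaneous rate of change for an illustrative position function.
def position(t):
    return 3 * t**2 + 2 * t   # metres at time t seconds (invented example)

a = 2.0
for h in (1.0, 0.1, 0.001):
    avg = (position(a + h) - position(a)) / h
    print(f"h = {h:>6}: average rate = {avg:.4f} m/s")

# The exact derivative 6t + 2 gives the instantaneous rate at t = 2:
print("instantaneous rate =", 6 * a + 2, "m/s")   # 14.0 m/s
```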
However, these two formulas do not directly apply where either the range or the domain of f is a set of integers, or where there is no given formula (function) for finding the numerator of the ratio from its denominator.

In chemistry and physics:
- Speed, being the distance covered per unit of time; e.g., miles per hour and meters per second
- Acceleration, the rate of change in speed, or the change in speed per unit of time
- Reaction rate, the speed at which chemical reactions occur
- Volumetric flow rate, the volume of fluid which passes through a given surface per unit of time; e.g., cubic meters per second
- Radioactive decay, the amount of radioactive material in which one nucleus decays per second, measured in becquerels

In computing:
- Bit rate, the number of bits that are conveyed or processed by a computer per unit of time
- Symbol rate, the number of symbol changes (signalling events) made to the transmission medium per second
- Sampling rate, the number of samples (signal measurements) per second

In other fields:
- Rate of reinforcement, number of reinforcements per unit of time, usually per minute
- Heart rate, usually measured in beats per minute
- Exchange rate, how much one currency is worth in terms of the other
- Inflation rate, ratio of the change in the general price level during a year to the starting price level
- Interest rate, the price a borrower pays for the use of money they do not own (ratio of payment to amount borrowed)
- Price–earnings ratio, market price per share of stock divided by annual earnings per share
- Rate of return, the ratio of money gained or lost on an investment relative to the amount of money invested
- Tax rate, the tax amount divided by the taxable income
- Unemployment rate, the ratio of the number of people who are unemployed to the number in the labor force
- Wage rate, the amount paid for working a given amount of time (or doing a standard amount of accomplished work) (ratio of payment to time)
- Birth rate and mortality rate, the number of births or deaths scaled to the size of that population, per unit of time
- Literacy rate, the proportion of the population over age fifteen that can read and write
- Sex ratio or gender ratio, the ratio of males to females in a population
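Rounding off the discussion above: when equal distances are covered at different speeds, the correct average is the harmonic mean, as mentioned earlier. A minimal sketch (speeds and distances invented for illustration):

```python
# Average speed over two equal-distance legs is the harmonic mean of the speeds.
from statistics import harmonic_mean

v1, v2 = 60.0, 30.0   # km/h on two legs of equal distance (illustrative)
d = 10.0              # km per leg

total_time = d / v1 + d / v2
print(2 * d / total_time)        # 40.0 km/h, not the arithmetic mean 45
print(harmonic_mean([v1, v2]))   # 40.0 -- matches
```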
For years, researchers have been looking into the far reaches of space for the universe's oldest objects. A team of Caltech researchers believes it has detected the most distant galaxy ever. In an article published in Astrophysical Journal Letters, the researchers describe a galaxy known as EGS8p7. This galaxy is more than 13.2 billion years old. EGS8p7 was on researchers' radar after it was observed by NASA's Hubble Space Telescope and the Spitzer Space Telescope. Using the MOSFIRE spectrometer at the W.M. Keck Observatory in Hawaii, researchers took another look at the galaxy. A spectrographic analysis of the galaxy was performed to determine its redshift.

Redshift and the Doppler effect
We all see examples of the Doppler effect every day. You know how an ambulance siren sounds higher in pitch when it is coming towards you than after it passes? That's the Doppler effect. The frequency of a sound wave changes depending on where you are in relation to the object producing the sound. Redshift is similar, but instead of sound, it's light. Light is shifted from its actual color to redder wavelengths. With redshift, the light source is moving away from you. The opposite of this would be blueshift, a light source moving towards you.

Redshift and Galaxies
Redshift is the typical method used by scientists to measure the distance to galaxies. But finding the most distant galaxy brings its own challenges. I'll let the Caltech press release explain:

Immediately after the Big Bang, the universe was a soup of charged particles (electrons and protons) and light (photons). Because these photons were scattered by free electrons, the early universe could not transmit light. After about 380,000 years, the universe cooled enough for protons and free electrons to combine and form neutral hydrogen atoms. About 500 million years after the Big Bang, the first galaxies switched on and reionized the neutral gas. Detecting galaxies before reionization was believed to be impossible. Clouds of neutral hydrogen atoms would have absorbed certain radiation emitted by the first galaxies. This includes a spectral signature known as the Lyman-alpha line.

"If you look at the galaxies in the early universe, there is a lot of neutral hydrogen that is not transparent to this emission," says Adi Zitrin, a NASA Hubble Postdoctoral Scholar in Astronomy at Caltech. "We expect that most of the radiation from this galaxy would be absorbed by the hydrogen in the intervening space. Yet still we see Lyman-alpha from this galaxy."

The redshift for EGS8p7 came in at 8.68. Before this discovery, the most distant galaxy had a redshift of 7.73. Why can EGS8p7 be seen? Researchers say it could be because hydrogen reionization was not consistent. "Evidence from several observations indicate that the reionization process probably is patchy," Zitrin says. Sirio Belli, a Caltech graduate student who assisted Zitrin on the project, explains how EGS8p7 could have ionized hydrogen earlier than other galaxies. "The galaxy we have observed, EGS8p7, which is unusually luminous, may be powered by a population of unusually hot stars, and it may have special properties that enabled it to create a large bubble of ionized hydrogen much earlier than is possible for more typical galaxies at these times," says Belli. This finding could have huge implications for astronomy. The timeline for reionization may even need to be revised, according to Zitrin.
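To see what a redshift of 8.68 means in practice, here is a small sketch applying the standard textbook relation λ_observed = (1 + z) × λ_emitted (this calculation is mine, not from the article) to the Lyman-alpha line:

```python
# Where the Lyman-alpha line lands for EGS8p7's redshift of z = 8.68.
LYMAN_ALPHA_NM = 121.567   # rest-frame Lyman-alpha wavelength, nm

z = 8.68
observed_nm = (1 + z) * LYMAN_ALPHA_NM
print(f"Observed wavelength: {observed_nm:.0f} nm")   # ~1177 nm
```

At roughly 1180 nm the line falls in the near-infrared, which is consistent with its detection by a near-infrared spectrometer like MOSFIRE.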
The Dictionary for Cambridge Math: What Do Examiners Want? Part I: Wording of Questions

Evaluate: Literally a derivative of the word value, this means your answer must be a number.

Calculate: Usually involves (surprise, surprise) calculations on your part, and for questions worth more than 1 mark, involves showing your working.

Find: Usually short for "find the value of", e.g. find the value of x; find 20% of 700. See evaluate and calculate above.

Total: Total means total, the sum of all parts.

Circumference: The perimeter of a circle.

Perimeter: The marked boundary of any shape. Try not to involve too many formulas in perimeter questions. Trust your gut. If you have been asked to find the perimeter of three sides of a square, it doesn't make sense to use your 4l formula.

Term: Any number of numbers/variables clumped together. a, b, ab, 4a, 25fdjksh are all terms.

In terms of: In terms of x means x must be in your answer. In terms of pi means pi must be in your answer. 3 x pi in terms of pi is 3pi, not 9.425. This sometimes means less work for you, e.g. the area of a circle with radius 7 cm is 153.9 cm², but the area of a circle with radius 7 cm in terms of pi is simply 49pi. Woo! Less work for me.

Write down: These are usually one-mark problems that are either literally written on the page or require a simple inference. Try not to whip out your calculator for this one.

Solve: Similar to calculate, but usually has a variable they want you to find the value of. You are usually solving for something, e.g. solving for x (finding x), solving for y (finding y), etc. If there are no variables, they want a simple answer (some kind of number, e.g. rational, whole, fraction, decimal, whatever).

Given: Never assume something, unless it is given. Sometimes you will see the word "given" on the page. Other times they will write numbers on the page, e.g. angles, lengths, etc. Otherwise, never say "hmm, that looks like a right angle" unless you can infer it. Never.

Give an example: Give an example means give an example, and that's what the marks are for. There's no getting away from it, no matter if you show a page of (irrelevant) working.

State: When you are asked to state something, it does not mean to come up with a constitution for it. (Har-har.) It means to write down what the examiner assumes you already know, e.g. state a principle, a law, a formula, a common equation.

Work out: Play an LMFAO song, and see our note on the word evaluate.

Show that: This means the answer is given to you, but you need to show the working involved to get to that answer, using the information they have given you already.

A general note for Math Paper One: it is a fill-in-the-blanks paper. They have even given you the units for each question. There is no room for silly mistakes, like "whoops, I wrote the wrong units". They are telling you exactly what form your answer should be in.
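To illustrate the "in terms of pi" entry above, here is a tiny sketch using Python's sympy library (assuming it is available) to keep the answer exact rather than decimal:

```python
# "In terms of pi": keep pi symbolic instead of evaluating to a decimal.
import sympy

r = 7
area = sympy.pi * r**2
print(area)           # 49*pi  <- the "in terms of pi" answer
print(area.evalf(4))  # 153.9  <- the decimal answer
```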
The arrival of water on our planet wasn't a last-minute job. Water came to Earth on icy comets after most of the planet and its core were formed, about 4.5 billion years ago, according to a leading theory. But now an analysis of isotopes from meteorites born earlier, when the solar system was formed, seems to imply that the wet stuff got here much sooner. To pin down when meteorites could have delivered Earth's water, Mario Fischer-Gödde and Thorsten Kleine at the University of Münster, Germany, looked at the Tagish Lake meteorites that fell in British Columbia, Canada, in January 2000. They compared the abundance of ruthenium isotopes in these meteorites with the abundance in Earth's mantle. "Meteorites impacted Earth during formation and they can leave signatures," says Katherine Bermingham, at the University of Maryland. "Ruthenium isotopes are stable. That means they can act as fingerprints." If this kind of meteorite brought water to Earth during a late heavy bombardment, then the isotopes inside them should match the isotopes in Earth's mantle. "These isotopes were produced in a stellar environment. Their signature can't be erased by later processes," says Fischer-Gödde. "That's why it's a good tool." But Fischer-Gödde and Kleine found that the ruthenium isotopes in the meteorites were distinctly different from those found in Earth's mantle. "We can exclude a late water delivery," Fischer-Gödde says.

Move the clock back
This doesn't rule out the possibility that meteorites may have brought water to Earth earlier in its formation, during the growth of Earth's core and before the impact that formed the moon, about 30 to 50 million years after the origin of the solar system. Lydia Hallis, a planetary scientist at the University of Glasgow, UK, previously used hydrogen isotope ratios in volcanic basalt rocks to conclude that Earth's water may in fact have been part of the dust cloud out of which the planet condensed. "The conclusions do align with our research, in that they predict Earth's water must have been delivered during accretion, rather than later on," she says. "The ruthenium data suggest comets could not have played a large part in the late addition of material to Earth."

Nature DOI: 10.1038/nature21045
A screw thread, often shortened to thread, is a helical structure used to convert between rotational and linear movement or force. A screw thread is a ridge wrapped around a cylinder or cone in the form of a helix, with the former being called a straight thread and the latter called a tapered thread. A screw thread is the essential feature of the screw as a simple machine and also as a fastener. The mechanical advantage of a screw thread depends on its lead, which is the linear distance the screw travels in one revolution. In most applications, the lead of a screw thread is chosen so that friction is sufficient to prevent linear motion being converted to rotary motion; that is, the screw does not slip even when linear force is applied, so long as no external rotational force is present. This characteristic is essential to the vast majority of its uses. The tightening of a fastener's screw thread is comparable to driving a wedge into a gap until it sticks fast through friction and slight plastic deformation.

Applications
Screw threads have several applications:
- Gear reduction via worm drives
- Moving objects linearly by converting rotary motion to linear motion, as in the leadscrew of a jack
- Measuring by correlating linear motion to rotary motion (and simultaneously amplifying it), as in a micrometer
- Both moving objects linearly and simultaneously measuring the movement, combining the two aforementioned functions, as in the leadscrew of a lathe

In all of these applications, the screw thread has two main functions:
- It converts rotary motion into linear motion.
- It prevents linear motion without the corresponding rotation.

Gender
Every matched pair of threads, external and internal, can be described as male and female. For example, a screw has male threads, while its matching hole (whether in nut or substrate) has female threads. This property is called gender.

Handedness
The helix of a thread can twist in two possible directions, which is known as handedness. Most threads are oriented so that the threaded item, when seen from a point of view on the axis through the center of the helix, moves away from the viewer when it is turned in a clockwise direction, and moves towards the viewer when it is turned counterclockwise. This is known as a right-handed (RH) thread, because it follows the right-hand grip rule. Threads oriented in the opposite direction are known as left-handed (LH). By common convention, right-handedness is the default handedness for screw threads. Therefore, most threaded parts and fasteners have right-handed threads. Left-handed thread applications include:
- Where the rotation of a shaft would cause a conventional right-handed nut to loosen rather than to tighten due to fretting-induced precession; examples include the left-hand pedal on a bicycle.
- In combination with right-hand threads in turnbuckles and clamping studs.
- In some gas supply connections to prevent dangerous misconnections; for example, in gas welding the flammable gas supply uses left-handed threads.
- In a situation where neither threaded pipe end can be rotated to tighten/loosen the joint, e.g. in traditional heating pipes running through multiple rooms in a building.
In such a case, the coupling will have one right-handed and one left-handed thread.
- In some instances, for example early ballpoint pens, to provide a "secret" method of disassembly.
- In mechanisms, to give a more intuitive action.
- Some Edison base lamps and fittings (such as those formerly used on the New York City Subway) have a left-hand thread to deter theft, since they cannot be used in other light fixtures.

The term chirality comes from the Greek word for "hand" and concerns handedness in many other contexts.

Form
The cross-sectional shape of a thread is often called its form or threadform (also spelled thread form). It may be square, triangular, trapezoidal, or other shapes. The terms form and threadform sometimes refer to all design aspects taken together (cross-sectional shape, pitch, and diameters). Most triangular threadforms are based on an isosceles triangle. These are usually called V-threads or vee-threads because of the shape of the letter V. For 60° V-threads, the isosceles triangle is, more specifically, equilateral. For buttress threads, the triangle is scalene. The theoretical triangle is usually truncated to varying degrees (that is, the tip of the triangle is cut short). A V-thread in which there is no truncation (or a minuscule amount considered negligible) is called a sharp V-thread. Truncation occurs (and is codified in standards) for practical reasons:
- The thread-cutting or thread-forming tool cannot practically have a perfectly sharp point; at some level of magnification, the point is truncated, even if the truncation is very small.
- Too-small truncation is undesirable anyway, because:
  - The cutting or forming tool's edge will break too easily;
  - The part or fastener's thread crests will have burrs upon cutting, and will be too susceptible to additional future burring resulting from dents (nicks);
  - The roots and crests of mating male and female threads need clearance to ensure that the sloped sides of the V meet properly despite (a) error in pitch diameter and (b) dirt and nick-induced burrs.
- The point of the threadform adds little strength to the thread.

Ball screws, whose male-female pairs involve bearing balls in between, show that other variations of form are possible. Roller screws use conventional thread forms but introduce an interesting twist on the theme.

Angle
The angle characteristic of the cross-sectional shape is often called the thread angle. For most V-threads, this is standardized as 60 degrees, but any angle can be used.

Lead, pitch, and starts
Lead and pitch are closely related concepts. They can be confused because they are the same for most screws. Lead is the distance along the screw's axis that is covered by one complete rotation of the screw (360°). Pitch is the distance from the crest of one thread to the next. Because the vast majority of screw threadforms are single-start threadforms, their lead and pitch are the same. Single-start means that there is only one "ridge" wrapped around the cylinder of the screw's body. Each time that the screw's body rotates one turn (360°), it has advanced axially by the width of one ridge. "Double-start" means that there are two "ridges" wrapped around the cylinder of the screw's body. Each time that the screw's body rotates one turn (360°), it has advanced axially by the width of two ridges.
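A tiny sketch of the single- versus double-start behaviour just described; the general relation (lead = pitch × number of starts) is developed in the next paragraphs, and the thread sizes here are taken from the examples below:

```python
# Axial advance per turn: lead = pitch * number of starts.
def lead(pitch, starts=1):
    return pitch * starts

# 1/4-20 UNC: 20 threads per inch, single start -> pitch = lead = 0.050 in.
pitch_in = 1 / 20
print(f"1/4-20 UNC: pitch {pitch_in:.3f} in, lead {lead(pitch_in):.3f} in")

# M10 coarse: 1.5 mm pitch, single start.
print(f"M10 coarse: pitch 1.500 mm, lead {lead(1.5):.3f} mm")

# A double-start thread of the same pitch advances twice as far per turn.
print(f"double-start, 1.5 mm pitch: lead {lead(1.5, starts=2):.3f} mm")
```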
Another way to express this is that lead and pitch are parametrically related, and the parameter that relates them, the number of starts, very often has a value of 1, in which case their relationship becomes equality. In general, lead is equal to pitch times the number of starts. Whereas metric threads are usually defined by their pitch, that is, how much distance per thread, inch-based standards usually use the reverse logic, that is, how many threads occur per a given distance. Thus inch-based threads are defined in terms of threads per inch (TPI). Pitch and TPI describe the same underlying physical property, merely in different terms. When the inch is used as the unit of measurement for pitch, TPI is the reciprocal of pitch and vice versa. For example, a 1⁄4-20 thread has 20 TPI, which means that its pitch is 1⁄20 inch (0.050 in or 1.27 mm). As the distance from the crest of one thread to the next, pitch can be compared to the wavelength of a wave. Another wave analogy is that pitch and TPI are inverses of each other in a similar way that period and frequency are inverses of each other.

Coarse versus fine
Coarse threads are those with larger pitch (fewer threads per axial distance), and fine threads are those with smaller pitch (more threads per axial distance). Coarse threads have a larger threadform relative to screw diameter, whereas fine threads have a smaller threadform relative to screw diameter. This distinction is analogous to that between coarse teeth and fine teeth on a saw or file, or between coarse grit and fine grit on sandpaper. The common V-thread standards (ISO 261 and Unified Thread Standard) include a coarse pitch and a fine pitch for each major diameter. For example, 1⁄2-13 belongs to the UNC series (Unified National Coarse) and 1⁄2-20 belongs to the UNF series (Unified National Fine). Similarly, ISO 261 M10 (10 mm (394 thou) nominal outer diameter) has a coarse thread version at 1.5 mm (59 thou) pitch and a fine thread version at 1.25 mm (49 thou) pitch. The term coarse here does not mean lower quality, nor does the term fine imply higher quality. The terms, when used in reference to screw thread pitch, have nothing to do with the tolerances used (degree of precision) or the amount of craftsmanship, quality, or cost. They simply refer to the size of the threads relative to the screw diameter.

Diameters
There are three characteristic diameters of threads: major diameter, minor diameter, and pitch diameter. Industry standards specify minimum (min) and maximum (max) limits for each of these, for all recognized thread sizes. The minimum limits for external (or bolt, in ISO terminology) and the maximum limits for internal (nut) thread sizes are there to ensure that threads do not strip at the tensile strength limits for the parent material. The minimum limits for internal, and maximum limits for external, threads are there to ensure that the threads fit together. The major diameter of threads is the larger of two extreme diameters delimiting the height of the thread profile, as a cross-sectional view is taken in a plane containing the axis of the threads. For a screw, this is its outside diameter (OD). The major diameter of a nut may not be directly measured, but it may be tested with go/no-go gauges. The major diameter of external threads is normally smaller than the major diameter of the internal threads, if the threads are designed to fit together.
But this requirement alone does not guarantee that a bolt and a nut of the same pitch would fit together: the same requirement must separately be made for the minor and pitch diameters of the threads. Besides providing for a clearance between the crest of the bolt threads and the root of the nut threads, we must also ensure that the clearances are not so excessive as to cause the fasteners to fail. The minor diameter is the lower extreme diameter of the thread. Major diameter minus minor diameter, divided by two, equals the height of the thread. The minor diameter of a nut is its inside diameter. The minor diameter of a bolt can be measured with go/no-go gauges or, directly, with an optical comparator. As shown in the figure at right, threads of equal pitch and angle that have matching minor diameters, with differing major and pitch diameters, may appear to fit snugly, but only do so radially; threads that have only major diameters matching (not shown) could also be visualized as not allowing radial movement. The reduced material condition, due to the unused spaces between the threads, must be minimized so as not to overly weaken the fasteners. The pitch diameter (PD, or D2) of a particular thread, internal or external, is the diameter of a cylindrical surface, axially concentric to the thread, which intersects the thread flanks at equidistant points, when viewed in a cross-sectional plane containing the axis of the thread, the distance between these points being exactly one half the pitch distance. Equivalently, a line running parallel to the axis and a distance D2 away from it, the "PD line," slices the sharp-V form of the thread, having flanks coincident with the flanks of the thread under test, at exactly 50% of its height. We have assumed that the flanks have the proper shape, angle, and pitch for the specified thread standard. It is generally unrelated to the major (D) and minor (D1) diameters, especially if the crest and root truncations of the sharp-V form at these diameters are unknown. Everything else being ideal, D2, D, & D1, together, would fully describe the thread form. Knowledge of PD determines the position of the sharp-V thread form, the sides of which coincide with the straight sides of the thread flanks: e.g., the crest of the external thread would truncate these sides a radial displacement D - D2 away from the position of the PD line. Provided that there are moderate non-negative clearances between the root and crest of the opposing threads, and everything else is ideal, if the pitch diameters of a screw and nut are exactly matched, there should be no play at all between the two as assembled, even in the presence of positive root-crest clearances. This is the case when the flanks of the threads come into intimate contact with one another, before the roots and crests do, if at all. However, this ideal condition would in practice only be approximated and would generally require wrench-assisted assembly, possibly causing the galling of the threads. For this reason, some allowance, or minimum difference, between the PDs of the internal and external threads has to generally be provided for, to eliminate the possibility of deviations from the ideal thread form causing interference and to expedite hand assembly up to the length of engagement. Such allowances, or fundamental deviations, as ISO standards call them, are provided for in various degrees in corresponding classes of fit for ranges of thread sizes. 
At one extreme, no allowance is provided by a class, but the maximum PD of the external thread is specified to be the same as the minimum PD of the internal thread, within specified tolerances, ensuring that the two can be assembled, with some looseness of fit still possible due to the margin of tolerance. A class called interference fit may even provide for negative allowances, where the PD of the screw is greater than the PD of the nut by at least the amount of the allowance. The pitch diameter of external threads is measured by various methods:
- A dedicated type of micrometer, called a thread mic or pitch mic, which has a V-anvil and a conical spindle tip, contacts the thread flanks for a direct reading.
- A general-purpose micrometer (flat anvil and spindle) is used over a set of three wires that rest on the thread flanks, and a known constant is subtracted from the reading. (The wires are truly gauge pins, being ground to precise size, although "wires" is their common name.) This method is called the 3-wire method. Sometimes grease is used to hold the wires in place, helping the user to juggle the part, mic, and wires into position.
- An optical comparator may also be used to determine PD graphically.

Classes of fit
The way in which male and female threads fit together, including play and friction, is classified (categorized) in thread standards. Achieving a certain class of fit requires the ability to work within tolerance ranges for dimension (size) and surface finish. Defining and achieving classes of fit are important for interchangeability. Classes include 1, 2, 3 (loose to tight); A (external) and B (internal); and various systems such as H and D limits. Thread limit or pitch diameter limit is a standard used for classifying the tolerance of the thread pitch diameter for taps. For imperial sizes, H or L limits are used, which designate how many units of five ten-thousandths of an inch over- or undersized the pitch diameter is from its basic value, respectively. Thus a tap designated with an H limit of 3, denoted H3, would have a pitch diameter 0.0005 in × 3 = 0.0015 in larger than the base pitch diameter, and would thus cut an external thread with a looser fit than, say, an H2 tap. Metric taps use D or DU limits, which is the same system as imperial but with D and DU designating over- and undersized respectively, in units of 0.013 mm (0.51 mils). Generally taps come in the range of H1 to H5, and rarely L1.

Standardization and interchangeability
To achieve a predictably successful mating of male and female threads and assured interchangeability between males and between females, standards for form, size, and finish must exist and be followed. Standardization of threads is discussed below.

Thread depth
Screw threads are almost never made perfectly sharp (no truncation at the crest or root), but instead are truncated, yielding a final thread depth that can be expressed as a fraction of the pitch value. The UTS and ISO standards codify the amount of truncation, including tolerance ranges. A perfectly sharp 60° V-thread will have a depth of thread ("height" from root to crest) equal to 0.866 of the pitch. This fact is intrinsic to the geometry of an equilateral triangle, a direct result of the basic trigonometric functions, and is independent of measurement units (inch vs mm). However, UTS and ISO threads are not sharp threads.
The major and minor diameters delimit truncations on either side of the sharp V, typically about one eighth of the pitch (expressed with the notation 1/8p or 0.125p), although the actual geometry definition has more variables than that. This means that a full (100%) UTS or ISO thread has a height of around 0.65p. Threads can be (and often are) truncated a bit more, yielding thread depths of 60 to 75 percent of the 0.65p value. For example, a 75 percent thread sacrifices only a small amount of strength in exchange for a significant reduction in the force required to cut the thread. The result is that tap and die wear is reduced, the likelihood of breakage is lessened, and higher cutting speeds can often be employed. Truncation is achieved by using a slightly larger tap drill in the case of female threads, or by slightly reducing the diameter of the threaded area of the workpiece in the case of male threads, the latter effectively reducing the thread's major diameter. In the case of female threads, tap drill charts typically specify sizes that will produce an approximate 75 percent thread. A 60 percent thread may be appropriate in cases where high tensile loading will not be expected. In both cases, the pitch diameter is not affected. The balancing of truncation versus thread strength is similar to many engineering decisions involving the strength, weight and cost of material, as well as the cost to machine it.

Taper
Tapered threads are used on fasteners and pipe. A common example of a fastener with a tapered thread is a wood screw. The threaded pipes used in some plumbing installations for the delivery of fluids under pressure have a threaded section that is slightly conical. Examples are the NPT and BSP series. The seal provided by a threaded pipe joint is created when a tapered externally threaded end is tightened into an end with internal threads. Normally a good seal requires the application of a separate sealant in the joint, such as thread seal tape, or a liquid or paste pipe sealant such as pipe dope; however, some threaded pipe joints do not require a separate sealant.

Standardization
Standardization of screw threads has evolved since the early nineteenth century to facilitate compatibility between different manufacturers and users. The standardization process is still ongoing; in particular, there are still (otherwise identical) competing metric and inch-sized thread standards widely used. Standard threads are commonly identified by short letter codes (M, UNC, etc.) which also form the prefix of the standardized designations of individual threads. Additional product standards identify preferred thread sizes for screws and nuts, as well as corresponding bolt head and nut sizes, to facilitate compatibility between spanners (wrenches) and other tools.

ISO standard threads
These were standardized by the International Organization for Standardization (ISO) in 1947. Although metric threads were mostly unified in 1898 by the International Congress for the standardization of screw threads, separate metric thread standards were used in France, Germany, and Japan, and the Swiss had a set of threads for watches.

Other current standards
In particular applications and certain regions, threads other than the ISO metric screw threads remain commonly used, sometimes because of special application requirements, but mostly for reasons of backwards compatibility:
- Unified Thread Standard (UTS), the dominant thread standard used in the United States and Canada.
It is defined in ANSI/ASME B1.1 Unified Inch Screw Threads (UN and UNR Thread Form). This standard includes:
  - Unified Coarse (UNC), commonly referred to as National Coarse (NC) in retailing.
  - Unified Fine (UNF), commonly referred to as National Fine (NF) in retailing.
  - Unified Extra Fine (UNEF)
  - Unified Special (UNS)
- National pipe thread (NPT), used for plumbing of water and gas pipes, and threaded electrical conduit.
- NPTF (National Pipe Thread Fuel)
- British Standard Whitworth (BSW), and other Whitworth threads including:
  - British standard pipe thread (BSP), which exists in taper and non-taper variants; used for other purposes as well
  - British Standard Pipe Taper (BSPT)
- British Association screw threads (BA), primarily for electronic/electrical equipment, moving-coil meters, and to mount optical lenses
- British Standard Buttress Threads (BS 1657:1950)
- British Standard for Spark Plugs BS 45:1972
- British Standard Brass, a fixed-pitch 26 tpi thread
- Glass Packaging Institute threads (GPI), primarily for glass bottles and vials
- Power screw threads
- Royal Microscopical Society (RMS) thread, also known as society thread, a special 0.8" diameter × 36 threads-per-inch (tpi) Whitworth thread form used for microscope objective lenses.
- Microphone stands:
  - ⅝″ 27 threads per inch (tpi) Unified Special thread (UNS, USA and the rest of the world)
  - ¼″ BSW (not common in the USA, used in the rest of the world)
  - ⅜″ BSW (not common in the USA, used in the rest of the world)
- Stage lighting suspension bolts (in some countries only; some have gone entirely metric, others such as Australia have reverted to the BSW threads, or have never fully converted):
  - ⅜″ BSW for lighter luminaires
  - ½″ BSW for heavier luminaires
- Tapping screw threads (ST) – ISO 1478
- Aerospace inch threads (UNJ) – ISO 3161
- Aerospace metric threads (MJ) – ISO 5855
- Tyre valve threads (V) – ISO 4570
- Metal bone screws (HA, HB) – ISO 5835
- Panzergewinde (Pg) (German), an old German 80° thread (DIN 40430) that remained in use until 2000 in some electrical installation accessories in Germany.
- Fahrradgewinde (Fg) (English: bicycle thread), a German bicycle thread standard (per DIN 79012 and DIN 13.1) which encompasses many CEI and BSC threads as used on cycles and mopeds everywhere (http://www.fahrradmonteur.de/fahrradgewinde.php)
- Edison base incandescent light bulb holder screw thread
- Fire hose connection (NFPA standard 194)
- Hose Coupling Screw Threads (ANSI/ASME B1.20.7-1991 [R2003]) for garden hoses and accessories
- Löwenherz thread, a German metric thread used for measuring instruments
- Sewing machine thread

History of standardization
Standardization of screw threads began many centuries ago, the first time a craftsman who carved and filed screw threads ever tried to make two screws, or two mated pairs of screw and nut, come out alike. However, in craft production of individual threads or mated pairs of threads, interchangeability was not a requirement; custom fitting was the norm. Therefore, the first historically important intra-company standardization of screw threads began with Henry Maudslay around 1800, when the modern screw-cutting lathe made interchangeable V-thread machine screws a practical commodity. During the next 40 years, standardization continued to occur on the intra- and inter-company levels. No doubt many mechanics of the era participated in this zeitgeist; Joseph Clement was one of those whom history has noted.
In 1841, Joseph Whitworth created a design that, through its adoption by many British railroad companies, became a national standard for the United Kingdom called British Standard Whitworth. During the 1840s through 1860s, this standard was often used in the United States and Canada as well, in addition to myriad intra- and inter-company standards. In April 1864, William Sellers presented a paper to the Franklin Institute in Philadelphia, proposing a new standard to replace the US' poorly standardized screw thread practice. Sellers simplified the Whitworth design by adopting a thread profile of 60° and a flattened tip (in contrast to Whitworth's 55° angle and rounded tip). The 60° angle was already in common use in America, but Sellers's system promised to make it and all other details of threadform consistent. The Sellers thread, easier for ordinary machinists to produce, became an important standard in the U.S. during the late 1860s and early 1870s, when it was chosen as a standard for work done under U.S. government contracts, and it was also adopted as a standard by highly influential railroad industry corporations such as the Baldwin Locomotive Works and the Pennsylvania Railroad. Other firms adopted it, and it soon became a national standard for the U.S., later becoming generally known as the United States Standard thread (USS thread). Over the next 30 years the standard was further defined and extended and evolved into a set of standards including National Coarse (NC), National Fine (NF), and National Pipe Taper (NPT). Meanwhile, in Britain, the British Association screw threads were also developed and refined. During this era, in continental Europe, the British and American threadforms were well known, but also various metric thread standards were evolving, which usually employed 60° profiles. Some of these evolved into national or quasi-national standards. They were mostly unified in 1898 by the International Congress for the standardization of screw threads at Zurich, which defined the new international metric thread standards as having the same profile as the Sellers thread, but with metric sizes. Efforts were made in the early 20th century to convince the governments of the U.S., UK, and Canada to adopt these international thread standards and the metric system in general, but they were defeated with arguments that the capital cost of the necessary retooling would drive some firms from profit to loss and hamper the economy. (The mixed use of dueling inch and metric standards has since cost much, much more, but the bearing of these costs has been more distributed across national and global economies rather than being borne up front by particular governments or corporations, which helps explain the lobbying efforts.) Sometime between 1912 and 1916, the Society of Automobile Engineers (SAE) created an "SAE series" of screw thread sizes reflecting parentage from earlier USS and ASME standards. During the late 19th and early 20th centuries, engineers found that ensuring the reliable interchangeability of screw threads was a multi-faceted and challenging task that was not as simple as just standardizing the major diameter and pitch for a certain thread. It was during this era that more complicated analyses made clear the importance of variables such as pitch diameter and surface finish. A tremendous amount of engineering work was done throughout World War I and the following interwar period in pursuit of reliable interchangeability. 
Classes of fit were standardized, and new ways of generating and inspecting screw threads were developed (such as production thread-grinding machines and optical comparators). Therefore, in theory, one might expect that by the start of World War II, the problem of screw thread interchangeability would have already been completely solved. Unfortunately, this proved to be false. Intranational interchangeability was widespread, but international interchangeability was less so. Problems with lack of interchangeability among American, Canadian, and British parts during World War II led to an effort to unify the inch-based standards among these closely allied nations, and the Unified Thread Standard was adopted by the Screw Thread Standardization Committees of Canada, the United Kingdom, and the United States on November 18, 1949 in Washington, D.C., with the hope that they would be adopted universally. (The original UTS standard may be found in ASA (now ANSI) publication, Vol. 1, 1949.) UTS consists of Unified Coarse (UNC), Unified Fine (UNF), Unified Extra Fine (UNEF) and Unified Special (UNS). The standard was widely taken up in the UK, although a small number of companies continued to use the UK's own British standards for Whitworth (BSW), British Standard Fine (BSF) and British Association (BA) micro-screws. However, internationally, the metric system was eclipsing inch-based measurement units. In 1947, ISO was founded; and in 1960, the metric-based International System of Units (abbreviated SI from the French Système International) was created. With continental Europe and much of the rest of the world turning to SI and ISO metric screw thread, the UK gradually leaned in the same direction. The ISO metric screw thread is now the standard that has been adopted worldwide and is slowly displacing all former standards, including UTS. In the U.S., where UTS is still prevalent, over 40% of products contain at least some ISO metric screw threads. The UK has completely abandoned its commitment to UTS in favour of ISO metric threads, and Canada is in between. Globalization of industries produces market pressure in favor of phasing out minority standards. A good example is the automotive industry; U.S. auto parts factories long ago developed the ability to conform to the ISO standards, and today very few parts for new cars retain inch-based sizes, regardless of being made in the U.S. Even today, over a half century since the UTS superseded the USS and SAE series, companies still sell hardware with designations such as "USS" and "SAE" to convey that it is of inch sizes as opposed to metric. Most of this hardware is in fact made to the UTS, but the labeling and cataloging terminology is not always precise. In American engineering drawings, ANSI Y14.6 defines standards for indicating threaded parts. Parts are indicated by their nominal diameter (the nominal major diameter of the screw threads), pitch (number of threads per inch), and the class of fit for the thread. For example, “.750-10UNC-2A” is male (A) with a nominal major diameter of 0.750 in, 10 threads per inch, and a class-2 fit; “.500-20UNF-1B” would be female (B) with a 0.500 in nominal major diameter, 20 threads per inch, and a class-1 fit. An arrow points from this designation to the surface in question. 
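The designation format just described is regular enough to parse mechanically. A minimal sketch; the regular expression and field names below are my own, written around the two examples above, not any official grammar:

```python
import re

# Parse UTS designations like ".750-10UNC-2A" (nominal dia - TPI/series - class/gender).
PATTERN = re.compile(
    r"(?P<diameter>\.\d+)-(?P<tpi>\d+)(?P<series>UNC|UNF|UNEF|UNS)-"
    r"(?P<cls>\d)(?P<gender>[AB])"
)

for designation in (".750-10UNC-2A", ".500-20UNF-1B"):
    m = PATTERN.fullmatch(designation)
    gender = "male (external)" if m["gender"] == "A" else "female (internal)"
    print(f"{designation}: {m['diameter']} in nominal major diameter, "
          f"{m['tpi']} TPI {m['series']}, class {m['cls']}, {gender}")
```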
Generation
There are many ways to generate a screw thread, including the traditional subtractive types (e.g., various kinds of cutting: single-pointing, taps and dies, die heads, and milling), molding, casting (die casting, sand casting), forming and rolling, grinding, and occasionally lapping to follow the other processes; newer additive techniques; and combinations thereof.

Inspection
Inspection of thread geometry is discussed at Threading (manufacturing) > Inspection. Another common inspection point is the straightness of a bolt or screw. This topic comes up often when there are assembly issues with predrilled holes, as the first troubleshooting step is to determine whether the fastener or the hole is at fault. ASME B18.2.9, "Straightness Gage and Gaging for Bolts and Screws", was developed to address this issue. Per the scope of the standard, it describes the gage and procedure for checking bolt and screw straightness at maximum material condition (MMC) and provides default limits when not stated in the applicable product standard.
[Figure: A total solar eclipse occurs when the Moon completely covers the Sun's disk, as seen in this 1999 solar eclipse. Solar prominences can be seen along the limb (in red) as well as extensive coronal filaments.]
[Figure: An annular solar eclipse (left) occurs when the Moon is too far away to completely cover the Sun's disk (May 20, 2012). During a partial solar eclipse (right), the Moon blocks only part of the Sun's disk (October 23, 2014).]

A solar eclipse occurs when a portion of the Earth is engulfed in a shadow cast by the Moon which fully or partially blocks sunlight. This occurs when the Sun, Moon and Earth are aligned. Such an alignment coincides with a new moon (syzygy), indicating the Moon is closest to the ecliptic plane. In a total eclipse, the disk of the Sun is fully obscured by the Moon. In partial and annular eclipses, only part of the Sun is obscured. If the Moon were in a perfectly circular orbit, a little closer to the Earth, and in the same orbital plane, there would be total solar eclipses every new moon. However, since the Moon's orbit is tilted at more than 5 degrees to the Earth's orbit around the Sun, its shadow usually misses Earth. A solar eclipse can occur only when the Moon is close enough to the ecliptic plane during a new moon. Special conditions must occur for the two events to coincide because the Moon's orbit crosses the ecliptic at its orbital nodes twice every draconic month (27.212220 days), while a new moon occurs once every synodic month (29.53059 days). Solar (and lunar) eclipses therefore happen only during eclipse seasons, resulting in at least two, and up to five, solar eclipses each year, no more than two of which can be total eclipses. Total eclipses are rare because the timing of the new moon within the eclipse season needs to be more exact for an alignment between the observer (on Earth) and the centers of the Sun and Moon. In addition, the elliptical orbit of the Moon often takes it far enough away from Earth that its apparent size is not large enough to block the Sun entirely. Total solar eclipses are rare at any particular location because totality exists only along a narrow path on the Earth's surface traced by the Moon's full shadow or umbra.

An eclipse is a natural phenomenon. However, in some ancient and modern cultures, solar eclipses were attributed to supernatural causes or regarded as bad omens. A total solar eclipse can be frightening to people who are unaware of its astronomical explanation, as the Sun seems to disappear during the day and the sky darkens in a matter of minutes. Since looking directly at the Sun can lead to permanent eye damage or blindness, special eye protection or indirect viewing techniques are used when viewing a solar eclipse. It is safe to view only the total phase of a total solar eclipse with the unaided eye and without protection. This practice must be undertaken carefully, though the extreme fading of the solar brightness, by a factor of over 100 times in the last minute before totality, makes it obvious when totality has begun. It is that extreme variation, and the view of the solar corona, that lead people to travel to the zone of totality (the partial phases span over two hours, while the total phase can last only a maximum of 7.5 minutes for any one location, and is usually less). People referred to as eclipse chasers or umbraphiles will travel even to remote locations to observe or witness predicted central solar eclipses.
There are four types of solar eclipses: total, annular, hybrid, and partial. The Sun's distance from Earth is about 400 times the Moon's distance, and the Sun's diameter is about 400 times the Moon's diameter. Because these ratios are approximately the same, the Sun and the Moon as seen from Earth appear to be approximately the same size: about 0.5 degree of arc in angular measure. A separate category of solar eclipses is that of the Sun being occluded by a body other than the Earth's Moon, as can be observed at points in space away from the Earth's surface. Two examples are when the crew of Apollo 12 observed the Earth eclipse the Sun in 1969 and when the Cassini probe observed Saturn eclipsing the Sun in 2006.

The Moon's orbit around the Earth is slightly elliptical, as is the Earth's orbit around the Sun. The apparent sizes of the Sun and Moon therefore vary. The magnitude of an eclipse is the ratio of the apparent size of the Moon to the apparent size of the Sun during an eclipse. An eclipse that occurs when the Moon is near its closest distance to Earth (i.e., near its perigee) can be a total eclipse because the Moon will appear to be large enough to completely cover the Sun's bright disk, or photosphere; a total eclipse has a magnitude greater than or equal to 1.000. Conversely, an eclipse that occurs when the Moon is near its farthest distance from Earth (i.e., near its apogee) can be only an annular eclipse because the Moon will appear to be slightly smaller than the Sun; the magnitude of an annular eclipse is less than 1. A hybrid eclipse occurs when the magnitude of an eclipse changes during the event from less to greater than one, so the eclipse appears to be total at locations nearer the midpoint, and annular at other locations nearer the beginning and end, since the sides of the Earth are slightly farther away from the Moon. These eclipses are extremely narrow in their path width and relatively short in their duration at any point compared with fully total eclipses; for example, the 2023 April 20 hybrid eclipse's totality is over a minute in duration at various points along the path of totality. Like a focal point, the width and duration of totality and annularity are near zero at the points where the changes between the two occur.

Because the Earth's orbit around the Sun is also elliptical, the Earth's distance from the Sun similarly varies throughout the year. This affects the apparent size of the Sun in the same way, but not as much as does the Moon's varying distance from Earth. When Earth approaches its farthest distance from the Sun in early July, a total eclipse is somewhat more likely, whereas conditions favour an annular eclipse when Earth approaches its closest distance to the Sun in early January. Central eclipse is often used as a generic term for a total, annular, or hybrid eclipse. This is, however, not completely correct: the definition of a central eclipse is an eclipse during which the central line of the umbra touches the Earth's surface. It is possible, though extremely rare, that part of the umbra intersects with the Earth (thus creating an annular or total eclipse) but not its central line. This is then called a non-central total or annular eclipse. Gamma is a measure of how centrally the shadow strikes. The last umbral, yet non-central, solar eclipse was on April 29, 2014; this was an annular eclipse. The next non-central total solar eclipse will be on April 9, 2043.
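A small sketch of the apparent-size arithmetic behind eclipse magnitude; the diameters and distances below are rounded reference values, not figures from this article:

```python
import math

# Apparent angular size = 2 * atan(radius / distance), using rounded mean values.
def angular_deg(diameter_km, distance_km):
    return math.degrees(2 * math.atan((diameter_km / 2) / distance_km))

SUN_D, SUN_DIST = 1_391_000, 149_600_000   # km (rounded)
MOON_D = 3_474                             # km (rounded)

sun = angular_deg(SUN_D, SUN_DIST)         # ~0.533 degrees
for label, dist in [("perigee", 363_300), ("apogee", 405_500)]:
    moon = angular_deg(MOON_D, dist)
    print(f"Moon at {label}: {moon:.3f} deg vs Sun {sun:.3f} deg; "
          f"magnitude ~ {moon / sun:.3f}")
# Near perigee the ratio exceeds 1 (total possible); near apogee it is
# below 1 (annular), matching the definitions above.
```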
The phases observed during a total eclipse are called first contact, second contact, totality, third contact, and fourth contact. The diagrams to the right show the alignment of the Sun, Moon, and Earth during a solar eclipse. The dark gray region between the Moon and Earth is the umbra, where the Sun is completely obscured by the Moon. The small area where the umbra touches Earth's surface is where a total eclipse can be seen. The larger light gray area is the penumbra, in which a partial eclipse can be seen. An observer in the antumbra, the area of shadow beyond the umbra, will see an annular eclipse.

The Moon's orbit around the Earth is inclined at an angle of just over 5 degrees to the plane of the Earth's orbit around the Sun (the ecliptic). Because of this, at the time of a new moon, the Moon will usually pass to the north or south of the Sun. A solar eclipse can occur only when a new moon occurs close to one of the points (known as nodes) where the Moon's orbit crosses the ecliptic. As noted above, the Moon's orbit is also elliptical. The Moon's distance from the Earth can vary by about 6% from its average value. Therefore, the Moon's apparent size varies with its distance from the Earth, and it is this effect that leads to the difference between total and annular eclipses. The distance of the Earth from the Sun also varies during the year, but this is a smaller effect. On average, the Moon appears to be slightly smaller than the Sun as seen from the Earth, so the majority (about 60%) of central eclipses are annular. It is only when the Moon is closer to the Earth than average (near its perigee) that a total eclipse occurs.

The Moon orbits the Earth in approximately 27.3 days, relative to a fixed frame of reference. This is known as the sidereal month. However, during one sidereal month, Earth has revolved part way around the Sun, making the average time between one new moon and the next longer than the sidereal month: it is approximately 29.5 days. This is known as the synodic month and corresponds to what is commonly called the lunar month. The Moon crosses from south to north of the ecliptic at its ascending node, and vice versa at its descending node. However, the nodes of the Moon's orbit are gradually moving in a retrograde motion, due to the action of the Sun's gravity on the Moon's motion, and they make a complete circuit every 18.6 years. This regression means that the time between each passage of the Moon through the ascending node is slightly shorter than the sidereal month. This period is called the nodical or draconic month. Finally, the Moon's perigee is moving forwards, or precessing, in its orbit and makes a complete circuit in 8.85 years. The time between one perigee and the next is slightly longer than the sidereal month and is known as the anomalistic month.

The Moon's orbit intersects with the ecliptic at the two nodes that are 180 degrees apart. Therefore, the new moon occurs close to the nodes at two periods of the year approximately six months (173.3 days) apart, known as eclipse seasons, and there will always be at least one solar eclipse during these periods. Sometimes the new moon occurs close enough to a node during two consecutive months to eclipse the Sun on both occasions in two partial eclipses. This means that, in any given year, there will always be at least two solar eclipses, and there can be as many as five. Eclipses can occur only when the Sun is within about 15 to 18 degrees of a node (10 to 12 degrees for central eclipses).
Eclipses can occur only when the Sun is within about 15 to 18 degrees of a node (10 to 12 degrees for central eclipses). This range is referred to as an eclipse limit, and it is given in ranges because the apparent sizes and speeds of the Sun and Moon vary throughout the year. In the time it takes for the Moon to return to a node (a draconic month), the apparent position of the Sun has moved about 29 degrees relative to the nodes. Since the eclipse limit creates a window of opportunity of up to 36 degrees (24 degrees for central eclipses), it is possible for partial eclipses (or rarely a partial and a central eclipse) to occur in consecutive months. During a central eclipse, the Moon's umbra (or antumbra, in the case of an annular eclipse) moves rapidly from west to east across the Earth. The Earth is also rotating from west to east, at about 28 km/min at the Equator, but as the Moon is moving in the same direction as the Earth's rotation at about 61 km/min, the umbra almost always appears to move in a roughly west-east direction across a map of the Earth at the speed of the Moon's orbital velocity minus the Earth's rotational velocity. Rare exceptions can occur in polar regions, where the path may go over or near the pole, as in 2021 on June 10 and December 4. The width of the track of a central eclipse varies according to the relative apparent diameters of the Sun and Moon. In the most favourable circumstances, when a total eclipse occurs very close to perigee, the track can be up to 267 km (166 mi) wide and the duration of totality may be over 7 minutes. Outside of the central track, a partial eclipse is seen over a much larger area of the Earth. Typically, the umbra is 100-160 km wide, while the penumbral diameter is in excess of 6400 km. Besselian elements are used to predict whether an eclipse will be partial, annular, or total (or annular/total), and what the eclipse circumstances will be at any given location. Calculations with Besselian elements can determine the exact shape of the umbra's shadow on the Earth's surface. The longitudes on the Earth's surface at which the shadow will fall, however, are a function of the Earth's rotation and of how much that rotation has slowed over time. A number called ΔT is used in eclipse prediction to take this slowing into account. As the Earth slows, ΔT increases. ΔT for dates in the future can only be roughly estimated because the Earth's rotation is slowing irregularly. This means that, although it is possible to predict that there will be a total eclipse on a certain date in the far future, it is not possible to predict in the far future exactly at what longitudes that eclipse will be total. Historical records of eclipses allow estimates of past values of ΔT and so of the Earth's rotation. Total solar eclipses are rare events. Although they occur somewhere on Earth every 18 months on average, it is estimated that they recur at any given place only once every 360 to 410 years, on average. The total eclipse lasts for only a maximum of a few minutes at any location, because the Moon's umbra moves eastward at over 1700 km/h. Totality currently can never last more than 7 min 32 s. This value changes over the millennia and is currently decreasing. By the 8th millennium, the longest theoretically possible total eclipse will be less than 7 min 2 s. The last time an eclipse longer than 7 minutes occurred was June 30, 1973 (7 min 3 s). Observers aboard a Concorde supersonic aircraft were able to stretch totality for this eclipse to about 74 minutes by flying along the path of the Moon's umbra.
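The ground speed of the umbra follows directly from the two velocities quoted above, and it also explains how the Concorde could keep pace with the shadow. A minimal sketch of the arithmetic, using the text's round figures:

moon_shadow_speed = 61     # km/min, the Moon's shadow sweeping west to east
earth_rotation_speed = 28  # km/min, the Earth's surface speed at the Equator

net_kmh = (moon_shadow_speed - earth_rotation_speed) * 60
print(net_kmh)  # ~1980 km/h near the Equator, consistent with the
                # "over 1700 km/h" figure; the umbra moves faster over
                # higher latitudes, where the Earth's rotational speed is
                # smaller, and Concorde's ~2000 km/h cruise speed is what
                # let it ride along inside the shadow.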
The next total eclipse exceeding seven minutes in duration will not occur until June 25, 2150. The longest total solar eclipse during the 11,000-year period from 3000 BC to at least 8000 AD will occur on July 16, 2186, when totality will last 7 min 29 s. For comparison, the longest total eclipse of the 20th century, at 7 min 8 s, occurred on June 20, 1955, and there are no total solar eclipses over 7 min in duration in the 21st century. It is possible to predict other eclipses using eclipse cycles. The saros is probably the best known, and one of the most accurate, of these cycles. A saros lasts 6,585.3 days (a little over 18 years), which means that, after this period, a practically identical eclipse will occur. The most notable difference will be a westward shift of about 120° in longitude (due to the 0.3 days) and a small shift in latitude (north-south for odd-numbered cycles, the reverse for even-numbered ones). A saros series always starts with a partial eclipse near one of Earth's polar regions, then shifts over the globe through a series of annular or total eclipses, and ends with a partial eclipse at the opposite polar region. A saros series lasts 1226 to 1550 years and comprises 69 to 87 eclipses, with about 40 to 60 of them being central. Between two and five solar eclipses occur every year, with at least one per eclipse season. Since the Gregorian calendar was instituted in 1582, the years that have had five solar eclipses are 1693, 1758, 1805, 1823, 1870, and 1935 (in 1935 they fell on January 5, February 3, June 30, July 30, and December 25). The next occurrence will be 2206. On average, there are about 240 solar eclipses each century. Total solar eclipses are seen on Earth because of a fortuitous combination of circumstances. Even on Earth, the diversity of eclipses familiar to people today is a temporary (on a geological time scale) phenomenon. Hundreds of millions of years in the past, the Moon was closer to the Earth and therefore apparently larger, so every solar eclipse was total or partial, and there were no annular eclipses. Due to tidal acceleration, the orbit of the Moon around the Earth becomes approximately 3.8 cm more distant each year. Millions of years in the future, the Moon will be too far away to fully occlude the Sun, and no total eclipses will occur. In the same timeframe, the Sun may become brighter, making it appear larger in size. Estimates of the time when the Moon will be unable to occlude the entire Sun when viewed from the Earth range between 650 million and 1.4 billion years in the future. Historical eclipses are a very valuable resource for historians, in that they allow a few historical events to be dated precisely, from which other dates and ancient calendars may be deduced. A solar eclipse of June 15, 763 BC, mentioned in an Assyrian text is important for the chronology of the ancient Near East. There have been other claims to date earlier eclipses. The Book of Joshua 10:13 describes the sun staying still for an entire day in the sky; a group of University of Cambridge scholars concluded this to be the annular solar eclipse that occurred on 30 October 1207 BC. The Chinese king Zhong Kang supposedly beheaded two astronomers, Hsi and Ho, who failed to predict an eclipse 4,000 years ago. Perhaps the earliest still-unproven claim is that of archaeologist Bruce Masse, who putatively links an eclipse that occurred on May 10, 2807 BC, with a possible meteor impact in the Indian Ocean on the basis of several ancient flood myths that mention a total solar eclipse. Eclipses have been interpreted as omens, or portents.
The ancient Greek historian Herodotus wrote that Thales of Miletus predicted an eclipse that occurred during a battle between the Medes and the Lydians. Both sides put down their weapons and declared peace as a result of the eclipse. The exact eclipse involved remains uncertain, although the issue has been studied by hundreds of ancient and modern authorities. One likely candidate took place on May 28, 585 BC, probably near the Halys river in Asia Minor. An eclipse recorded by Herodotus before Xerxes departed for his expedition against Greece, which is traditionally dated to 480 BC, was matched by John Russell Hind to an annular eclipse of the Sun at Sardis on February 17, 478 BC. Alternatively, a partial eclipse was visible from Persia on October 2, 480 BC. Herodotus also reports a solar eclipse at Sparta during the Second Persian invasion of Greece. The date of the eclipse (August 1, 477 BC) does not match exactly the conventional dates for the invasion accepted by historians. Attempts have been made to establish the exact date of Good Friday by assuming that the darkness described at Jesus's crucifixion was a solar eclipse. This research has not yielded conclusive results, and Good Friday is recorded as being at Passover, which is held at the time of a full moon. Further, the darkness lasted from the sixth hour to the ninth, or three hours, which is far longer than the eight-minute upper limit for any solar eclipse's totality. Contemporary chronicles wrote about an eclipse at the beginning of May 664 that coincided with the beginning of the plague of 664 in the British Isles. In the Western hemisphere, there are few reliable records of eclipses before AD 800, until the advent of Arab and monastic observations in the early medieval period. The Cairo astronomer Ibn Yunus wrote that the calculation of eclipses was one of the many things that connect astronomy with Islamic law, because it allowed one to know when a special prayer could be made. The first recorded observation of the corona was made in Constantinople in AD 968. The first known telescopic observation of a total solar eclipse was made in France in 1706. Nine years later, English astronomer Edmond Halley accurately predicted and observed the solar eclipse of May 3, 1715. By the mid-19th century, scientific understanding of the Sun was improving through observations of the Sun's corona during solar eclipses. The corona was identified as part of the Sun's atmosphere in 1842, and the first photograph (a daguerreotype) of a total eclipse was taken of the solar eclipse of July 28, 1851. Spectroscope observations were made of the solar eclipse of August 18, 1868, which helped to determine the chemical composition of the Sun. John Fiske summed up myths about the solar eclipse in his 1872 book Myth and Myth-Makers: "In the myth of Hercules and Cacus, the fundamental idea is the victory of the solar god over the robber who steals the light. Now whether the robber carries off the light in the evening when Indra has gone to sleep, or boldly rears his black form against the sky during the daytime, causing darkness to spread over the earth, would make little difference to the framers of the myth. To a chicken a solar eclipse is the same thing as nightfall, and he goes to roost accordingly. Why, then, should the primitive thinker have made a distinction between the darkening of the sky caused by black clouds and that caused by the rotation of the earth?
He had no more conception of the scientific explanation of these phenomena than the chicken has of the scientific explanation of an eclipse. For him it was enough to know that the solar radiance was stolen, in the one case as in the other, and to suspect that the same demon was to blame for both robberies." Looking directly at the photosphere of the Sun (the bright disk of the Sun itself), even for just a few seconds, can cause permanent damage to the retina of the eye, because of the intense visible and invisible radiation that the photosphere emits. This damage can result in impairment of vision, up to and including blindness. The retina has no sensitivity to pain, and the effects of retinal damage may not appear for hours, so there is no warning that injury is occurring. Under normal conditions, the Sun is so bright that it is difficult to stare at it directly. However, during an eclipse, with so much of the Sun covered, it is easier and more tempting to stare at it. Looking at the Sun during an eclipse is as dangerous as looking at it outside an eclipse, except during the brief period of totality, when the Sun's disk is completely covered (totality occurs only during a total eclipse and only very briefly; it does not occur during a partial or annular eclipse). Viewing the Sun's disk through any kind of optical aid (binoculars, a telescope, or even an optical camera viewfinder) is extremely hazardous and can cause irreversible eye damage within a fraction of a second. Viewing the Sun during partial and annular eclipses (and during total eclipses outside the brief period of totality) requires special eye protection, or indirect viewing methods, if eye damage is to be avoided. The Sun's disk can be viewed using appropriate filtration to block the harmful part of the Sun's radiation. Sunglasses do not make viewing the Sun safe. Only properly designed and certified solar filters should be used for direct viewing of the Sun's disk. In particular, self-made filters using common objects, such as a floppy disk removed from its case, a compact disc, black colour slide film, or smoked glass, must be avoided. The safest way to view the Sun's disk is by indirect projection. This can be done by projecting an image of the disk onto a white piece of paper or card using a pair of binoculars (with one of the lenses covered), a telescope, or another piece of cardboard with a small hole in it (about 1 mm in diameter), often called a pinhole camera. The projected image of the Sun can then be safely viewed; this technique can be used to observe sunspots, as well as eclipses. Care must be taken, however, to ensure that no one looks through the projector (telescope, pinhole, etc.) directly. Viewing the Sun's disk on a video display screen (provided by a video camera or digital camera) is safe, although the camera itself may be damaged by direct exposure to the Sun. The optical viewfinders provided with some video and digital cameras are not safe. Securely mounting #14 welder's glass in front of the lens and viewfinder protects the equipment and makes viewing possible. Professional workmanship is essential because of the dire consequences any gaps or detaching mountings would have.
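Returning to the pinhole projector described above: the size of the projected image follows from the Sun's angular diameter of about half a degree. A minimal sketch, not from the source, assuming the small-angle approximation; the projection distances are illustrative:

import math

sun_angular_diameter = math.radians(0.53)  # the Sun's ~0.5-degree disk, in radians
for distance_m in (0.5, 1.0, 2.0):         # pinhole-to-paper distance in metres
    image_mm = distance_m * sun_angular_diameter * 1000
    print(distance_m, "m ->", round(image_mm, 1), "mm image")
# At 1 m the projected Sun is only ~9 mm across, which is why a longer
# projection distance gives a larger (though dimmer) image.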
In the partial eclipse path, one will not be able to see the corona or nearly complete darkening of the sky. However, depending on how much of the Sun's disk is obscured, some darkening may be noticeable. If three-quarters or more of the Sun is obscured, an effect can be observed by which the daylight appears dim, as if the sky were overcast, yet objects still cast sharp shadows. When the shrinking visible part of the photosphere becomes very small, Baily's beads will occur. These are caused by the sunlight still being able to reach the Earth through lunar valleys. Totality then begins with the diamond ring effect, the last bright flash of sunlight. It is safe to observe the total phase of a solar eclipse directly only when the Sun's photosphere is completely covered by the Moon, and not before or after totality. During this period, the Sun is too dim to be seen through filters. The Sun's faint corona will be visible, and the chromosphere, solar prominences, and possibly even a solar flare may be seen. At the end of totality, the same effects will occur in reverse order, and on the opposite side of the Moon. A dedicated group of eclipse chasers has pursued the observation of solar eclipses as they occur around the Earth. A person who chases eclipses is known as an umbraphile, meaning shadow lover. Umbraphiles travel for eclipses and use various tools to help view the Sun, including solar viewing glasses, also known as eclipse glasses, as well as telescopes. Photographing an eclipse is possible with fairly common camera equipment. In order for the disk of the Sun/Moon to be easily visible, a fairly high-magnification long-focus lens is needed (at least 200 mm for a 35 mm camera), and for the disk to fill most of the frame, a longer lens is needed (over 500 mm). As with viewing the Sun directly, looking at it through the optical viewfinder of a camera can damage the retina, so care is recommended. Solar filters are required for digital photography even if an optical viewfinder is not used. Using a camera's live view feature or an electronic viewfinder is safe for the human eye, but the Sun's rays could potentially irreparably damage digital image sensors unless the lens is covered by a properly designed solar filter. A total solar eclipse provides a rare opportunity to observe the corona (the outer layer of the Sun's atmosphere). Normally this is not visible because the photosphere is much brighter than the corona. Depending on the point reached in the solar cycle, the corona may appear small and symmetric, or large and fuzzy. It is very hard to predict this in advance. As light filters through the leaves of trees during a partial eclipse, the overlapping leaves create natural pinholes, displaying mini eclipses on the ground. Phenomena associated with eclipses include shadow bands (also known as flying shadows), which are similar to shadows on the bottom of a swimming pool. They occur only just prior to and after totality, when a narrow solar crescent acts as an anisotropic light source. The observation of the total solar eclipse of May 29, 1919, helped to confirm Einstein's theory of general relativity. By comparing the apparent distances between stars in the constellation Taurus, with and without the Sun between them, Arthur Eddington stated that the theoretical predictions about gravitational lensing were confirmed. The observation with the Sun between the stars was possible only during totality, since the stars are then visible. Though Eddington's observations were near the experimental limits of accuracy at the time, work in the latter half of the 20th century confirmed his results.
There is a long history of observations of gravity-related phenomena during solar eclipses, especially during the period of totality. In 1954, and again in 1959, Maurice Allais reported observations of strange and unexplained pendulum movement during solar eclipses. The reality of this phenomenon, named the Allais effect, has remained controversial. Similarly, in 1970, Saxl and Allen observed a sudden change in the motion of a torsion pendulum; this phenomenon is called the Saxl effect. An observation during the 1997 solar eclipse by Wang et al. suggested a possible gravitational shielding effect, which generated debate. In 2002, Wang and a collaborator published a detailed data analysis, which suggested that the phenomenon remains unexplained. In principle, the simultaneous occurrence of a solar eclipse and a transit of a planet is possible. But these events are extremely rare because of their short durations. The next anticipated simultaneous occurrence of a solar eclipse and a transit of Mercury will be on July 5, 6757, and a solar eclipse and a transit of Venus is expected on April 5, 15232. More common, but still infrequent, is a conjunction of a planet (especially, but not only, Mercury or Venus) at the time of a total solar eclipse, in which event the planet will be visible very near the eclipsed Sun, when without the eclipse it would have been lost in the Sun's glare. At one time, some scientists hypothesized that there may be a planet (often given the name Vulcan) even closer to the Sun than Mercury; the only way to confirm its existence would have been to observe it in transit or during a total solar eclipse. No such planet was ever found, and general relativity has since explained the observations that led astronomers to suggest that Vulcan might exist. During a total solar eclipse, the Moon's shadow covers only a small fraction of the Earth. The Earth continues to receive at least 92 percent of the amount of sunlight it receives without an eclipse - more if the penumbra of the Moon's shadow partly misses the Earth. Seen from the Moon, the Earth during a total solar eclipse is mostly brilliantly illuminated, with only a small dark patch showing the Moon's shadow. The brilliantly lit Earth reflects a lot of light to the Moon. If the corona of the eclipsed Sun were not present, the Moon, illuminated by earthlight, would be easily visible from Earth. This would be essentially the same as the earthshine which can frequently be seen when the Moon's phase is a narrow crescent. In reality, the corona, though much less brilliant than the Sun's photosphere, is much brighter than the Moon illuminated by earthlight. Therefore, by contrast, the Moon during a total solar eclipse appears to be black, with the corona surrounding it. Artificial satellites can also pass in front of the Sun as seen from the Earth, but none is large enough to cause an eclipse. At the altitude of the International Space Station, for example, an object would need to be about 3.35 km (2.08 mi) across to blot the Sun out entirely. These transits are difficult to watch because the zone of visibility is very small. The satellite typically passes over the face of the Sun in about a second. As with a transit of a planet, it will not get dark. Seen from any single location, a transit of the International Space Station across the Sun lasts only about one to eight seconds, assuming the spacecraft crosses the Sun's disk centrally along its diameter.
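The one-second figure for an overhead transit can be estimated from the station's orbital speed and altitude. A minimal sketch, not from the source; the altitude and speed are typical published values, and the observer is assumed to be directly beneath the pass:

import math

iss_altitude_m = 420_000  # typical ISS altitude
iss_speed_ms = 7_660      # typical ISS orbital speed

angular_rate = iss_speed_ms / iss_altitude_m  # rad/s, as seen from directly below
sun_diameter_rad = math.radians(0.53)         # the Sun's angular diameter
print(sun_diameter_rad / angular_rate)        # ~0.5 s for a central, overhead transit
# Near sunrise or sunset the line of sight to the station is much longer,
# so its angular rate is smaller and the transit can stretch to several seconds.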
The longest International Space Station transits occur just after sunrise or just before sunset, when the line of sight from the observer to the spacecraft is longest (see the parallax phenomenon). Observations of eclipses from spacecraft or artificial satellites orbiting above the Earth's atmosphere are not subject to weather conditions. The crew of Gemini 12 observed a total solar eclipse from space in 1966. The partial phase of the 1999 total eclipse was visible from Mir. During the Apollo-Soyuz Test Project conducted in July 1975, the Apollo spacecraft was positioned to create an artificial solar eclipse, giving the Soyuz crew an opportunity to photograph the solar corona. The solar eclipse of March 20, 2015, was the first eclipse estimated to potentially have a significant impact on the power system, with the electricity sector taking measures to mitigate any impact. The continental Europe and Great Britain synchronous areas were estimated to have about 90 gigawatts of solar power, and it was estimated that production would temporarily decrease by up to 34 GW compared to a clear-sky day. In addition to the drop in light level and air temperature, animals change their behavior during totality. For example, birds and squirrels return to their nests and crickets chirp. Every solar eclipse affects the overall light level observed during the day. Under non-eclipse conditions, light intensity varies mainly with the degree and type of cloud cover; thick cumulus clouds can reduce daylight by a factor of up to 1,000. The change in illuminance during a solar eclipse is one element of the atmosphere's overall response. Models of the change in solar illuminance consider two basic cases: the centre of the solar disk covered, or uncovered. The distinction matters because of the limb darkening phenomenon, which takes hold when the eclipse magnitude is higher than 0.5. Standard measurements of light-level changes during a solar eclipse cover the solar direction only. The total solar eclipse of 2017 was the first for which such measurements were also made facing away from the Sun. It was found that light-level changes differ between symmetrical moments of the eclipse. They depend on the position of the umbra in the sky as well as on the haze concentration, which determines the strength of light scattering in the atmosphere. The light-level changes go hand in hand with differences in sky surface brightness, which shows a very similar asymmetry between the symmetrical moments of the eclipse in the shadow-in and shadow-out directions. In the solar and antisolar directions, these variations are more balanced, with a peak near the mid-eclipse moment. Eclipses occur only during an eclipse season, when the Sun is close to either the ascending or descending node of the Moon. Each eclipse is separated by one, five, or six lunations (synodic months), and the midpoint of each season is separated by 173.3 days, which is the mean time for the Sun to travel from one node to the next. The period is a little less than half a calendar year because the lunar nodes slowly regress. Because 223 synodic months is roughly equal to 239 anomalistic months and 242 draconic months, eclipses with similar geometry recur 223 synodic months (about 6,585.3 days) apart. This period (18 years 11.3 days) is a saros. Because 223 synodic months is not identical to 239 anomalistic months or 242 draconic months, saros cycles do not endlessly repeat.
Each cycle begins with the Moon's shadow crossing the Earth near the north or south pole, and subsequent events progress toward the other pole until the Moon's shadow misses the Earth and the series ends. Saros cycles are numbered; currently, cycles 117 to 156 are active.
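The near-commensurability behind the saros, and the roughly 120° westward shift per cycle, are easy to verify numerically. A minimal sketch, not from the source, using standard month lengths:

synodic = 29.530589      # days
draconic = 27.212221     # days
anomalistic = 27.554550  # days

saros = 223 * synodic
print(saros)              # ~6585.32 days (the saros)
print(242 * draconic)     # ~6585.36 days
print(239 * anomalistic)  # ~6585.54 days
# The three products agree to within a fraction of a day, so after one saros
# the Moon returns to nearly the same phase, node, and distance; the small
# mismatches are why a saros series eventually drifts and ends.

print((saros % 1) * 360)  # ~116 deg: the leftover ~0.32 day of Earth rotation
                          # lands each successive eclipse roughly 120 deg west.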
How To Write A C Program To Find Factorial Using Recursion

A guide to using recursion in C. Readers will also learn how to write a program to find the factorial of a number using a recursive function. The necessary algorithms and code examples are given within the article. Recursion is a technique in C that can be very beneficial if used in a careful, error-free manner. Recursion means a function invoking itself. Using recursion results in less code and lowers the bulk of the program, but it increases the overall logical complexity, so use recursion only when you have a good hold on your algorithm, and it will serve you well. Finding the factorial of a number using recursion is a good way to understand the functionality and implementation of recursive functions.

Sequential Algorithm:
1. Take the number from the user (say n; you will find the factorial of this number).
2. Create an int variable (say var) and assign it the value 1.
3. Multiply var by n and store the result in var.
4. Decrement n by 1.
5. Go to step 3 unless n == 1.
6. Output var as the required factorial.

Recursive Algorithm:
Module A (main function):
1. Take the input from the user.
2. Check and make sure it is a positive number.
3. Pass the number to module B.
4. Print the value returned by module B as the required factorial.
Module B (user-defined function):
1. Accept the value from the calling module.
2. Check if it is 1; if yes, return 1 to the calling module.
3. Decrement the value of the argument by 1.
4. Pass it to module B.
5. Get the value returned by module B and return it to the calling module after multiplying it by the actual argument.

#include <stdio.h>

int fact(int n);

int main(void)
{
    int n;
    printf("\nEnter an integer:");
    scanf("%d", &n);
    printf("\nThe factorial of %d is = %d\n", n, fact(n));
    return 0;
}

/* Returns n! for n >= 1 by calling itself with n - 1 */
int fact(int n)
{
    if (n == 1)
        return 1;
    return n * fact(n - 1);
}

Explanation: The main function is basic, doing only input/output work and calling the user-defined function fact() with an argument. fact() receives an argument and checks whether it is 1; if yes, it immediately sends 1 back to the calling function. Otherwise, it calls fact() (itself) again with the original argument decremented by one, and returns to the calling function whatever value the called function returns, multiplied by the original argument. This calling of oneself in C functions is known as recursion. Let us check how it works with an example. Suppose the user enters 3 as input:
fact(3) is called
fact receives 3; the argument is not 1, hence fact(2) is called
fact receives 2; the argument is not 1, hence fact(1) is called
fact receives 1; the argument is 1, hence 1 is returned
2*1 is returned
3*2*1 is returned
6 is printed as the factorial
Understanding how the function works at these different levels is all you need to understand how this program works. Hope I have done a good job explaining. But keep one thing in mind: if not used correctly, recursion can easily lead to never-ending loops inside your program execution. For example, in this case, if we pass a negative number to fact(), it will never become 1 no matter how many times we decrement it, and the recursion will go on endlessly. That's it; now you know how to find the factorial of a number using recursion in C.
Ionization Energy in Periodic Table

Ionization energy, or ionization potential, in chemistry is the minimum amount of energy required to remove the outermost electron of an isolated gaseous atom of an element in the periodic table. It is generally represented by IE or IP and measured in electron volts (eV) or kilojoules per mole (kJ/mol). The process by which an element loses an electron to form a cation is called ionization, and the ionization energy can be calculated from the energy required to complete this process. Ionization is an endothermic process because energy must be supplied to drive it. Generally, the ionization energy of periodic table elements increases from left to right in a period because the nuclear charge (atomic number) of the elements increases in the same direction. For learning chemistry, the periodic table trend of ionization energy or potential is affected mainly by the following factors:
- Atomic radius
- Atomic number
- Charge on the nucleus
- Filled or half-filled orbitals
- Shielding electrons
- Oxidation number of elements

What is Ionization Energy? An electron is raised to a higher energy level by the absorption of energy from external sources. If this process continues, a stage comes where the electron passes fully out of the influence of the atomic nucleus. In simple words, ionization energy (IE) or ionization potential (IP) is the minimum amount of energy required to remove the most loosely bound electron of an isolated gaseous atom or ion of an element. In physics, ionization energy is generally expressed in electron volts (eV), but in chemistry it is usually expressed in kilojoules per mole (kJ/mol) or kilocalories per mole (kcal/mol).

Ionization Energy Equation: Electrons are raised to higher energy levels by the transfer of energy from external sources. If enough energy is transferred, the electron passes fully out of the influence of the nucleus of the atom. Ionization energy can be represented by the following equation,
M (g) + IE → M+ (g) + e−
In the above equation, M = an atom of a periodic table element, M+ = the cation formed by ionization, and e− = the electron removed from the M atom. IE is positive for neutral atoms, and ionization is an endothermic process. Therefore, the ionization energy or enthalpy of a periodic table element corresponds to an endothermic reaction in thermodynamics: during the ionization process, energy is consumed by the atom.

Electron Volt to Joule: For the conversion of electron volts to joules, we first define the electron volt. The energy gained by an electron falling through an electric potential difference of one volt is called an electron volt (eV). From the definition,
1 eV = charge of an electron × 1 volt = (1.6 × 10⁻¹⁹ coulomb) × (1 volt) = 1.6 × 10⁻¹⁹ joule
1 eV = 1.6 × 10⁻¹² erg

First, Second, and Third Ionization Energies: Electrons are removed in stages, one after the other, from an atom or ion. Therefore, the values of successive ionization energies of an element differ from one another. The successive ionization energies can be represented as first, second, third, fourth, and so on.
- First ionization energy: The amount of energy required for the removal of the first electron from a gaseous atom is called its first ionization energy (IE1).
M (g) + IE1 → M+ (g) + e−
- Second ionization energy: The energy required for the removal of the second electron from a unipositive cation is called the second ionization energy (IE2).
M+ (g) + IE2 → M+2 (g) + e−
- Third and fourth ionization energies: Similarly, we can define the third and fourth ionization energies of periodic table elements.
M+2 (g) + IE3 → M+3 (g) + e−
M+3 (g) + IE4 → M+4 (g) + e−

Successive Ionization Energy Values: The values of successive ionization energies increase in the following order:
IE1 < IE2 < IE3 < IE4 …
The ionization energy values increase successively because the removal of an electron from a cation with a higher positive charge is relatively more difficult.
[Table: successive ionization energies (kJ/mol)]

Ionization Energy of Hydrogen: The energy required to completely remove the electron from the hydrogen energy levels is called the ionization energy of the hydrogen atom. Therefore, the ionization energy of hydrogen can be measured by calculating the energy difference for the transition of the electron from n = 1 to n = ∞. From the Bohr model of hydrogen, the ionization potential of hydrogen is
IEH = 2.179 × 10⁻¹¹ erg = 2.179 × 10⁻¹⁸ joule = 13.6 eV

Ionization Energy of Helium: The electron configuration of helium is 1s². The second ionization potential corresponds to the removal of the second electron from the 1s orbital against the nuclear charge of +2. Hence the second IE of helium calculated from the Bohr energy equation is
IE(He+) = Z² × IEH = 2² × 13.6 eV = 54.4 eV

Factors Affecting Ionization Energy: The magnitude of the ionization energy of the periodic table elements depends on the following factors,
- Charge of the nucleus
- Atomic radius
- Half-filled and filled orbitals
- Shielding effect of electrons

Atomic Radius Trend: The atomic radius decreases from left to right along a period in the periodic table. Therefore, when we move left to right along a period, the ionization potential normally increases because the atomic radius decreases. When we move from top to bottom in a group, the ionization potential of the elements decreases with the increasing size of the atom.

Charge of the Nucleus: With increasing atomic number, the charge on the nucleus increases. Hence the electrostatic attraction between the outermost electrons and the nucleus of an atom increases, and it becomes comparatively more difficult to remove an electron. Therefore, the ionization energy increases with the increasing nuclear charge of an atom. Generally, the value of ionization energy increases when moving from left to right in a period because the nuclear charge of the elements also increases in the same direction.
[Table: ionization energies (kJ/mol) of second-period elements]

Atomic Radius and Ionization Energy: The greater the atomic radius of an element in the periodic table, the weaker the attraction, and hence the lower the energy required for the removal of the electron. In larger atoms, the attraction between the nucleus and the outermost electron is less, so it is easier to remove an electron from a larger atom than from a smaller atom. Generally, when we move from top to bottom in a group, the ionization energy of an atom decreases with increasing atomic radius, since the number of inner shells increases and the ionization potential tends to decrease.

Half-Filled and Filled Orbitals: According to Hund's rule, atoms having a half-filled or fully filled orbital are comparatively more stable, and more energy is required to remove an electron from such atoms. Therefore, the ionization energy of such atoms is relatively higher than would normally be expected from their position in the periodic table.
Half-filled and filled orbitals can create irregularities in ionization energy trends. For example, Be and N in the second period and Mg and P in the third period have slightly higher values of IE than expected.
- The higher IE values of beryllium (Be → 2s²) and magnesium (Mg → 3s²) are explained by the extra stability of the completely filled 2s orbital in Be and 3s orbital in Mg.
- Similarly, the higher IE values of nitrogen (N → 2s² 2p³) and phosphorus (P → 3s² 3p³) are explained by the extra stability of the half-filled 2p orbital in N and 3p orbital in P.

Shielding Effect and Ionization Energy: The electrostatic interactions between the electrons and the nucleus mean that an outer electron is attracted by the nucleus and repelled by the electrons of the inner shells. Because of these combined attractive and repulsive forces, the outer electron experiences less net attraction from the nucleus. This is called the shielding or screening effect. The larger the number of electrons in the inner shells, the smaller the attractive force holding the outer electron. The radial distribution functions of the s, p, and d subshells show that, for the same principal quantum number, the s-subshell is the most shielding, the p-subshell less so, and the d-subshell the least. Therefore, the shielding efficiency falls in the following order:
ns orbital > np orbital > nd orbital > nf orbital
where n = principal quantum number.

Shielding Constant and Ionization Energy of Periodic Table Elements: Generally, when we move down a group, the number of inner shells increases, i.e., the shielding constant increases, and hence the ionization potential tends to decrease.
[Table: shielding constants for the valence electron and ionization energies (kJ/mol) of group-2 elements]

Periodic Table Trends of Ionization Energy: The greater the charge on the nucleus of an atom, the more energy is required to remove an electron from the atom. With increasing atomic number, the electrostatic attraction between the outermost electrons and the nucleus of an atom increases, and ionizing the atom becomes more difficult. Hence ionization energy values generally increase in moving left to right across a period.

Ionization Energy of Second-Period Elements: Due to the presence of a fully filled or half-filled orbital in beryllium and nitrogen, the ionization energies of beryllium and nitrogen are slightly higher than those of the neighboring elements boron and oxygen. Therefore, the ionization potentials of the second-period elements follow the order:
Li < B < Be < C < O < N < F < Ne

Exceptions to the Ionization Energy Trend: A few exceptions to the ionization energy trends in the periodic table are explained on the basis of half-filled and fully filled orbitals.
- Group-15 elements (nitrogen and phosphorus) in the periodic table have higher ionization potentials than the group-16 elements (oxygen and sulfur).
- Similarly, group-2 elements (beryllium and magnesium) have higher ionization potentials than the group-13 elements (boron and aluminum).

Ionization Energy of Nitrogen and Phosphorus: Nitrogen and phosphorus are group-15 elements with atomic numbers 7 and 15. The electron configurations of nitrogen and phosphorus are:
- Nitrogen (N): 1s² 2s² 2p³
- Phosphorus (P): 1s² 2s² 2p⁶ 3s² 3p³
From the above electron configurations, the removal of an electron from the half-filled 2p and 3p orbitals of nitrogen and phosphorus requires more energy.
Therefore, the IE of nitrogen is slightly greater than that of oxygen, and the IE of phosphorus is slightly greater than that of sulfur.

Beryllium and Magnesium: Removal of an electron from beryllium (Be) or magnesium (Mg), with their fully filled s-orbitals, requires more energy. Therefore, the ionization potential of beryllium is slightly greater than that of boron. Similarly, the ionization potential of magnesium is slightly greater than that of aluminum.

Positively Charged Ions: Ionization can be enormously influenced by the overall charge of the ionizing species, such as M+, M+2, M+3, etc. During ionization, electron withdrawal from a positively charged species is more difficult than from a neutral atom. The first ionization potentials of the elements vary with their positions in the periodic table. Among the periodic table elements, the noble gases have the highest ionization energy values and the alkali metals have the lowest values.

Ionization Energy and Chemical Properties: In learning chemistry, the ionization potential is an important chemical property of periodic table elements. We can explain various chemical properties with the help of ionization energies.

Ionization Energy and Chemical Reactivity: The low ionization energies of the alkali metals (lithium, sodium, potassium, rubidium, and cesium) account for the high reactivity of the alkali metals. Similarly, the high ionization energies of the noble gases account for the low reactivity of the noble gases. Such rules are only applicable to metals that have a highly electropositive character; they do not apply to highly electronegative elements. For example, the IE of fluorine is very high, yet fluorine is among the most reactive of the periodic table elements.

Reducing Properties of Elements: In redox reactions, the removal of an electron from an atom of a chemical element is called oxidation, and the element from which the electron is removed acts as a reducing agent. The lower the value of the ionization potential, the greater the element's reducing power, because it can more easily give up an electron.

Basic Character and Ionization Potential: In chemical science, we can explain the acidic and basic character of elements by their ionization energies. When the ionization energy of an element is lower, the basic character of that element is greater and its acidic character is lower. The ionization energy measurements of periodic table elements can also be used to calculate various other chemical properties, such as bond energy, Mulliken electronegativity, electron affinity, etc.
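To tie together the units and Bohr-model values used in this article, here is a minimal sketch, not from the source; the physical constants are standard values, slightly more precise than the rounded ones quoted above, and the Z² formula applies only to one-electron (hydrogen-like) species:

ELECTRON_CHARGE = 1.602176634e-19  # coulombs, so 1 eV = 1.602...e-19 joule
AVOGADRO = 6.02214076e23           # particles per mole

ev_to_kj_per_mol = ELECTRON_CHARGE * AVOGADRO / 1000
print(ev_to_kj_per_mol)            # ~96.485 kJ/mol per eV

def hydrogenic_ie_ev(z):
    # Bohr model: the IE of a one-electron species scales as Z^2 times 13.6 eV
    return 13.6 * z ** 2

print(hydrogenic_ie_ev(1))                     # H: 13.6 eV
print(hydrogenic_ie_ev(1) * ev_to_kj_per_mol)  # ~1312 kJ/mol for hydrogen
print(hydrogenic_ie_ev(2))                     # He+: 54.4 eV, helium's SECOND IE
# Helium's first ionization energy (~24.6 eV) is lower than 54.4 eV because
# the other 1s electron partially shields the nuclear charge.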
Spread sand on an even surface so that the layer is perfectly flat. You might think that gravity should keep the layer even. But even a flat layer of sand is unstable because grain sizes differ and respond differently to the same breeze. A light wind can form ripples on a sand surface in just minutes. Over years, centuries, and millennia, winds can sculpt sand into incredibly complex shapes. Some of our planet’s most elaborate sand dunes are found in the Badain Jaran Desert, in the western part of Inner Mongolia. Researchers have long puzzled over how to best study these dunes, but mapping them in three dimensions has become much easier, thanks to a NASA satellite. The images shown here offer two views of the Badain Jaran Desert as observed by the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) on NASA’s Terra satellite. The top image is an elevation map made from the ASTER Global Digital Elevation Model (GDEM), released in 2009. The image shows dune peaks in off-white, and the low-lying areas between dunes in green. The white rectangle indicates the area shown in the bottom image, which was acquired on December 13, 2007, and made from a combination of visible and near-infrared (VNIR) light. Dunes and small lakes mingle in the Badain Jaran, with the lakes occurring in flat areas between dunes. (A few lakes appear in the VNIR image; the lake near the upper right corner of the image is tinged with blue.) In the VNIR image, sunlight illuminates south-facing dune surfaces while leaving north-facing surfaces in shadow. The dunes are incredibly complicated—described in a 2009 study as “complex reversing mega-dunes developed from compound barchanoid mega-dunes.” A barchan dune is crescent shaped, with “horns” pointing in the same direction as the wind blows. Barchan dunes tend to form in areas with steady wind direction and limited amounts of sand. Where more sand is available, and where wind direction is more variable, more complicated dunes can form. Reversing dunes often occur in areas where winds blow from opposite directions. The prevailing wind creates barchanoid dunes, and occasional winds from the opposite direction produce small dunes on the crests of the big dunes. Compound dunes are made from multiple dunes of the same type that overlap or overlie each other. Complex dunes are made from different types of dunes merging or overlapping. By offering three-dimensional data, the ASTER GDEM has enabled researchers to better understand the formation of complicated dunes. Besides the Badain Jaran, the GDEM has also shed light on the formation of dunes in the Empty Quarter (Rub’ al Khali) of Saudi Arabia. Because of the angle of sunlight, the bottom (gray scale) image may cause an optical illusion known as relief inversion. NASA Earth Observatory images created by Jesse Allen, using data provided courtesy of NASA/GSFC/METI/ERSDAC/JAROS, and U.S./Japan ASTER Science Team. Caption by Michon Scott. One of the main reasons that rainless regions like the Sahara Desert are interesting from the perspective of landscape science is that the work of flowing water—mainly streams and rivers—becomes less important than the work of wind. Over millennia, if enough sand is available, winds can generate dunes of enormous size, arranged in regular patterns. Long, linear dunes stretch generally north to south across much of northeast Algeria, covering a vast tract (~140,000 square kilometers) of the Sahara Desert known as the Erg Oriental. 
Erg means “dune sea” in Arabic, and the term has been adopted by modern geologists. Spanning this image from a point on the southwest margin of the erg (image center point: 28.9°N 4.8°W) is a series of 2-kilometer-wide linear dunes composed of red sand. The dune chains are more than 100 meters high. The “streets” between the dunes are grayer areas free of sand.
FUNCTION: How to create a line graph
We began by discussing our central idea and the questions we generated to explore. We briefly discussed what we had already discovered during our unit and what we still wanted to find out. We then watched this How to Create a Line Graph YouTube video and discussed what it had taught us. We then discussed where we might see line graphs in our daily lives.
- newspapers (for articles and for weather forecasts)
- sometimes on TV news programmes
Why do we use graphs?
- It makes understanding information easier.
- We don't have to read long paragraphs; we can just read the graph.
We then looked at 4 examples of data and thought about which we could show using a line graph:
Andrea’s Bowling Scores: 122, 156, 87, 112, 145
Average Temperature in Lausanne
Number of Countries We Have Lived In:
° 1 country: IIII
° 2 countries: III
° 3 countries: II
° 4 or more countries: III
Number of Refugees in Past 20 Years
Most of us agreed we could create a line graph to show 'Temperature in Lausanne' and the 'Refugees Infographic'.
- Eventually we came to the understanding that when we show data over a period of time, line graphs are a helpful graph choice.
We looked at these 3 types of line graphs and chose 1 we felt would challenge us enough to create, using the data provided (below) for Lausanne where we live:
Choice 1: Create a line graph that shows the average temperature.
Choice 2: Create a line graph that shows the average, maximum and minimum temperatures.
Choice 3: Create a line graph that shows the average, minimum and maximum temperatures AND the average precipitation.
We then looked at data for Lausanne where we live and used it to create one of the types of line graphs we felt matched our level of understanding. Some children felt the simpler version of the line graph was best as they hadn't created a line graph before. Others chose to create the slightly more challenging line graph. Some chose to create the most challenging line graph. By giving the children the choice and responsibility of selecting the level of graph they created, it helped cater to the various learning levels in our class. It also helped foster within each child a sense of ownership of their own learning. We concluded by sharing our graphs in small groups and gave constructive feedback to each other on what we felt was done well and what could be a focus in future graph making, as part of a reflection.
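For readers who want to reproduce this kind of graph in software rather than on paper, here is a minimal sketch using Python's matplotlib library (an assumption; the class may have drawn its graphs by hand or in a spreadsheet). The monthly temperatures are illustrative placeholders, not the actual Lausanne data used in class.

import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
avg_temp_c = [2, 3, 7, 10, 15, 18, 21, 20, 16, 11, 6, 3]  # illustrative values

plt.plot(months, avg_temp_c, marker="o")  # one line, one point per month
plt.title("Average Monthly Temperature in Lausanne (sample data)")
plt.xlabel("Month")
plt.ylabel("Temperature (°C)")
plt.show()

Plotting the maximum and minimum series as well (Choice 2 above) would just be two more plt.plot calls before plt.show().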
What is a function? A function is a block of code that performs a specific task. Functions help you organize your code and make it reusable.

How to define a function in Python? In Python, you can define a function using the def keyword, followed by the function name and a set of parentheses (). Inside the parentheses, you can define any parameters that the function takes. The code block within every function starts with a colon : and is indented.

def greet(name):
    print("Hello, " + name)

This is a function called greet that takes a single parameter called name. When the function is called, it prints out a greeting with the name provided.

How to call a function in Python? To call a function in Python, you simply need to use the function name followed by a set of parentheses (), and pass any required arguments inside the parentheses.

greet("John")

This would call the greet function and pass the argument “John” to it. The function would then print out “Hello, John”.

Returning a value from a function: In Python, you can use the return keyword to specify a value that a function should return. When a function encounters the return keyword, it will immediately exit and return the specified value.

def add(a, b):
    result = a + b
    return result

sum = add(1, 2)
print(sum)

This function, called add, takes two arguments, a and b, and returns their sum. When the function is called with the arguments 1 and 2, it returns the value 3, which is then stored in the variable sum and printed out.

Here are some more examples of functions in Python:

def is_even(n):
    if n % 2 == 0:
        return True
    else:
        return False

def calculate_area(length, width):
    return length * width

def add_and_multiply(a, b, c):
    result = a + b
    result *= c
    return result

def say_hello():
    print("Hello!")
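To round out the tutorial, here is a short usage sketch calling the example functions defined above; the expected outputs in the comments follow directly from the definitions:

print(is_even(4))                 # True: 4 % 2 == 0
print(is_even(7))                 # False
print(calculate_area(3, 5))       # 15: length * width
print(add_and_multiply(1, 2, 3))  # 9: (1 + 2) * 3
say_hello()                       # prints "Hello!" (the function returns None)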
Firstly, each type of data is collected through observation; this initial set of observations is called raw data. After the collection of data, one has to find ways to arrange it in tabular form in order to observe its features and study them. Such an arrangement is called the presentation of data. The raw data can be arranged in any of the following ways.
- Ascending Order
- Descending Order
- Alphabetical Order
When the raw data are put in ascending or descending order of magnitude, the result is called an array, or arrayed data. This data can be studied using methods such as tally marks and frequency distribution; the number of times an observation occurs in the given data is called the frequency of the observation (a worked sketch follows below). Understand the concepts of frequency distribution and how to construct the distribution table by observing the solved examples. Learn these concepts easily by practicing the questions from the exercises given in the RD Sharma solutions for the chapter “Data Handling I (Presentation of Data)” (Chapter 21, Exercise 21.1).
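As a worked sketch (not from the textbook), tally marks and a frequency distribution can be reproduced in a few lines of Python using collections.Counter; the observations here are hypothetical:

from collections import Counter

raw_data = [4, 2, 1, 4, 3, 2, 4, 1, 2, 4]  # hypothetical raw observations
frequency = Counter(raw_data)              # maps observation -> occurrences

for value in sorted(frequency):            # ascending order, i.e., an array
    tally = "|" * frequency[value]         # crude tally marks
    print(value, tally, frequency[value])
# e.g., the observation 4 appears four times, so its frequency is 4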
The purpose of this course is to provide a comprehensive overview of pain assessment and pain management. Pain is a subjective experience, and the context in which it happens influences both how the pain is experienced and its meaning to the individual. Defining and quantifying pain has never been easy. As part of the human experience, pain has been described from the earliest times. Prehistoric man related pain and pain relief to the acceptance or anger of the gods. Early Greek histories describe pain in the context of injuries received during battles; the Greek physician Hippocrates was the first to regard pain as a symptom, a sensory experience that could be explained by the patient to the practitioner. During the Renaissance, Leonardo da Vinci wrote that:1 “the chief good is wisdom, the chief evil is body pain.” In 1842, a physician in the state of Georgia, Dr. Crawford Williamson Long, was the first doctor to use ether as a general anesthetic while performing surgery to remove a tumor from the neck of a patient.1 The issue of pain during childbirth was hotly debated, with many in the medical profession supporting the tenet that experiencing pain during delivery was a religious principle. However, in 1853 the British monarch, Queen Victoria, was given chloroform during childbirth, and again for her next delivery in 1857. She described the experience of giving birth with the addition of anesthesia as “soothing, quieting and delightful beyond measure.” This positive affirmation from Queen Victoria was an important first step in changing the prevailing views about pain relief during childbirth.2 The French physician Dr. Albert Schweitzer proclaimed in 1931 that “Pain is a more terrible lord of mankind than even death itself.” (Partners for Understanding Pain 2016) However, from a positive viewpoint, pain is an important diagnostic marker of injury or disease and is significant in formulating a diagnosis.3 Nerve stimulation leads to the physical component of pain. Pain results from an injury and may be confined to a discrete area, or it can be generalized, as in conditions such as fibromyalgia. Nerve fibers transmit pain impulses to the brain, where the brain personalizes the pain experience.4 Two definitions of pain that have changed how those involved in healthcare view pain are the definition from the International Association for the Study of Pain, which states that pain is “an unpleasant sensory and emotional experience associated with actual or potential tissue damage,” and Margo McCaffery’s landmark definition that describes pain as “whatever the experiencing person says it is, existing whenever and wherever the person says it does.” Patients may experience acute, chronic, or cancer pain. Acute pain follows injury to the body and generally disappears as healing takes place. There is an identifiable pathology that accounts for the pain. It may arise from operative procedures or tissue trauma associated with an inflammatory process. It may be associated with objective physical signs such as increased heart rate, increased blood pressure, and pallor (autonomic nervous system activity), making patients "look" like they are in pain. Chronic non-malignant pain is pain that lasts for an extended period of time. There may or may not be known active pathology to account for the suffering that the individual is experiencing.
Chronic pain, in contrast to acute pain, is rarely accompanied by signs of autonomic nervous system activity. Chronic pain is not simply acute pain that refuses to go away; it can be seen as a disease in its own right.5 Australia has been one of the first countries to recognize pain as a disease entity, and several other countries are set to follow suit.2 Cancer pain may be acute, chronic, or intermittent. It usually has a definable cause, which is typically related to tumor recurrence or treatment. Pain is a major problem in today’s society. Pain carries with it consequences across a broad range of categories, including the ethical, social, economic, and legal arenas. In 2011, a conservative estimate put the number of those suffering from chronic pain at 100 million or more adults in this country. This figure does not include children with chronic pain conditions. Research shows that more than 1.5 billion people worldwide suffer from chronic pain and that between 3% and 4.5% of the world population is affected by neuropathic pain.6 These numbers make it easy to understand that pain is one of the most common reasons that people seek medical attention. Persistent pain is often associated with anxiety, depression, functional impairment, sleep disturbances, disability, and impairment in activities of daily living. Health economists from Johns Hopkins University have put the total annual cost of chronic pain at as high as $635 billion yearly in the United States, which exceeds the annual costs for cancer, heart disease, and diabetes. The pervasiveness of pain has a huge impact on commerce; a report by the Institute of Medicine demonstrated that lost productivity in 2010 cost between $297.4 billion and $335.5 billion.6 Chronic non-malignant pain is defined as pain lasting more than three months and may affect any part of the body. Chronic pain can be divided into three broad classes depending on location: localized, regional, or widespread.2 A 2012 worldwide systematic review of the prevalence of low back pain indicated that it is a main source of chronic pain globally, with the highest rates occurring in women between the ages of 40 years and 80 years.2 A consistent finding is that chronic pain occurs more frequently in women than in men. Complex Regional Pain Syndrome (CRPS) has an incidence more than three times higher in women than in men. Headaches and migraines also have a higher occurrence rate in women than in men; the annual prevalence rate reaches up to 33% for women and up to 16% for men. This pattern is found not just in the USA but worldwide. Fibromyalgia, osteoarthritis, and irritable bowel syndrome are also found at consistently higher rates in women.5 Acute pain typically has an abrupt onset and is often described as sharp. It is often caused by events such as a broken bone, surgery, childbirth, dental pain, or burns. Acute pain may last a short period of time or may last for a few months. The pain dissipates when the underlying cause has healed. When acute pain lasts longer than 3-6 months, it is then termed chronic. It is possible that acute pain that is not treated properly may lead to chronic pain. Multiple barriers to effective pain management exist. These include many individual, family, healthcare provider, societal, and political barriers. The good news is that we have the knowledge and skills to manage most pain effectively. So, what is the problem? Why is unrelieved pain still so prevalent? Knowledge is important.
Clinicians, as well as patients, need to be knowledgeable about methods of assessing and managing pain, but knowledge alone rarely changes practice. Efforts must go beyond education if pain treatment is to improve, and pain needs to be made visible so it will not go unnoticed by clinicians. Pain theories help clinicians understand pain and guide its treatment. The first theory of pain was formulated by the French philosopher René Descartes in the 17th century and was titled the 'Specificity Theory.' He believed that the human soul resided in the pineal gland and that this was the source of the sensations the individual experienced, including pain.2 The Specificity Theory of Pain suggests that certain pain receptors send signals to the brain that create the awareness of pain. According to the theory, pain is an independent sensation with particular peripheral sensory receptors, which respond to damage by driving signals through the nervous system to centers in the brain. Other theories that came to light in the 1900s include the Pattern Theory, the Central Summation Theory, the Fourth Theory of Pain, and the Sensory Interaction Theory. A more recent theory is the Gate Control Theory. Pain stimulation is transmitted by small, slow fibers that enter the dorsal horn of the spinal cord. The theory states that there is a gate in the spinal cord that controls the flow of sensory information through the spinal cord. When there is a lot of activity in the pain fibers, the gate opens and its blocking action becomes less effective. When there is stimulation of the A-beta fibers, which carry stimuli from mild irritation such as lightly touching the skin (massage), the gate may be closed, inhibiting the perception of pain. In addition, messages that descend from the brain, such as those in anxiety states or extreme excitement, can affect the opening or closing of the gate. The Biopsychosocial Model of Pain suggests that pain involves not just physiological factors but also psychological and social factors; family and culture influence the perception of pain and the individual's response to it. Anatomy and physiology are key factors in understanding pain. A primary function of the central nervous system is to deliver information about actual or potential threats or injury.2 Stimulation of pain receptors (nociceptors) results in physiological pain, also referred to as receptor or nociceptor pain. The two major types of physiological pain are somatic pain and visceral pain. Somatic pain occurs with injury to the skin, joints, muscles, and ligaments, and with the inflammatory response; because it alerts the individual to the presence of disease or injury, it serves an important protective role. Pain related to stimulation of the peripheral nerves and the cranial nerves is also referred to as physiological pain. The brain tissue itself has no pain receptors. However, the dura mater, the outer lining of the brain, and its large arteries are equipped with pain receptors and can give rise to physiological pain. Innervation for these pain receptors comes from the trigeminal nerve, cranial nerve V.3 Visceral pain arises from the stimulation of pain receptors in the internal organs such as the heart, the intestines, the lungs, the liver, and the pancreas, and it is less specific than somatic pain.
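The gating arithmetic described in the Gate Control Theory paragraph above can be made concrete with a toy calculation. The sketch below is purely illustrative: the function name, the inhibition weight, and all activity values are invented, and it is an arithmetic caricature of the theory, not a physiological simulation.

```python
# Toy numerical sketch of the Gate Control Theory idea: C-fiber activity
# tends to open the gate, while A-beta activity (e.g., massage) and
# descending signals from the brain tend to close it. All numbers invented.

def gate_output(c_fiber: float, a_beta: float, descending_inhibition: float) -> float:
    """Return a non-negative 'pain signal' passed through the gate.

    c_fiber: activity in small, slow pain fibers (opens the gate)
    a_beta: activity in large touch fibers (closes the gate)
    descending_inhibition: brain-level modulation (closes the gate)
    """
    INHIBITION_WEIGHT = 0.8  # made-up weight for A-beta inhibition
    signal = c_fiber - INHIBITION_WEIGHT * a_beta - descending_inhibition
    return max(0.0, signal)

# Strong pain input with no counter-stimulation: the gate is open.
print(gate_output(c_fiber=5.0, a_beta=0.0, descending_inhibition=0.0))  # 5.0
# The same pain input while rubbing the area and staying calm: less gets through.
print(gate_output(c_fiber=5.0, a_beta=4.0, descending_inhibition=1.0))  # 0.8
```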
This vague pain may be due to the fact that the number of cells in the spinal cord receiving signals from the internal organs is smaller than the number receiving signals from superficial body locations. It is often difficult for the patient to pinpoint the exact location of the pain, and it is frequently experienced as referred pain coming from the body surface. For the most part, pain receptors in the viscera are chemoreceptors that respond to different types of chemicals, including those from the activation of the inflammatory process. Pain receptors in the viscera can also respond to stretching of the organs and organ ischemia.3 Primary afferent fibers are involved in the transmission of pain. A-delta and C fibers transmit noxious stimuli from the body's periphery to the dorsal horn of the spinal cord. The A-delta fibers have a small diameter, are lightly myelinated, and conduct relatively quickly. They transmit rapid, sharp pain and give precise information regarding the anatomical location of the pain. C fibers are small, unmyelinated fibers that conduct slowly, respond to multiple stimuli, and lead to dull, achy pain. A-delta and C fibers both react to mechanical stimulation and to temperature, both hot and cold. The skin surfaces and the joints in the body are well supplied with pain receptors, but receptors are not as abundant in the muscles.3 If the pain signal is strong enough, it is transmitted through the dorsal horn to the spinothalamic tract and the spinoreticular tract; these are the ascending tracts to the cerebral cortex, where the stimulus is recognized as pain. Head pain is processed by the trigeminal nucleus before it reaches the brain. The periaqueductal grey (PAG) is an area of the midbrain that plays a major role in processing pain, particularly in the modulation of pain signals that are transmitted back to the dorsal horn of the spinal cord and to the trigeminal nucleus. Animal studies have shown that electrical stimulation of the PAG can reduce pain. When the pain is identified in the cerebral cortex, signaling descends back to the periphery. Descending pathways for pain begin in many areas of the brain and travel to cells in the dorsal horn of the spinal cord and the cells in the trigeminal nucleus; they usually run parallel to the ascending pathways. It has been found that the vagus nerve provides an additional route for sensory and pain information to the brain, especially from the viscera and, to some degree, the head.3 Neurons are involved in the transmission of pain through the release of substances and neurotransmitters. The excitatory substances that contribute to pain include substance P, glutamate, prostaglandins, and nitric oxide. Inhibitory neurotransmitters inhibit, completely or partially, the transmission of pain. Common inhibitory neurotransmitters include glycine, serotonin, norepinephrine, acetylcholine, and gamma-aminobutyric acid (GABA). Medications modulate these neurotransmitters to help reduce pain. Neuropathic or pathological pain is the result of changes in the central nervous system, in either the brain or the spinal cord, and it is regarded as one of the most difficult forms of pain to treat successfully. An integrative approach that utilizes both pharmacological and non-pharmacological treatments provides the best outcomes in relieving pathological pain.3 Pain affects multiple aspects of life. Pain can lead to physiological changes and potentially to physical illness.
It has the potential to lead to cognitive changes, and many patients with chronic pain suffer from depression and anxiety. Studies suggest that at least 20% of those who suffer from chronic pain also have a serious mood disorder.5 Pain can change the way one thinks and acts, leading to behavioral changes. The risk of suicide for those in chronic pain is approximately twice that of the general population, and the greatest risk is found among those who suffer from severe chronic headaches.5 Pain also has the potential to affect an individual's social life. Chronic pain may limit the patient's desire to interact in social settings. Social consequences of unrelieved pain may include isolation, inability or reduced desire to go to work, and an overall reduced quality of life. Expression and reporting of pain can vary by culture. Some cultures have a more stoic attitude toward pain, while others may express more emotion in response to pain. The biopsychosocial model of pain care represented a major step forward in how people with pain are perceived and managed. This holistic approach to pain recognizes the importance of the psychological and social factors that play a role in the patient's subjective experience of pain. The biopsychosocial model takes into account cognitive, emotional, spiritual, and cultural issues that are unique to the individual and his or her journey with pain. Suffering related to pain is an individual experience. One of the core principles of the biopsychosocial model is to give the patient the right to participate in and direct their own care.2 The political arena can strongly influence how pain is handled in society. The approach to pain over the last 20 years has focused on eliminating and reducing pain at all costs. This attitude has contributed to the opioid epidemic and many problems for society. In recent years, legislators have been implementing laws to help ensure that pain is adequately assessed and treated, with extra caution placed on the prevention of medication abuse. It is important to recognize an individual's ability to cope with pain; since coping is a skill that is learned, healthcare providers must teach this essential skill to patients or refer them to an appropriate source for learning coping techniques.3 Many regulatory issues surround pain management, which will be optimized when regulatory barriers are limited. The Institute of Medicine (2011) called for a review of laws that have the potential to prevent optimal pain management. This is a challenging venture, as regulations and laws must be set up to manage pain optimally while also deterring the abuse, diversion, and illegal use of controlled substances.7 Society, including regulations set in the political arena, must change to help ensure that the negative effects of pain and of the treatment of pain do not lead to further problems in society. Substance abuse/misuse and drug diversion are major problems associated with pain management, and substance abuse issues are a real concern in the management of pain. Opioid misuse and abuse is a major public health problem affecting 34.2 million Americans over the age of 12.8 According to the Centers for Disease Control and Prevention, 46 people die each day in the United States from an overdose of prescription painkillers. In 2012, healthcare providers wrote 259 million prescriptions for painkillers. Twice as many painkiller prescriptions are written in the United States as in Canada.9 Misuse prevalence is variable.
Of individuals over the age of twelve, 4.6 percent reported non-medical opioid use within the last year. Abuse of prescription medications, especially opioids, is higher among those who serve in the armed forces than in the civilian population. In 2008, 11% of service members admitted abusing prescribed drugs, mostly opioids; multiple deployments and injuries sustained on active duty are cited as some of the causes. The number of prescriptions for pain relief medications written by military doctors quadrupled between 2001 and 2009. In 2008, the number of suicides among military personnel exceeded that of the civilian population, and in 2010 a report from the Army Suicide Prevention Task Force found that prescription drugs were implicated in close to one-third of military suicides in 2009. In an attempt to reverse the misuse of prescription drugs, the Army has instituted changes that limit the period of prescriptions for opioids to 6 months. Studies also indicate increasing use of opioids by veterans. In 2010, over 43,000 patients were treated in VA facilities for opioid abuse, and this does not take into account the number of veterans who do not seek treatment.10 Among 12th graders, 12.2 percent reported abusing opioids, and 7.9 percent reported past-year use.11 The proportion of people who sought treatment for non-heroin opioid substance abuse increased from 1.0 percent in 1995 to 8.7 percent in 2010.12 Research also shows that white individuals account for 88 percent of those who reported non-heroin opioid substance abuse, and the majority of these individuals lived in a rural setting: those living in rural settings accounted for 10.6 percent of such cases, and urban individuals for 4.0 percent of non-heroin opioid abusers who sought treatment.12 Of the overdose deaths from opioid analgesics, thirty percent also involved benzodiazepines. In 2007, the total cost of prescription opioid abuse to the US economy was placed at $55.7 billion: lost work productivity accounted for $25 billion of that amount, healthcare costs were around $25 billion, and criminal justice costs were close to $5 billion.13 Opioids have the potential to provide analgesia and improve function, but these benefits must be weighed against the potential risks, including misuse, addiction, physical dependence, tolerance, overdose, abuse by others, and drug-drug and drug-disease interactions. The Centers for Disease Control and Prevention (CDC) 2016 Guideline for Prescribing Opioids for Chronic Pain reiterates the need for adequate pain control and the challenges involved, particularly in the area of chronic pain control. It recommends that providers utilize the "full range of therapeutic options" in the treatment of chronic pain. The report goes on to state that it is hard to quantify the number of individuals who could benefit from long-term use of opioid medications. Looking at the serious consequences of opioid use, the report states that between 1999 and 2014 more than 165,000 people died from opioid overdoses. Using DSM-IV diagnostic criteria, in 2013 approximately 1.9 million individuals either abused or were dependent on opioid drugs. Research has identified practices that have contributed to the opioid overdose epidemic, the most important of which are prescribing high doses of opioids, overlapping opioid and benzodiazepine prescriptions, and the use of extended-release opioids for acute pain.
The CDC guideline makes recommendations in three areas of opioid use. When treating chronic pain, it recommends that non-pharmacologic therapies and non-opioid medications be the first-line treatment. The CDC recommends the use of immediate-release opioids at the lowest effective dose, rather than extended-release forms, for the treatment of chronic pain. Prior to starting opioid therapy for the treatment of chronic pain, clinicians are advised to establish goals with the patient that involve realistic outcomes for pain relief and improved activity. Opioids should only be continued if there is sufficient improvement in the patient's pain levels, and a positive enough impact on their quality of life, to outweigh the possible negative side effects of opioid use. After one to four weeks of opioid therapy, the provider needs to reassess the patient, and for continued opioid therapy, evaluations of the benefits and risks need to be done at least every three months. When opioids are discontinued, a reduction of 10% of the medication dose per week is recommended to alleviate the symptoms of opioid withdrawal (see the sketch below). Patients at increased risk with opioid use include those with sleep apnea, pregnant women, patients with renal or hepatic insufficiency, those older than 65 years, people with mental health disorders, patients with substance abuse disorders, and individuals with a previous nonfatal overdose. The CDC guideline also notes the importance of the clinician using State Prescription Drug Monitoring Program (PDMP) data to verify whether the patient is getting opioid medications or dangerous medication combinations that place them at high risk for overdose. In some states, the law requires the provider to review the PDMP before renewing each opioid prescription.14 Prescription drug misuse is the use of prescription medication in a manner or with an intent inconsistent with how it was prescribed. Prescription drug misuse includes using a medication to get high, selling or sharing it with others (diversion), overuse, having multiple prescribers, and concurrent use of alcohol or other illicit substances. Misuse is a necessary but not sufficient criterion for a substance use disorder. Susceptible individuals are at risk of misusing medications that stimulate the reward center of the brain, which may include opioid analgesics, stimulants, benzodiazepines, or tranquilizers. Drug abuse is the use of drugs in a way that is not medically or socially appropriate. Controlled substances may lead to dependence, either physical or psychological. Physical dependence is present when withdrawal symptoms such as anxiety, tachycardia, hypertension, diaphoresis, a volatile mood, or dysphoria occur after rapid discontinuation of the substance. Psychological dependence is the perceived need for the substance; it makes the person feel as though they cannot function without it. Psychological dependence often continues after physical dependence wears off, typically lasts much longer, and is a strong contributing factor in relapse. Addiction is psychological dependence plus extreme behavior patterns associated with the drug. At this point, there is typically a loss of control regarding drug use, and the drug is continued despite serious medical and/or social consequences. Tolerance, the need for increasing doses of the medication to produce an equivalent effect, is typically seen by the time addiction is present. Physical dependence can occur without addiction.
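The CDC's tapering recommendation noted above (a 10% dose reduction per week when discontinuing opioids) can be turned into a simple schedule. The sketch below is a minimal illustration, not a clinical protocol: it assumes the 10% applies to the starting dose (a linear taper rather than an exponential one), and the function name, stopping threshold, and rounding are invented for the example.

```python
# Minimal sketch of a linear opioid taper: reduce by 10% of the starting
# dose each week. The stop-below threshold and rounding are illustrative
# assumptions, not clinical guidance.

def linear_taper(start_mg: float, weekly_fraction: float = 0.10,
                 stop_below_mg: float = 5.0) -> list[float]:
    """Return the weekly doses of a linear taper."""
    step = start_mg * weekly_fraction  # 10% of the starting dose
    schedule = []
    dose = start_mg
    while dose >= stop_below_mg:
        schedule.append(round(dose, 1))
        dose -= step
    return schedule

# Example: tapering from 60 mg/day in 6 mg weekly steps.
print(linear_taper(60))
# [60, 54.0, 48.0, 42.0, 36.0, 30.0, 24.0, 18.0, 12.0, 6.0]
```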
Individuals who take chronic pain medication may be dependent on the medication but not addicted. Addiction is a major concern in those taking opioids. When opioids are prescribed, it is important to determine who is likely to engage in aberrant drug-related behaviors. Aberrant use may occur in those with major depression, psychotropic medication use, younger age, or a family or personal history of drug or alcohol misuse.15 Those at high risk for addiction should likely be managed in concert with a specialist.16 Aberrant drug-related behaviors may reflect abuse, misuse, or addiction. The reality is that many patients do not report pain or minimize the severity of the pain they are experiencing. There are several reasons for this, including poor communication between patients and healthcare providers. Those admitted to an acute care facility experience increased levels of anxiety, see several different providers, are not quite sure to whom they should report their pain, and feel a sense of helplessness at finding themselves in a hospital gown and a strange environment. Nurses have a vital role in these situations as the patient's advocate and navigator through the complexity of the system to ensure that pain is properly addressed. The Joint Commission (JC) recognizes that pain control is an important part of quality health care. It acknowledges that pain is considered the fifth vital sign and should be assessed with other vital signs. According to the JC, patients have the right to assessment and treatment of pain. Nevertheless, pain is not well managed, and several factors play into the mishandling of pain. Most important is the reality that many doctors know very little about pain and pain management; it is not a topic that is dealt with extensively in medical schools or in nursing education programs. This lack of education means that patients in pain see health care providers who are ill-equipped to understand and treat them. Added to this is the fact that very little is spent on pain research: in 2012, only around 1% of the National Institutes of Health (NIH) budget was devoted to pain research.5 Pain and its management are surrounded by multiple ethical issues. Healthcare providers should attempt to minimize pain and suffering while maintaining a balance between adequate pain management and minimizing harm from the treatment of pain. Ethical issues also surround end-of-life care. The management of pain at the end of life is a moral duty for the provider caring for a terminal patient. While opioid use may suppress respirations, and may even hasten death, the treatment of intractable pain is an important part of care as death nears; the goal of pain management is to relieve suffering, not accelerate death. The use of palliative sedation may be considered to manage refractory pain at the end of life. Basic pain assessment is simple and must be performed regularly. Action needs to be planned on the basis of the patient's report of pain. It makes no difference whether patients are in the hospital, a long-term care facility, a behavioral health facility, an outpatient clinic, or being cared for by a home care agency: no matter where patients are, the intensity of pain should be assessed and documented. Pain threshold refers to the minimal level at which an individual senses pain as a harmful stimulus.
It is the level at which the patient first states that what they are feeling is painful and, as such, varies greatly from patient to patient. Pain tolerance refers to the degree of pain a person can tolerate before it becomes unbearable for them. Pain tolerance not only differs from patient to patient but can also vary for the same individual depending on numerous factors, including time, setting, and stimulus. Healthcare providers must resist the temptation to make comparisons between the pain threshold and pain tolerance of different patients, especially those who have had similar procedures. The severity of pain is often measured on scales. Pain scales are meant to compare the intensity of the patient's pain at different points in time, not to compare one person's pain to another's. The use of pain scales assists the healthcare provider in determining the effectiveness of pain treatment. The best scales are those that are brief, valid, require minimal training to use, and use both behavioral and descriptive measures of pain. When selecting a scale, it is important to consider which type of scale would work best for the individual patient. Patient education regarding pain assessment is also important. For example, prior to a surgical procedure, the nurse needs to discuss with the patient how their pain will be assessed after the surgery, demonstrate how the Numerical Rating Scale and the Wong-Baker FACES Pain Rating Scale work, and allow the patient to choose which scale will work better for them.2 A "0 to 10" numerical scale is the most widely used measure of pain intensity. When using the Numerical Rating Scale (NRS), patients are asked to rate their pain from 0 to 10, with "0" equaling no pain and "10" equaling the worst possible pain they can imagine. Another scale allows the patient to rate their pain as "no pain, mild pain, moderate pain, severe pain, or unbearable pain." Pain maps can be used for those who have a difficult time speaking. A pain map shows a front and rear view of the body on a piece of paper; the patient draws in the location of the pain and may rate its severity. Since we have no instrument to objectively measure pain intensity in the way that, for example, blood pressure is measured by a sphygmomanometer, the only valid measure of pain is the patient's self-report (a subjective measure). Sometimes healthcare providers may believe that they are the best judges of a person's pain; however, many studies demonstrate that healthcare providers either over- or underestimate a patient's pain. Besides current pain intensity, the complete pain assessment also covers pain quality, likely causes, and appropriate analgesic choices. [Table: examples pairing pain qualities and causes with analgesic options, e.g., nerve involvement with co-analgesics; tumors occupying the liver, pancreas, or spleen, abdominal or thoracic surgery, or ascites with non-opioids; dull, achy, throbbing, sore pain with bone metastases, musculoskeletal injury, mucositis, or skin lesions.] It is also important to document the impact pain has on quality of life and to ask key questions about how pain affects the patient's daily functioning. Understanding how the patient's pain was treated in the past will help the clinician treat the current pain. A review of past medical records will help the pain management team evaluate the condition. Reviewing all previous history, diagnostic testing, treatment options, and the efficacy of those treatment options will help the team make an accurate diagnosis and manage pain appropriately.
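The scales described above lend themselves to simple documentation logic. The following minimal sketch records and labels Numerical Rating Scale scores; the function names are invented, and the verbal cut-points (1-3 mild, 4-6 moderate, 7-10 severe) are a common convention assumed here rather than taken from this course.

```python
# Minimal sketch of documenting NRS scores on the 0-10 scale described above.
# Category cut-points are an assumed convention, not from this course.

def validate_nrs(score: int) -> int:
    """Accept only whole-number scores on the 0-10 scale."""
    if not 0 <= score <= 10:
        raise ValueError("NRS score must be between 0 and 10")
    return score

def describe_nrs(score: int) -> str:
    score = validate_nrs(score)
    if score == 0:
        return "no pain"
    if score <= 3:
        return "mild pain"
    if score <= 6:
        return "moderate pain"
    return "severe pain"

# Comparing the same patient's scores before and after an intervention --
# the scale tracks one person's pain over time, not one person against another.
before, after = 8, 4
print(f"before: {before} ({describe_nrs(before)}), after: {after} ({describe_nrs(after)})")
```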
Certain treatment modalities, including specific medications, are often more effective in one individual than in another because of individual genetic variations. Having a full understanding of all medical and surgical conditions can be very helpful in assuring proper pain management. Chronic disease may have a strong impact on the management of pain. Chronic kidney disease, for example, can affect the way drugs are excreted, and the use of non-steroidal anti-inflammatory medications can lead to kidney failure in those with chronic kidney disease. A mental health evaluation can help the clinician understand the best way to manage pain. Mood or cognitive disorders can affect the way pain is managed; if mental illness is not appropriately identified and managed, chronic pain will likely never be adequately managed. A history of drug abuse is an important factor to ascertain, as this can profoundly affect how chronic pain is treated. Personal characteristics may also have a strong effect on pain management; factors that influence pain include race, age, culture, religion, sex, and language. Review of the patient's perception of the pain is important. Why does the patient believe they have persistent pain? Does the patient feel an adequate workup was done on their condition? What does the patient expect from treatment, and what are the patient's goals? In addition, psychological factors that contribute to the pain should be assessed; this will help clarify the patient's expectations, and it is important for patients to have realistic expectations about pain management. A complete physical exam is an essential part of the management of pain. It is important to have a baseline examination so that subsequent evaluations allow the health care team to determine progress in pain management and functional capacity. The physical exam should include a detailed neurological exam, including the patient's ability to ambulate. While exams may include general observations, they may also be focused according to the presenting condition. Observing hygiene, posture, dress, and appearance is important; those with severe pain will often have poor hygiene and unkempt dress and will appear to be in pain. Observe for any splinting, which may suggest a painful part of the body. Assessing the skin and joints for redness, swelling, or deformities helps determine the location and etiology of pain. An abdominal exam for any tenderness or distention should be done. In addition, checking joints for range of motion is an important part of the physical exam in chronic pain. The exam should include an evaluation of functional capacity, strength, endurance, and any pain-related limitations. Ongoing monitoring of the efficacy and effectiveness of the implemented plan is important. Using similar assessment tools over time, the healthcare provider can document the effectiveness of the pain management plan, including any improvement in quality of life. Diagnostic testing can be helpful in evaluating painful conditions, but it is important to realize that an abnormal diagnostic test does not confirm the source of the pain. Blood tests can be helpful in some conditions to determine or monitor certain causes of pain. For example, an elevated C-reactive protein level or erythrocyte sedimentation rate may be seen in those with polymyalgia rheumatica, infection, or rheumatoid arthritis (all conditions that may cause pain). Imaging may be necessary in some situations of chronic pain.
X-rays, computed tomography, and magnetic resonance imaging can help define the etiology of the pain. Caution must be used with imaging, as many abnormalities seen on imaging tests are not the source of the pain. An electromyogram (EMG) or nerve conduction studies are often done to assess the cause of pain. The EMG measures the electrical activity of muscle and can help find damaged muscle or nerves and neuromuscular abnormalities associated with conditions such as a herniated disc or myasthenia gravis. The nerve conduction study measures the ability of the nerves to send electrical signals and can help diagnose carpal tunnel syndrome or other neuropathies. The goal of pain management is not necessarily complete pain relief. Goals may include a reduction in the amount of pain, improved quality of life, improved physical and psychological functioning, improved ability to work, improved ability to function in society, and a reduction in health care utilization. A pain management plan is more than just a prescription for pain medication. In addition to pharmacotherapy, it should include psychological and physical modalities to manage pain, and it should be modified when interventions are not effective. Successful treatment of chronic pain is more than a prescription for medication; it requires the input of an interdisciplinary healthcare team and a holistic approach to care, with the goal of improving the patient's overall quality of life. As nurses, one of our most important roles is to listen to our patients, and this is especially important for patients in pain, particularly chronic pain. These patients often live with intense fear, anxiety, uncertainty about the future, and frequently social isolation and loneliness. The opportunity to tell one's story to someone who is consciously listening is therapeutic in itself.2 The patient should be provided education regarding the plan, including information about the medications prescribed, other treatment options, and methods of contacting the pain management team. When developing a treatment plan, there are many considerations, including the type of pain. In addition, the effect the pain has on lifestyle, including the psychological, social, and biological components of life, should be considered. Many factors affect the success of the treatment plan. Issues related to the patient, such as their ability to understand and apply the management plan, will help determine its success, and the patient's willingness to implement the whole plan can have a profound effect. If a patient is willing to take a pill but is not willing to work on non-pharmacologic interventions (such as physical therapy or weight loss), then the plan will lose its effectiveness. Referral to a pain management specialist may be indicated for those who have debilitating symptoms, those who need increased doses of pain medications, those who are non-responsive to treatments, or those with symptoms at multiple sites. Caregiver or healthcare provider issues often affect the pain management plan. Many caregivers and healthcare providers do not have an accurate comprehension of the patient's pain and may hold false beliefs regarding pain management. Caregivers and healthcare providers may be inhibited by fear of side effects from medications or concerns about drug addiction and so may withhold medication from those who are in pain. In addition, caregivers/healthcare providers and patients may have discordant goals.
Controlled substances should be prescribed for a legitimate medical purpose with careful consideration of the patient's safety, the goals of therapy, and the efficacy of the treatment. Treatment of pain should include pharmacotherapy but also physical and psychological therapies. Numerous non-pharmacologic therapies, combining physical and psychological techniques, are used in the management of pain. For those with pain, a trial of physical therapy and/or occupational therapy can be helpful. With the help of a physical therapist, exercises targeting a specific type of pathology can aid in the management of pain. Occupational therapists can be helpful in recommending devices that assist in enhancing activities of daily living. With chronic pain, there is a tendency not to move, which leads to deconditioning, and the resulting incapacity produces greater pain with any form of movement; maintaining muscle mass prevents this downward spiral. It has also been shown that the more an individual exercises, the lower the probability of developing back pain. For those who have lower back pain, physical activity has been shown to produce a considerable improvement in overall health. Exercises, especially those performed in water, are beneficial for patients with arthritis pain, and aerobic exercise has proven to be very effective in decreasing pain connected with fibromyalgia.5 Yoga and Pilates have also become popular alternative treatments, especially for back pain; however, some yoga poses, especially those that involve overstretching the neck, can be a potential cause of injury. Tai Chi, another Chinese practice involving slow, gentle movements along with deep breathing and relaxation, is having positive effects on pain relief for several conditions, including fibromyalgia, arthritis, and low back pain.5 Massage is soothing and relaxing, both physically and mentally. Massage may decrease pain by relaxing muscle tension and increasing capillary circulation, thereby improving general circulation. Vibration is a form of electric massage. When vibration is applied lightly, it may have a soothing effect similar to massage; applied with moderate pressure, it may relieve pain by causing numbness, paresthesia, and/or anesthesia of the area stimulated. Heat and cold therapies can assist in the management of pain. Heat reduces inflammation and promotes relaxation. It can be applied in the form of hot tub baths, heating pads, or heat packs. Cold is often more effective in relieving pain than heat. The application of cold reduces muscle spasm secondary to underlying skeletal muscle injury, joint pathology, or nerve root irritation. Methods of cold application include ice massage, ice bags, and gel packs. Alternating heat and cold may be more effective than the use of either one alone. Multiple psychological techniques can aid in reducing pain. The basis for these methods is that thought influences feelings, and if thoughts (and behaviors) can be changed, so can feelings and even sensations, such as pain. Cognitive-behavioral methods require the patient's active participation. Cognitive behavioral therapy (CBT) has proven to be an effective coping tool for those with chronic pain. CBT emphasizes the present moment: the individual becomes aware of their thoughts, feelings, and behaviors and journals these perceptions.
CBT helps decrease the emotional distress associated with chronic pain by focusing on how one perceives pain and adjusts to it. CBT has been shown to decrease pain in chronic fatigue syndrome, fibromyalgia, irritable bowel syndrome, and back pain. It has also been used successfully as a means of preventing headache.5 Relaxation is a state of relative freedom from both anxiety and skeletal muscle tension, a quieting or calming of the mind and muscles. Although relaxation is a learned technique, it can be achieved quickly by a motivated patient. Imagery/visualization involves mentally creating a picture using one's imagination. This may be a focus on a close person, a place of enjoyment, a past event, or anything thought to bring pleasure. Because the mind is occupied, the focus on pain is reduced. Distraction from pain is the focusing of attention on stimuli other than the pain sensation. The stimuli focused upon can be auditory, visual, or tactile-kinesthetic (hearing, seeing, touching, and moving). By focusing attention and concentration on stimuli other than pain, pain is placed on the periphery of awareness. Distraction does not make the pain go away, nor does the effectiveness of distraction indicate the absence of pain. Music and humor are extremely effective means of distraction. Transcutaneous Electrical Nerve Stimulation (TENS) delivers low-voltage electricity to the body via electrodes placed on the skin. TENS may help with acute or chronic pain. Electrical stimulation of sensory nerves helps block pain signals going to the brain. TENS is not to be used for patients with pacemakers, electrical implants, cardiac arrhythmias, or circulation problems, or during pregnancy.2 Biofeedback is a technique that harnesses the mind's power to allow the patient to become more aware of the sensations in the body. The exact mechanism by which it works is unclear, but it promotes relaxation and helps reduce pain.18 Acupuncture is a neurostimulation technique that treats pain by the insertion of small, solid needles into the skin at varying depths. Various theories offer explanations of how acupuncture works. The Chinese theory of acupuncture is that it releases blocked energy in the body. In Chinese medicine, this energy source is known as Qi, and its ability to flow freely through the body is related to overall well-being.5 Music therapy may also be used to treat pain. Music therapy is the clinical and evidence-based use of music interventions to accomplish individualized goals within a therapeutic relationship by a credentialed professional who has completed an approved music therapy program. Research in music therapy supports its effectiveness in a wide variety of health care and educational settings. Music therapists use music to facilitate changes that are non-musical in nature. Individuals doing music therapy listen to or create music under the guidance of a specially educated and certified music therapy professional. It is believed that music, like relaxation and guided imagery, can strengthen the right side of the brain, which controls the body's healing processes. The theory of music therapy's effect on chronic pain deals with how pain signals travel through the body. When the brain senses injury to the body, pain signals begin in the somatosensory cortex and the hypothalamus and work their way through the "pain pathway," ultimately sending signals that provide pain relief.
There are also signals that stimulate the release of neurotransmitters such as endorphins, dynorphins, and enkephalins. Music helps in pain reduction by activating these sensory pathways. Different surgical interventions or procedures can also be used in the pain management plan. Procedures may include injections, spinal cord stimulation, deep brain stimulation, neural ablative techniques, and surgical interventions. These are potential options for those in whom other methods have not controlled the pain. Pain can be seen as having two parts: first, the perception of pain, which is related to the strength and characteristics of the pain; and second, the effect the pain has on the individual. What is its emotional impact, and how is it affecting the person's quality of life? When healthcare providers perform a pain assessment, it is the strength or intensity of the pain that is measured, which may not reflect how the pain is affecting the life of the person. What has been determined is that how pain affects an individual is intimately associated with their ability to cope with pain.3 Coping can be defined as what an individual thinks and does in a situation that causes them stress, and it is greatly affected by the resources available to the person to manage the event.2 Coping is a crucial part of pain management, especially for those with chronic pain. Nurses need to assess patients' coping patterns as they relate to pain, to be able to discuss basic coping skills with them, and to be aware of resources to which patients can be referred to learn positive coping techniques. Catastrophizing can be seen as the opposite of coping and is the belief that a situation will continue to get worse. The individual experiences heightened levels of worry and fear that have been shown to intensify the amount of pain the person feels (Moller 2014). Rather than looking for solutions to the problem, the individual tends to turn away from it and has a sense of hopelessness about their situation. Nurses need to watch for catastrophizing when assessing a patient's pain by listening carefully to how the patient describes the pain and how they perceive its effect on their life now and in the future. Undertreatment of chronic pain remains a persistent problem; an estimated 30% of those who suffer from chronic pain receive less than adequate treatment.5 Pain medication decreases (modulates) pain by altering transmission at various points of the pain pathways.3 Analgesic agents are often given orally, as this is convenient and allows a relatively steady blood concentration of the drug. Pain medication may be administered on an as-needed basis for episodic pain, or it may be given routinely for chronic pain. Routine, around-the-clock medication sustains a steady state in the blood and offers better pain relief for those with persistent pain. When deciding on a medication, side effects must be taken into consideration. Classes of medications include non-opioid analgesic agents, antidepressants, muscle relaxants, antiepileptic medications, topical agents, and opioids. Some patients get effective relief from one medication, while others get better pain relief from a combination of medications that work on different pathways. Unfortunately, research on combination medication in the management of pain is sparse. Considering all co-morbidities is an important step in the management of pain.
When a patient is afflicted with both chronic pain and depression, some medications may effectively manage both conditions (for example, duloxetine is approved to treat chronic musculoskeletal pain, including discomfort from osteoarthritis and chronic lower back pain, in addition to depression). It is also important to establish the pathophysiology of the pain syndrome, evaluate the medication list, and consider the side effects of the medications being prescribed. The clinician should distinguish between neuropathic pain and nociceptive pain. The etiology of neuropathic pain must be established, and if the etiology is reversible, the underlying problem should be managed. For example, if a medication (e.g., metronidazole, nitrofurantoin, isoniazid, or many cancer agents) is the cause of the neuropathy, that medication should be stopped. Medications used in the treatment of neuropathic pain include calcium channel alpha-2-delta ligands (gabapentin and pregabalin), tricyclic antidepressants, serotonin-norepinephrine reuptake inhibitors (SNRIs), the lidocaine patch, and narcotic analgesics. Nociceptive pain is typically treated with non-narcotic and opioid analgesics. Common causes of nociceptive pain include arthritis and chronic low back pain. Acetaminophen is often used as a first-line agent in the management of nociceptive pain. Acetaminophen has become the chief cause of acute liver failure; according to government statistics, there are close to 30,000 hospital admissions annually associated with acetaminophen overdose. Patients must be warned that alcohol and acetaminophen are a particularly dangerous mixture, and alcohol consumption must be avoided when taking this medication. In January 2011, the Food and Drug Administration (FDA) asked drug manufacturers to limit the amount of acetaminophen in combination products to 325 milligrams per dose. The FDA also required that labels carry a 'black-box' warning highlighting the fact that acetaminophen can cause severe liver damage.5 Acetaminophen is not an anti-inflammatory agent but is a very common over-the-counter medication used for the management of pain. Acetaminophen is commonly administered with opioid medications to reduce the amount of opioid needed to manage the pain. There is some evidence of renal toxicity with long-term use of high-dose acetaminophen. In adults, acetaminophen is dosed at 325 to 650 mg every four hours or 500 to 1000 mg every six hours, not to exceed 3000 to 4000 mg a day. In the pediatric population, acetaminophen is dosed at 10-15 mg/kg/dose every 4-6 hours with a maximum of 75 mg/kg/day, but no more than 4000 mg a day (see the sketch below). The dose should be reduced in those with hepatic insufficiency or alcohol abuse. The absolute contraindication to acetaminophen is liver failure; relative contraindications include chronic alcohol abuse and hepatic insufficiency. Those who are on a statin cholesterol medication may need a lower dose of acetaminophen. Before going to a stronger pain medication, it is important that clinicians confirm that acetaminophen is being given at the proper dose; up to 1000 mg per dose (in adults) may be necessary to provide relief. NSAIDs are used as alternatives to acetaminophen and are indicated for mild to moderate pain, while some are indicated for severe pain. Like acetaminophen, they act synergistically with opioids. Because they are anti-inflammatory agents, they are often used for arthritis, strains, sprains, bursitis, and tendonitis.
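The weight-based pediatric dosing above (10-15 mg/kg/dose every 4-6 hours, capped at 75 mg/kg/day and 4000 mg/day) is straightforward arithmetic, sketched below. The function name and output structure are invented for illustration; this is a restatement of the numbers in the text, not dosing advice.

```python
# Minimal sketch of the weight-based pediatric acetaminophen numbers stated
# in the text: 10-15 mg/kg/dose every 4-6 hours, with a daily maximum of
# 75 mg/kg/day and no more than 4000 mg/day. Illustration only.

def pediatric_acetaminophen(weight_kg: float) -> dict:
    per_dose_low = 10 * weight_kg          # mg, low end of 10-15 mg/kg/dose
    per_dose_high = 15 * weight_kg         # mg, high end
    daily_cap = min(75 * weight_kg, 4000)  # mg/day, whichever cap is lower
    return {
        "per_dose_mg": (per_dose_low, per_dose_high),
        "interval_hours": (4, 6),
        "daily_max_mg": daily_cap,
    }

# Example: a 20 kg child -> 200-300 mg per dose, at most 1500 mg/day.
print(pediatric_acetaminophen(20))
# For a 60 kg adolescent, the 4000 mg/day ceiling, not 75 mg/kg, is binding:
print(pediatric_acetaminophen(60))  # daily_max_mg == 4000
```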
The most frequently used NSAIDs are acetylsalicylic acid (aspirin), ibuprofen (Advil, Motrin), naproxen (Aleve), and diclofenac (Voltaren). NSAIDs are believed to work by inhibiting the enzymes cyclooxygenase-1 (COX-1) and cyclooxygenase-2 (COX-2), which are involved in the synthesis of the prostaglandins that mediate inflammatory responses and also cause pain. COX-1 is involved in the protection of the stomach lining, and one of the most frequently cited side effects of NSAIDs is stomach bleeding.3 NSAIDs are associated with more side effects than acetaminophen and are potentially more problematic, especially in older adults. In older adults, the American Geriatrics Society guidelines recommend that persistent pain due to osteoarthritis not be managed primarily with non-steroidal anti-inflammatory agents. The use of topical NSAIDs is a good option for those with localized pain. Absolute contraindications to NSAIDs include an active peptic ulcer, chronic kidney disease, and heart failure. Relative contraindications include a history of peptic ulcer disease, Helicobacter pylori infection, hypertension, and concomitant use of selective serotonin reuptake inhibitors or corticosteroids. Other side effects of NSAIDs include renal injury, adverse cardiovascular effects, headaches, constipation, and mental status changes. Gastrointestinal effects may include gastric ulceration and dyspepsia; taking the medication with food or antacids may reduce the risk of dyspepsia. Those at high risk of gastric ulceration (older age, corticosteroid use, bleeding problems, or a history of gastric ulceration) should likely not use NSAIDs. The use of a proton pump inhibitor reduces the risk of gastric ulceration with NSAID use. NSAIDs have the potential to interact with many antihypertensive medications, aspirin, selective serotonin reuptake inhibitors, corticosteroids, and warfarin. NSAIDs also have the potential to cause nephrotoxicity: they inhibit prostaglandin synthesis, which leads to vasoconstriction of the afferent arteriole in the kidney, resulting in a reduction in the glomerular filtration rate. NSAIDs should therefore be used cautiously in those with renal impairment. NSAIDs have the potential to lead to cardiovascular complications and have been implicated in increasing the risk of myocardial infarction, especially in patients who take high doses over a prolonged period of time. For those at high cardiovascular risk, the use of NSAIDs should be limited. They should also be avoided in those with thrombocytopenia (low platelet count), and patients receiving warfarin or heparin should not receive NSAIDs, as NSAIDs have been shown to impede platelet aggregation, which can increase the risk of bleeding.3

Antidepressant Use in Pain Management

Antidepressant medications are effective for multiple types of chronic pain. They have shown effectiveness in neuropathic pain, fibromyalgia, and pain associated with depression. This section looks at some of the antidepressants used in the management of pain. Tricyclic antidepressants (TCAs) modify pain by inhibiting the reuptake of norepinephrine and serotonin and by blocking sodium channels as well as adrenergic, cholinergic, and histaminergic receptors. Medications in this class include nortriptyline, desipramine, amitriptyline, and imipramine. Nortriptyline and desipramine (secondary amine tricyclic antidepressants) are the preferred agents in this class, as they are associated with a better side effect profile.
These agents are often used in the management of neuropathic pain but can also be used as adjuvants in chronic pain management. Tricyclic antidepressants need to be used cautiously in older adults, as they have many side effects (constipation, dry mouth, mental status changes, blurred vision, urinary retention, blood pressure changes, tachycardia, and heart block). They should be used very cautiously, or not at all, in those with cardiac or electrocardiographic abnormalities. The analgesic effect is typically noticed sooner, and at a lower dose, than when treating depression. Some patients will find that the side effects diminish as the body adapts to the medication. In adults, most TCAs are started at 10 mg per day and then titrated up to 75 mg per day. Older individuals rarely tolerate doses of more than 75-100 mg per day. It may take up to 8 weeks before analgesia is appreciated, but pain relief may be noticed as soon as one week. Serotonin-norepinephrine reuptake inhibitors are used for neuropathic pain but can be used for other types of pain. Duloxetine (Cymbalta) is indicated for diabetic neuropathy and painful chronic musculoskeletal conditions such as osteoarthritis and chronic low back pain. It is also approved for fibromyalgia. Common side effects include insomnia, drowsiness, dry mouth, fatigue, nausea, and dizziness. It should not be used in those with severe renal or hepatic insufficiency. When stopped, it should be tapered slowly because of withdrawal symptoms. Venlafaxine (Effexor), another SNRI, is sometimes used for neuropathic pain, although this is an unlabeled use. Venlafaxine may lead to increased blood pressure. When the medication is stopped, it should be tapered slowly to minimize withdrawal symptoms. Gabapentin is approved in adults for post-herpetic neuralgia at up to 3600 mg per day in divided doses. Dosage adjustment is needed in those with renal disease. It comes in an extended-release form called Gralise. Gabapentin is often used off-label for other neuropathic conditions, including diabetic neuropathy, generalized neuropathic pain, anxiety, and post-operative pain. Pregabalin (Lyrica) can be used in adults for fibromyalgia, neuropathic pain related to diabetes, neuropathic pain in those with spinal cord injury, and post-herpetic neuralgia. Both pregabalin and duloxetine have been given regulatory approval for the management of neuropathic diabetic pain in the United States, Canada, and Europe.19 Topical lidocaine is used as first-line therapy for post-herpetic neuralgia. It must be applied to intact skin, and up to three patches may be applied for no more than 12 hours in a 24-hour period. Muscle relaxants can be used in the management of acute and chronic pain. Cyclobenzaprine (Flexeril) was initially classified as a tricyclic antidepressant but was later reclassified as a muscle relaxant. Its side effects are similar to those of the TCAs, including sedation, dry mouth, constipation, urinary retention, and mental status changes. Carisoprodol (Soma) is another commonly used muscle relaxant that has been increasingly linked to dependence; because of these concerns, it is now less commonly used. The most common side effect of all muscle relaxants is sedation. In recent times, opioid therapy has become more commonly used; in the past, it was reserved for severe acute pain and cancer pain.
Approximately 8 million Americans with chronic pain are being treated with opioids.5 A recent position paper from the American Academy of Neurology suggested that there is evidence for good short-term pain relief with opioids, but that no good evidence exists for continued pain relief or improved function over extended periods of time without serious risk of dependence, overdose, or addiction.20 Opioids function by activating opioid receptors located in the spinal cord and brain. The majority of the pain relief from opioids results from their actions on the cells in the PAG and the descending pain pathways. Opioid medications are associated with multiple side effects, including constipation, nausea, vomiting, pruritus, abdominal cramping, sedation, and mental status changes. Multiple interventions are available to reduce side effects. Constipation is a frequent issue in those who use opioids. Risk factors for constipation include older age, intra-abdominal pathology, and a low-fiber diet. Those on opioids should be encouraged to increase fiber intake, drink plenty of fluids, and exercise. Stool softeners (e.g., docusate sodium) and stimulants (e.g., bisacodyl) may be needed to manage constipation. An osmotic laxative such as polyethylene glycol or lactulose may also be considered and may be added to stool softeners/stimulants for resistant constipation. Antiemetic medication can help treat nausea, and antihistamines can treat pruritus. Opioids are associated with somnolence and other mental status changes; patients do develop tolerance to these symptoms over weeks. Reducing the dose may lessen the mental status changes, and an adjunctive medication may be added to the lower opioid dose to help manage the pain. Rarely, a stimulant can be used to manage the sedation due to opioid use. Respiratory depression may occur, but it is uncommon when the medication is used carefully. Starting low and titrating the dose slowly will reduce the risk of respiratory depression. Problems arise with rapid titration, with the addition of another drug that may suppress the respiratory drive (a benzodiazepine, alcohol, or a barbiturate), or when the patient overdoses. Sedation precedes respiratory depression, so when starting a patient on opioid therapy, encourage them to take the first dose in the office, where they can be monitored, or in the presence of a responsible adult who can help monitor them. The level of consciousness should be assessed 30-60 minutes after the opioid is given. The next dose should be held, and the prescriber contacted immediately, if the patient has a reduced level of consciousness, hypoxia, or a respiratory rate of less than 10 breaths per minute (see the sketch below).21 Tolerance and addiction are two serious concerns with opioid use. Tolerance refers to the fact that, over a period of time, the amount of the drug taken must be increased to achieve the same amount of pain relief. Tolerance has become a problem with the long-term use of opioids and results from desensitization and down-regulation of opioid receptors in the body. Withdrawal symptoms when the drug is stopped reflect physical dependence; addiction, as defined earlier, involves psychological dependence and loss of control over drug use. Different opioids carry different levels of addiction risk; morphine, for example, a naturally occurring opioid, is highly addictive and produces a sense of euphoria. Some synthetic opioids produce little or no euphoria and may carry less risk of causing addiction.
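The monitoring rule above (hold the next dose and contact the prescriber for reduced consciousness, hypoxia, or a respiratory rate under 10 breaths per minute) maps directly onto a simple check, sketched below. The data structure and names are invented, and the SpO2 threshold mentioned in a comment is an assumption rather than something stated in the text.

```python
# Minimal sketch of the hold-and-contact-prescriber rule above. The class
# and function names are invented; the hypoxia criterion in the example
# (e.g., SpO2 < 90%) is an assumption for illustration.

from dataclasses import dataclass

@dataclass
class OpioidCheck:
    reduced_consciousness: bool
    hypoxia: bool
    respiratory_rate: int  # breaths per minute

def hold_next_dose(check: OpioidCheck) -> bool:
    """Return True if any hold-and-contact-prescriber criterion is met."""
    return (check.reduced_consciousness
            or check.hypoxia
            or check.respiratory_rate < 10)

# Assessed 30-60 minutes after the dose, per the text:
assessment = OpioidCheck(reduced_consciousness=False,
                         hypoxia=False,        # e.g., SpO2 < 90% on room air
                         respiratory_rate=8)
print(hold_next_dose(assessment))  # True: respiratory rate below 10
```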
Use of opioids over a prolonged period of time can have negative consequences for pain control: central sensitization caused by opioids can result in increased pain sensitivity, known as hyperalgesia.3 The most serious risk linked with opioid use is overdose.5 A death from overdosing on semi-synthetic opioids occurs every 19 minutes in this country; after car accidents, such overdoses are the second leading cause of accidental death. While there are many opioids, morphine is considered by many to be the standard comparator for other drugs. Morphine can be given orally, rectally, intravenously, subcutaneously, or intramuscularly. Morphine is used for moderate to severe acute pain and for serious chronic pain, and it comes in multiple formulations. For acute pain, it is dosed at 10-30 mg every 4 hours for those who are opioid naïve. It is available as a tablet, an oral solution, a suppository, and a parenteral solution. The immediate-release tablet is dosed at 15-30 mg every 4 hours as needed, and the oral solution at 10-20 mg every 4 hours as needed. It can also be given rectally, often dosed at 10-20 mg every 4 hours as needed. Morphine also comes in controlled-release, sustained-release, and extended-release forms; longer-acting formulations include Avinza, Kadian, and MS Contin. Side effects of morphine are similar to those of other opioid analgesics and include dry mouth, constipation, bradycardia, hypotension, nausea, drowsiness, dizziness, mental status changes, fever, itching, weakness, hypoxia, and urinary retention. Morphine should not be used in those with a hypersensitivity to morphine, toxin-mediated diarrheal disease, severe/acute asthma, paralytic ileus, or severe respiratory depression. The extended-release form should not be used in those with GI obstruction, and the extended-release forms of morphine are not interchangeable; changing from one to another should be done only by those experienced in doing so. Extreme caution should be used with highly concentrated solutions so that overdoses do not occur. Morphine has many common drug interactions. It is pregnancy category C; it does enter breast milk and is not recommended for those who are breastfeeding. Fentanyl is a very strong synthetic opioid, approximately 100 times more potent than morphine. It can be given as an injection, a transdermal patch (Duragesic), an oral transmucosal lozenge (Actiq), a sublingual tablet (Abstral), a sublingual spray (Subsys), a buccal tablet (Fentora), a buccal film (Onsolis), or a nasal spray (Lazanda). The transdermal patch is used in opioid-tolerant patients with moderate to severe pain; it is often started at 25 mcg per hour and changed every 72 hours. Fentanyl can be used for multiple purposes, including premedication for surgery, general anesthesia, as an adjunct to general and regional anesthesia, and chronic pain management. The transdermal patch is indicated for around-the-clock pain management in those with chronic severe pain. Transmucosal and intranasal fentanyl are indicated for cancer pain. While no official dosage adjustment is recommended in those with renal or hepatic impairment, those with mild to moderate renal or hepatic impairment should likely have the patch dose reduced by 50 percent, and use is not recommended in severe renal or hepatic impairment. The transmucosal and nasal spray forms have no specific recommendations for dose reduction in renal or hepatic impairment.
Common side effects of fentanyl include dry mouth, edema, bradycardia, dehydration, respiratory depression, shortness of breath, diaphoresis, nausea/vomiting, constipation, application-site erythema (patch), weakness, muscle rigidity, mental status changes, headache, sedation, and CNS depression. As with most opioids, contraindications include hypersensitivity, toxin-mediated diarrheal disease, and paralytic ileus. Fentanyl should not be used for short-term pain or post-operative pain, nor in those who have severe respiratory disease. The transmucosal and nasal forms of fentanyl are typically used only by specialists for opioid-tolerant cancer patients. The patch should not be exposed to external heat, as this may increase absorption of the medication; exercising with the patch on, or a fever, can likewise increase absorption. The patch should only be applied to intact skin; it contains aluminum and must be removed prior to an MRI. Like many medications, fentanyl has multiple potential interactions. Some of the more common interactions include: Fentanyl is pregnancy category C. It does enter breast milk and is not recommended in the breastfeeding mother.

Oxycodone is a schedule II controlled substance and is available in multiple forms. It is often combined with other analgesic agents such as acetaminophen (e.g., Percocet, Roxicet, Tylox), aspirin (e.g., Percodan, Endodan, Oxycodan) and ibuprofen (Combunox). Those with a creatinine clearance of less than 60 mL/min should have the dose adjusted down, as the serum concentration of oxycodone will increase in renal insufficiency. Those with hepatic impairment should have doses reduced; with the extended-release formulation, the starting dose should be lowered by one-third to one-half and slowly titrated up to effect. Side effects include drowsiness, dizziness, itching, constipation, nausea and vomiting. Less common side effects include dry mouth, headache, abnormal dreaming, blood pressure changes, diaphoresis, weakness and fever. Oxycodone is contraindicated in those with paralytic ileus, significant respiratory depression, hypercarbia, acute or severe bronchial asthma, and GI obstruction. Caution should be used in those with biliary tract impairment such as acute pancreatitis, as it may lead to constriction of the sphincter of Oddi. It may lead to an elevation of intracranial pressure (ICP) and should be used carefully in those with intracranial lesions, elevated ICP or a head injury. Extended-release tablets may become lodged in the GI tract, including the throat, in those with swallowing issues, and may also lead to intestinal obstruction or diverticulitis. Common drug interactions with oxycodone: Oxycodone is pregnancy category B (D if used for an extended period of time or near term). It does enter breast milk and is not recommended in those who are breastfeeding.

Hydrocodone, which was classified as a Schedule II controlled substance in October of 2014, is available as a combination pill with a non-narcotic analgesic (e.g., Lorcet, Lortab, Norco and Vicodin) and by itself in an extended-release form. The combination pill contains a short-acting version of hydrocodone and is dosed at 2.5 to 10 mg of hydrocodone every 4-6 hours as needed for moderate to severe pain. Hydrocodone extended-release (Zohydro ER) is typically dosed at 10 mg every 12 hours in treatment-naïve patients.
It is used for severe pain requiring around-the-clock dosing of hydrocodone. The dose may be increased every 3-7 days in 10 mg increments. Those with severe hepatic impairment should start at the lowest dose and titrate up very slowly while monitoring for side effects. Caution should be used in renal impairment, as the plasma concentration may rise. Side effects include constipation, nausea, vomiting, dry mouth, drowsiness, headache, dizziness and pruritus. Contraindications to hydrocodone include paralytic ileus, severe asthma, severe respiratory depression and hypercarbia.

As of August 18, 2014, the DEA placed tramadol into Schedule IV of the Controlled Substances Act. It is indicated for moderate-to-severe pain; the immediate-release form is dosed at 50-100 mg every 4-6 hours to a maximum of 400 mg a day. Tramadol is also indicated for chronic moderate-to-severe pain and comes in extended-release forms (ConZip, Ultram ER). When prescribing tramadol to older adults, use the lower end of the dosage range and titrate slowly. In those over 75 years old, 300 mg a day should not be exceeded, and extreme caution should be used with the extended-release form. In those with a creatinine clearance of less than 30 mL/min, only the immediate-release formulation should be used, dosed at 25-100 mg every 12 hours (maximum dose of 200 mg a day). In those with severe liver impairment, the immediate-release form should be given to a maximum of 50 mg every 12 hours. Side effects include flushing, dizziness, constipation, nausea, vomiting, dyspepsia, itching, headache, somnolence, insomnia and weakness. Less common side effects include orthostatic hypotension, mental status changes, euphoria, rash, hot flashes, diarrhea, dry mouth, anorexia, joint pain, blurred vision and sweating. Patients may experience withdrawal symptoms from tramadol that may include nausea, diarrhea, anxiety, pain, sweating, tremor and rigors. Extended use of tramadol may lead to dependence, and the medication should be tapered slowly to reduce the risk of withdrawal symptoms. Tramadol is contraindicated in those who are hypersensitive to the agent and those with severe liver or kidney impairment. The extended-release tablet should not be used in those with acute intoxication with alcohol, hypnotics, centrally acting analgesics, opioids or psychotropic drugs, and the extended-release capsule formulation should not be used in those with severe respiratory depression, severe asthma or hypercapnia. Tramadol has been shown to increase the risk of seizures. This risk is increased in those who take serotonin reuptake inhibitors, tricyclic antidepressants, neuroleptics, other opioids, or other drugs that lower the seizure threshold. The risk may also be increased in those who have seizures or are at risk for seizures, such as those who have a CNS infection or cancer, have a history of head trauma, or are going through drug or alcohol withdrawal. Caution should be used in those with respiratory disease, as those with significant disease may be at increased risk for respiratory depression. The population-specific dosage ceilings above are summarized in the sketch that follows.
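A short sketch can summarize the tramadol immediate-release daily maximums just described. This is illustrative only and not prescribing software; the function and parameter names are assumptions, while the cutoff values are the ones stated in the text.

```python
# Illustrative sketch of the tramadol immediate-release daily maximums
# described above; not prescribing software. Function and parameter
# names are assumptions. Per the text: severe hepatic impairment caps
# dosing at 50 mg every 12 hours (100 mg/day); creatinine clearance
# below 30 mL/min caps it at 200 mg/day (IR form only); age over 75
# caps it at 300 mg/day; otherwise the adult maximum is 400 mg/day.

def tramadol_ir_max_daily_mg(age_years: int,
                             crcl_ml_min: float,
                             severe_hepatic_impairment: bool) -> int:
    if severe_hepatic_impairment:
        return 100   # 50 mg every 12 hours
    if crcl_ml_min < 30:
        return 200   # 25-100 mg every 12 hours, immediate-release only
    if age_years > 75:
        return 300
    return 400

print(tramadol_ir_max_daily_mg(52, 95.0, False))  # 400
print(tramadol_ir_max_daily_mg(80, 70.0, False))  # 300
```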
Ms. L is a 52-year-old female with a history of bilateral knee osteoarthritis; she currently rates the pain as 7/10 in her right knee and 6/10 in her left knee. She takes celecoxib 200 mg twice a day and uses 1000 mg of acetaminophen for breakthrough pain about 3 times a day. She has been stable on these medications for the past 6 months, but over the last month she has not been getting adequate relief from her pain and has become progressively more disabled. She also reports having had to stop exercising because of the pain in her knees. In addition to osteoarthritis, she has a past medical history of hypertension, dyslipidemia, depression, and obesity. She has a past surgical history of an appendectomy as a child. She is currently on atorvastatin, lisinopril, celecoxib and acetaminophen. She has no known allergies and no history of alcohol, drug or substance abuse. She has a strong family network, including a supportive husband of 25 years and two sons who live within twenty miles of her home. Her depression is not currently active. The physical exam shows significant crepitus in both knees and obesity (BMI of 34). She is unable to fully extend the right knee due to pain. An x-ray demonstrates moderate arthritic changes in both knees. The patient is unwilling to consider knee surgery. The prescriber offers tramadol immediate-release 25 mg in the morning, titrated every three days in 25 mg increments, added as separate doses, to 100 mg/day (25 mg four times a day). Pain control was still not adequate, and the dose was then increased by 25 mg per dose every three days to 50 mg every 6 hours (200 mg/day). Pain control was significantly improved, and the patient was then switched to tramadol SR 200 mg once a day, matching her total daily dose. The patient was able to function and exercise, and her quality of life was much improved.
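The stepwise arithmetic of Ms. L's titration can be laid out explicitly. The sketch below is illustrative only and reflects one reading of the case (each step raises the total daily dose by 25 mg); the function name is an assumption.

```python
# A minimal sketch of the stepwise titration arithmetic from Ms. L's
# case above: tramadol IR started at 25 mg each morning, increased in
# 25 mg increments every three days (added as separate doses) up to
# 100 mg/day, then raised by 25 mg per dose every three days to
# 50 mg every 6 hours (200 mg/day). Illustrative only; names assumed.

def titration_total_daily_doses():
    # Phase 1: one additional 25 mg dose every 3 days, up to four doses
    phase1 = [25 * n for n in range(1, 5)]        # 25, 50, 75, 100 mg/day
    # Phase 2: each of the four daily doses raised from 25 mg to 50 mg,
    # one 25 mg increment at a time
    phase2 = [100 + 25 * n for n in range(1, 5)]  # 125 ... 200 mg/day
    return phase1 + phase2

print(titration_total_daily_doses())
# [25, 50, 75, 100, 125, 150, 175, 200] -- the 200 mg/day endpoint
# matches the once-daily 200 mg sustained-release tablet she was
# ultimately switched to.
```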
Oxymorphone, a schedule II medication, can be given intravenously, subcutaneously, intramuscularly or orally. For acute pain, the immediate-release tablet (Opana) is used at 5-20 mg every 4-6 hours as needed in opioid-naïve patients. For chronic severe pain, the extended-release tablet (Opana ER) is started at 5 mg every 12 hours and may be titrated up in 5-10 mg increments every three to seven days. Oxymorphone is pregnancy category C; it is unclear whether it is excreted in breast milk, so it should be used cautiously in breastfeeding women.

Hydromorphone can be given orally, rectally, subcutaneously, intramuscularly or intravenously. Hydromorphone is pregnancy category C and is excreted in breast milk; it is not recommended for lactating women.

Methadone can be given intravenously, subcutaneously, intramuscularly or orally. Methadone carries a high risk of overdose: it has a half-life of up to five days and may accumulate in the body. Methadone may also prolong the QT interval, leading to cardiac arrhythmias, especially at doses higher than 120 mg a day. Methadone should be used only for severe pain that has not responded to other agents and only by clinicians with specific training in its use. Methadone is also used in detoxification.

Tapentadol (Nucynta, Nucynta ER) is used for acute moderate to severe pain and is also indicated for diabetic peripheral neuropathy. This medication is not recommended for those with severe liver or renal insufficiency.

Meperidine is not recommended as a first-line agent for chronic pain, as it is associated with high rates of central nervous system toxicity.

|Drug||Initial dose (treatment-naïve)||Duration of effect (in hours)||Notes|
|Morphine, immediate-release||10-30 mg every 3-4 hours as needed||3-6|
|Morphine, controlled-release (MS Contin, Oramorph SR)||15 mg two times a day||8-12|
|Morphine, sustained-release (Kadian)||30 mg one to two times a day||12-24|
|Morphine, extended-release (Avinza)||30 mg once a day||24|
|Hydromorphone, immediate-release||2-4 mg every 3-4 hours as needed||3-6|
|Hydromorphone, extended-release (Exalgo)||8 mg every 24 hours||24|
|Oxycodone, immediate-release||5-15 mg every 4-6 hours||3-6||Often combined with acetaminophen or aspirin|
|Oxycodone, controlled-release (OxyContin)||10 mg two times a day||8-12|
|Oxycodone, extended-release with acetaminophen (Xartemis XR)||15 mg oxycodone with 650 mg acetaminophen every 12 hours||8-12|
|Hydrocodone, immediate-release||5-10 mg every 6 hours||4-8||Combined with acetaminophen or ibuprofen|
|Hydrocodone, extended-release (Zohydro ER)||10 mg every 12 hours||12|
|Fentanyl patch||25 mcg per hour, changed every 72 hours||48-72 (12 hours after removal)||Not for opioid-naïve patients or acute pain; onset 12-24 hours|
|Oxymorphone, immediate-release (Opana)||5-20 mg every 4-6 hours||4-6|
|Oxymorphone, extended-release (Opana ER)||5 mg two times a day||12|
|Methadone||2.5 mg every 8-12 hours||First dose 4-8 hours; up to 48 hours with repeated doses||High risk for overdose, partly due to the long half-life; prescribed only by a trained prescriber|
|Tapentadol, immediate-release (Nucynta)||50-100 mg every 6 hours||3-6|
|Tapentadol, extended-release (Nucynta ER)||50 mg every 12 hours||Not established|
|Tramadol, immediate-release||50-100 mg every 4-6 hours||4-6||Maximum dose 400 mg/day|
|Tramadol, extended-release (Ultram ER, ConZip)||100 mg once a day||Not established||Maximum dose 300 mg/day|

Opioid Dosing in Pediatrics

Many narcotics are available in liquid form for pediatric use. Acetaminophen with hydrocodone is available as an elixir. Acetaminophen with oxycodone and oxycodone alone are also available in liquid form. The dose is based on the oxycodone component and is 0.05 to 0.15 mg/kg/dose every 4-6 hours, to a maximum of 5 mg per dose; for example, a 20 kg child dosed at 0.1 mg/kg would receive 2 mg per dose. Morphine is available as an immediate-release formulation and is dosed at 0.2 to 0.5 mg/kg every 4-6 hours, to a maximum of 30 mg per dose. Hydromorphone is dosed at 0.05 mg/kg. A sketch of this weight-based arithmetic follows.
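The weight-based calculation above is simple arithmetic with a per-dose cap. The sketch is illustrative only and not dosing software; the function and parameter names are assumptions, while the numbers are the ones the text gives for oxycodone.

```python
# Illustrative weight-based dosing arithmetic for the pediatric
# oxycodone range described above (0.05-0.15 mg/kg/dose every 4-6
# hours, maximum 5 mg per dose). Not prescribing software; names
# are assumptions.

def pediatric_oxycodone_dose_mg(weight_kg: float,
                                mg_per_kg: float = 0.1,
                                max_dose_mg: float = 5.0) -> float:
    if not 0.05 <= mg_per_kg <= 0.15:
        raise ValueError("mg_per_kg outside the stated 0.05-0.15 range")
    # Weight-based dose, capped at the stated per-dose maximum
    return min(weight_kg * mg_per_kg, max_dose_mg)

# Example: a 20 kg child at 0.1 mg/kg receives 2 mg per dose;
# a 60 kg adolescent is capped at the 5 mg per-dose maximum.
print(pediatric_oxycodone_dose_mg(20))   # 2.0
print(pediatric_oxycodone_dose_mg(60))   # 5.0
```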
The Controlled Substances Act divides drugs and other substances into five schedules, which are updated annually at http://www.deadiversion.usdoj.gov/21cfr/cifr/2108cfrt.htm. Schedule I controlled substances have no accepted medical use in the United States; this schedule includes heroin and lysergic acid diethylamide (LSD). Schedule II and IIN substances may potentially be abused and may lead to severe physical or psychological dependence. Schedule II narcotics include oxycodone (OxyContin, Percocet), hydrocodone (Vicodin, Zohydro ER), fentanyl (Sublimaze, Duragesic), methadone (Dolophine), hydromorphone (Dilaudid), morphine, opium, and codeine. Schedule III or IIIN substances have less abuse potential than Schedule I or II substances; they carry a high risk of psychological dependence and a low to moderate risk of physical dependence. Examples of medications in this class include buprenorphine (Suboxone) and products that have less than 90 milligrams of codeine per dosage unit, such as Tylenol with codeine. Schedule IIIN substances include anabolic steroids such as Depo-Testosterone, and ketamine. Schedule IV controlled substances have a lower potential for abuse than Schedule III controlled substances; examples include the benzodiazepines, midazolam (Versed), tramadol (Ultram) and carisoprodol (Soma). Schedule V controlled substances have a low abuse potential relative to Schedule IV and include cough preparations that contain less than 200 milligrams of codeine per 100 milliliters or per 100 grams, such as Robitussin AC. Most controlled substances alter mood, feeling or thought through their effect on the central nervous system. Medications likely to produce euphoria are more likely to be abused, but medications may also be abused to aid sleep, reduce pain, reduce anxiety, reduce depression or improve energy.

When opioid therapy is prescribed, it is important to have a record of the discussion between the patient and provider. The documentation must include the diagnosis being treated and the medication that will be used to manage it. In addition, the goals of therapy and the anticipated results should be documented, and any alternative or additional therapies should be discussed. When discussing the medications, it is important to document significant adverse reactions, the risk of addiction or withdrawal, and medication interactions. To prevent prescription drug abuse, the clinician needs to assure: Patients' risk should be assessed and contraindications should be identified immediately. Contraindications to opioid treatment include erratic follow-up, current untreated addiction and poorly controlled mental illness.22 Patients should not be prescribed an opioid medication alone for pain control; a non-opioid analgesic should also be included, particularly acetaminophen, which functions centrally as an opioid-sparing medication.2

When taking a patient history, document the opioid currently prescribed, its dose, the frequency of use and the duration of use. It is important to query the State Prescription Drug Monitoring Program (PDMP) to confirm the patient’s report of prescription use, and to contact past providers to obtain medical records. Before controlled substances are prescribed, the following should be assessed: history of illegal substance use, alcohol use, tobacco use, prescription drug use, family history of substance abuse and psychiatric disorders, history of sexual abuse, legal history, behavioral problems, employment history, marital history, social network and cultural background. A history of substance abuse does not prohibit treatment with opioids but may necessitate more intensive monitoring or referral to an addiction specialist.

Multiple tools are available to evaluate opioid risk. The Opioid Risk Tool (ORT) is used in primary care to screen adults for the risk of aberrant behaviors when they are prescribed opioids for chronic pain. It is a copyrighted tool, encompasses five questions and takes about one minute to use. It classifies a patient as at low, moderate or high risk of abusing opioids; those at high risk have a high likelihood of aberrant drug-related behavior. It is not validated in individuals without pain. The five questions ask about family and personal history of substance abuse (alcohol, prescription drugs or illegal drugs), age (the at-risk range is 16-45 years old), psychological disease and a history of preadolescent sexual abuse. Each question is assigned a point value, which differs between men and women, and the total score places the patient into the low-, moderate- or high-risk category, as sketched below.
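The scoring logic can be sketched as follows. The point values shown are those commonly reproduced in the literature for the published (copyrighted) instrument and should be verified against the original tool before any clinical use; the function and field names are assumptions, and the 0-3/4-7/8-plus cutoffs are the commonly cited risk categories.

```python
# Illustrative sketch of how the Opioid Risk Tool tallies a score.
# Point values below are those commonly reproduced from the published
# instrument and differ for men and women; verify against the
# copyrighted tool before clinical use. Names are assumptions.

ORT_POINTS = {
    # item: (female points, male points)
    "family_history_alcohol":         (1, 3),
    "family_history_illegal_drugs":   (2, 3),
    "family_history_rx_drugs":        (4, 4),
    "personal_history_alcohol":       (3, 3),
    "personal_history_illegal_drugs": (4, 4),
    "personal_history_rx_drugs":      (5, 5),
    "age_16_to_45":                   (1, 1),
    "preadolescent_sexual_abuse":     (3, 0),
    "psych_adhd_ocd_bipolar_schizo":  (2, 2),
    "psych_depression":               (1, 1),
}

def ort_risk(positive_items, male: bool) -> str:
    score = sum(ORT_POINTS[i][1 if male else 0] for i in positive_items)
    if score <= 3:
        return "low"
    if score <= 7:
        return "moderate"
    return "high"   # high likelihood of aberrant drug-related behavior

# Example: a 40-year-old man with a personal history of alcohol abuse
# scores 3 + 1 = 4, placing him in the moderate-risk category.
print(ort_risk({"personal_history_alcohol", "age_16_to_45"}, male=True))
```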
Regular follow-up is important and should occur at a minimum of every three months. When assessing the pain patient, the five A’s should be evaluated: analgesia, addiction, activities of daily living, adherence and adverse effects. Part of follow-up should be urine drug testing, which can be used to detect medication adherence as well as illicit and non-prescription drug use. It is critical that the clinician adequately document all interactions with patients, assessments, results of testing and treatment plans. Written treatment agreements, which should be used between prescribers and patients when controlled substances are prescribed, help guide the conversation between patient and prescriber. Such an agreement lays out expectations, the risks and the monitoring that will occur to limit the complications of controlled substances (Table 4). Prescription monitoring programs are available in the majority of states, including Oregon. They provide an online database that lists all prescriptions of controlled substances dispensed to each patient by pharmacies. Ideally, the prescriber should check the database before prescribing controlled substances; an undisclosed prescription for controlled substances constitutes prescription drug misuse.

When abuse or misuse is detected, how should the clinician respond? If it is a single, minor deviation, then counseling along with more intensive monitoring may be all that is needed. In more severe or persistent cases of misuse, tapering the controlled substance to reduce the risk of withdrawal is appropriate. When diversion is the cause of the misuse, immediate discontinuation of the prescription is likely the best course. If a substance abuse disorder is suspected, referral to an addiction specialist is recommended.

Marijuana use is still a controversial issue, and marijuana is regarded by many as an addictive and dangerous substance leading to serious illegal drug use. But all indications point to marijuana being a safe substance when used in a medically controlled way. Unlike opioids, it does not cause respiratory depression, and studies have shown that it can reduce chronic pain by greater than 30%, which is comparable to the results achieved with opioids. It appears to be most effective for neuropathic pain, pain related to multiple sclerosis, and fibromyalgia. The most common side effect noted with the use of medical marijuana is dizziness.5 The two active constituents of marijuana are tetrahydrocannabinol (THC) and cannabidiol (CBD). Studies indicate that CBD provides good results in controlling pain, especially neuropathic pain. Research proposes that marijuana affects pain through its interaction with pain receptors located in the frontal region of the brain and the limbic area.3

One of the terrible fallacies that persisted until the recent past, including through the 1980s, was that pediatric patients did not experience pain. The hypothesis was that babies, in particular premature babies, had nervous systems too immature to allow them to feel pain.5 The reality, however, is that neonates feel pain like any other patient.
Untreated or inappropriately treated pain may lead to long-term effects, including altered sensitivity to pain.23 It is important to have a standard method to assess pain in neonates. Pain is difficult to assess in neonates and infants because they have a limited ability to communicate, so assessment in this population is based on physiological and behavioral factors. Factors that suggest pain in the neonate and infant include changes in vital signs, oxygen saturation, skin color, crying pattern, facial expression, muscle tone and consolability. Scales used to assess pain in the neonatal intensive care unit include the Neonatal Facial Coding System, the Neonatal Infant Pain Scale and the Neonatal Pain Agitation and Sedation Scale. No tools are universally accepted for assessing pain in infants and children, and neonatal pain assessment tools have difficulty detecting pain in those with a very low birth weight, those on paralytic medications and those with prolonged pain.23 Given the difficulty of identifying and quantifying pain in the neonate and young child, pain management should include an attempt to reduce or prevent pain in potentially painful situations, and it is important to limit the number of painful procedures performed on young children.

Pain in children is similar to that in adults. The source of the pain, along with its location and severity, should be ascertained. In older children, self-reporting is a reasonable technique to assess pain. For those too young to understand self-reporting, scales such as the facial expression scale can be used. With the help of a caregiver, observing the child for verbal responses, motor responses or facial expressions will help the clinician determine the degree of pain in a non-verbal child. Pain management in children should work to control, lower or prevent the pain. Pain management techniques are based on the severity, type, duration and source of the pain. Non-pharmacological measures to control pain include physical/occupational therapy and cognitive/behavioral therapy. Pharmacological agents may also be considered. When prescribing pharmacological agents for neonates and infants, extra thought must be given to the immature body systems involved and to the increased risk of drug-induced toxicity due to decreased rates of hepatic metabolism and renal excretion.2 Mild pain can be managed with NSAIDs or acetaminophen. When pain is not responsive to these medications, stronger medications including opioids are considered. Regular assessment of pain control during treatment will help assure proper pain management. When pain is moderate to severe, providing pain medication around the clock is a reasonable option. Adjunctive therapy can be used in children, including medications to manage co-morbid depression and anxiety; anticonvulsants for neuropathic pain may also be considered.

Age does not cause pain, but many conditions that cause pain are more common in older adults. The National Health and Aging Trends Study conducted in 2011 showed that the prevalence of troublesome pain among those 65 years and older was 52.9%, and almost 75% of older adults with pain identified several pain sites. Among the most prevalent pain sites in older adults are the joints, typically the hips and knees, as well as the back. Osteoporosis is also a common cause of both acute and chronic pain in the older population and is associated with both spinal and hip fractures.
Falls are a major concern, and it is estimated that one in three adults over the age of 65 sustains a fall annually.2 Those with reduced vision, reduced hearing or impairments in cognition present a greater challenge in the assessment of pain. In individuals who are cognitively intact, self-report of pain is the most reliable method of assessment.24 For those who have cognitive impairment, simple questions and basic screening tools can often reliably identify pain. Long-term care residents are often afflicted with some degree of cognitive impairment, and residents of long-term care facilities may present with behavioral or physical changes as the presentation of pain.

Older adults may not report pain as readily as younger adults. Some older adults believe that pain is part of aging and therefore do not bother to discuss it with the health care team. When assessing the older adult, it is important to determine their perception of pain: some patients perceive severe pain as a sign of a serious illness or a loss of independence, or they may believe it is simply a consequence of aging. When evaluating the older adult, it is important to obtain an accurate medication history, including herbal medications and dietary supplements. Patients should also be asked about alcohol, drug and tobacco use. It is also important to determine the patient’s coping techniques, as this helps the nurse understand how the patient functions and how to help them deal with the pain most effectively. Many older adults use prayer and hope to assist in coping with pain. Goals should be set with the patient to determine an acceptable level of pain that allows a satisfactory quality of life. Closely monitoring for adverse drug reactions is an important part of managing chronic pain, as many of these medications have significant side effects; a balance should be sought between quality of life and the side effects and risks of treatment. Older adults also have physiological changes that affect the way medications are handled: slowing of gastrointestinal transit time may extend the effects of continuous-release medications; changes in gastric pH may affect absorption of some medications; chronic liver changes may alter drug metabolism; and chronic renal insufficiency, which is common in older adults, may reduce the clearance of medications.

Pain is an often ignored symptom of Parkinson’s disease; studies show that pain is present in around 40% of individuals with Parkinson’s disease and that it can be the first presenting symptom of the disease. Pain experienced with Parkinson’s disease may be central pain or musculoskeletal pain due to poor posture.3 The cognitive decline that occurs with Alzheimer’s disease may mask pain, and in advanced stages of the disease the patient is unable to express his or her pain experience. Even though pain can be difficult to assess in dementia patients, it is estimated that at least 50% of those affected with dementia routinely experience pain. The most common types of pain in dementia are related to the musculoskeletal system, with osteoarthritis being a frequent finding in this population. Research has indicated that patients with Alzheimer’s disease experience the same pain sensitivity as those without the condition.
Research has also shown that pain control for Alzheimer’s patients is insufficient.3 Since pain is a subjective experience, we measure its existence and intensity by the patient’s self-report. Unfortunately, adult patients who have cognitive or expressive deficits or who are intubated, sedated and/or unconscious may not be able to provide a self-report. Individuals who cannot communicate their pain remain a challenge and are at even greater risk for inadequate pain control. When patients cannot self-report, other measures need to be used to detect pain; even if they cannot speak for themselves, these patients have the right to pain assessment and management. Valid and reliable methods to assess pain in nonverbal patients are clearly needed. The American Society for Pain Management recommends the following multifaceted approach for detecting pain in this population.25

Pregnancy is associated with many changes that have the potential to cause pain, such as a changing body shape, increasing weight, hormonal shifts and joint laxity. Acetaminophen is thought to be a safe option for pain control throughout pregnancy. NSAIDs should not be used in late pregnancy, as they have the potential to cause premature closure of the ductus arteriosus if used in the third trimester. Many different pain syndromes are commonly seen in pregnancy. Mechanical back pain due to changes in weight distribution is one of the most common. Pain in the pubic symphysis is also common and can be managed with position changes and pelvic support devices. Leg cramps may be prevented and treated with calf stretching. Carpal tunnel syndrome is often seen during pregnancy and is likely related to fluid retention causing compression of the nerves in the carpal tunnel; symptoms most commonly come on in the third trimester and resolve after pregnancy, but may be prolonged by breastfeeding. Labor is painful, and its treatment may involve a variety of techniques. The most reliable method of managing labor pain is epidural and spinal analgesia. Opioids induce sedation and thereby contribute to pain control, but they act systemically, and some effect may be transferred to the fetus, leading to respiratory depression in the neonate.

Rates of psychiatric disorders are up to three times higher in those with chronic pain than in the general population. Depression, anxiety and post-traumatic stress disorder are the most prevalent psychiatric disorders in patients with chronic pain.26 The patient with pain and psychiatric disease typically reports more intense pain than the patient without co-morbid mental illness. Chronic pain management presents multiple challenges in psychiatric patients. Optimizing treatment of the underlying psychiatric illness is an important step toward achieving optimal pain reduction. It is also important to screen for and treat any substance abuse or substance-induced disorder, as this helps assure appropriate and adequate treatment of pain. Medications with abuse potential should be used cautiously, as there is a high prevalence of drug use disorders in psychiatric patients. Exercise and cognitive behavioral therapy are important steps in the management of pain in the psychiatric patient, as is monitoring for compliance.

Many conditions lead to visceral pain.
Visceral pain occurs when there is stimulation of the nociceptors of the organs in the abdomen, pelvis or chest. Visceral pain is diffuse, hard to pinpoint and often referred to a remote structure. Visceral structures are aggravated by ischemia, inflammation and stretch.

Chest pain can arise from many different etiologies. A few life-threatening causes must be considered, including myocardial infarction, pulmonary embolism, aortic dissection, tension pneumothorax and esophageal rupture. The majority of chest pain is not life-threatening; selected causes include chest wall pain (costochondritis, muscle strain), panic attacks, pneumonia, pleurisy, myocarditis, gastroesophageal reflux disease and pericarditis.

Abdominal pain is a common problem, and most cases are not life-threatening. As with chest pain, it is important to rule out serious causes of abdominal pain immediately. Serious causes are suggested by unstable vital signs, high fever, an inability to pass gas or have a bowel movement, vomiting blood, or dark/tarry stools. Potentially life-threatening diagnoses include acute bowel obstruction, acute mesenteric ischemia, bowel perforation, ulcer, acute myocardial infarction and ectopic pregnancy. Other causes of abdominal pain include appendicitis, gallbladder disease, diverticulitis, constipation, kidney stones, lactose intolerance and inflammatory bowel disease. Many patients have chronic abdominal pain, and many of these cases are benign, such as functional dyspepsia or irritable bowel syndrome. If no organic disease is found, the patient should be treated symptomatically. Individuals over the age of 50 are more likely to have a serious cause of chronic abdominal pain, and functional abdominal pain should be considered only after more serious causes have been ruled out.

Pelvic pain is a common problem in women and may represent a urologic, gynecologic, gastrointestinal, musculoskeletal, metabolic or vascular issue. Acute pelvic pain may be of visceral or somatic origin. In all women who could be pregnant, a pregnancy test should be done. Other testing to rule out causes of pelvic pain includes a complete blood count, sedimentation rate, chlamydia/gonorrhea testing, a serum hCG level and a urinalysis. Diagnostic testing may include a pelvic ultrasound to rule out a mass or ectopic pregnancy, and laparoscopy can help determine whether endometriosis is present. Features that suggest a serious cause of pelvic pain include peritoneal signs, brisk vaginal bleeding, high fever and unstable vital signs. There are many potential causes of chronic pelvic pain, and diagnosing and treating it can be challenging. Determining the exact cause may require extensive laboratory evaluation, imaging and, at times, exploratory surgery. For those with chronic pelvic pain, the examination may use a pain map to identify tender areas and to see whether areas tender on physical exam match the patient's pain map. Ideally, the clinician should treat the underlying cause of the pelvic pain, but non-specific treatment may be considered when there is no specific diagnosis.

Sickle cell crisis is a vaso-occlusive phenomenon leading to pain and associated with red blood cell destruction and subsequent anemia. While not the only feature of sickle cell disease, pain is a major component of the condition.
Acute pain in sickle cell disease is secondary to vaso-occlusion and the consequent tissue ischemia and inflammation; over time, chronic pain may result. Assessment of pain is challenging in sickle cell crisis, as there are no objective findings that definitively confirm a crisis or the degree of pain. An acute painful episode can be precipitated by multiple events, such as stress, infection, weather conditions, dehydration or alcohol consumption. Pain can affect many parts of the body, such as the chest, back, extremities or abdomen, and is often accompanied by fever, an elevated breathing rate, hypertension, nausea and vomiting. Pain in sickle cell disease can be challenging to manage. If mild pain is present and the patient is not on chronic opioid therapy, pain management should start with non-opioid therapy, moving to opioids when the pain becomes more severe. Individuals who are on chronic opioids will require additional opioids for breakthrough pain. In the emergency room, intravenous morphine, hydromorphone or fentanyl can be used; if the pain cannot be relieved with two doses, admitting the patient to the hospital for pain management may be necessary. Many patients with sickle cell disease have chronic pain that is managed with long-acting opioids.27

Headaches are a frequent cause of recurrent pain and one of the most common diagnoses seen in health care. There are multiple types of headaches, including migraine, tension and cluster headaches; tension headache is the most common. It is important for the healthcare provider to recognize red flags that suggest a serious cause of headache. When a serious cause is suspected, urgent evaluation is necessary and may include brain imaging to rule out an underlying secondary cause. Signs and symptoms that suggest a more serious cause of headache include:

Tension headaches may occur every day and have a variable presentation. Typically, they are described as pressure, tightness or aching. They may feel like a band around the head and may be bifrontal, bitemporal or generalized. Tension headaches can be intermittent with a variable duration, or constant. Migraine headaches are classically one-sided (though they may generalize) and are pounding or throbbing. Patients with migraines often have co-existing nausea/vomiting and/or photophobia. An acute migraine can be managed with multiple agents. Acetaminophen or NSAIDs may be considered first. When simple analgesics are not effective, migraine-specific agents (triptans or dihydroergotamine) may be used; these are available in oral, rectal and injectable formulations. Oral agents are preferred by many patients, but for those with severe nausea accompanying a migraine, a non-oral route is the best option. First-line prophylactic agents for migraines include propranolol, amitriptyline, topiramate and valproic acid. Pain medication for headaches should be taken in such a way that the pain fully resolves; many times patients take less than the required amount, or they do not take medication until the pain has reached an intolerable level, which can result in prolonged periods of poorly controlled pain.3

Many conditions lead to neuropathic pain, including multiple sclerosis, post-stroke pain, spinal cord injury, traumatic brain injury, syringomyelia, trigeminal neuralgia, peripheral neuropathy and post-herpetic neuralgia.
Chronic neuropathic pain can be difficult to treat and results in a great deal of patient suffering.3 Multiple sclerosis (MS) is commonly associated with pain; it is estimated that 43 percent of MS patients have at least one painful symptom.28 Common painful symptoms include dysesthetic pain, back pain, spasms, the Lhermitte sign, visceral pain and trigeminal neuralgia.

Central post-stroke pain is experienced as unilateral head/facial pain that starts within six months of a stroke and affects up to 8 percent of stroke victims.29 The pain is typically persistent but may come and go; its severity is variable, and stress often exacerbates it. Treatment of central post-stroke pain includes benzodiazepines; anticonvulsants such as gabapentin, pregabalin, lamotrigine or carbamazepine; baclofen; antidepressants such as amitriptyline or an SSRI; and clonidine. When the pain is resistant to pharmacotherapy, neuromodulation (deep brain stimulation) and surgery may be considered.

Patients often develop chronic pain after spinal cord injury (SCI) that affects their quality of life. The pain is often poorly localized and neuropathic in nature (e.g., burning, stabbing) and can be evoked or spontaneous. Pain can occur at the level of the SCI, caused by injury to the nerve roots and dorsal gray matter, or below the level of the SCI, which is thought to be caused by injury to the spinothalamic tracts and/or thalamic deafferentation. The pain may be managed with antidepressants (e.g., tricyclic antidepressants), antiepileptics (e.g., gabapentin, lamotrigine or valproate) and standard analgesic medications (opiates). When medications are not effective, invasive treatments such as deep brain stimulation, cordotomy or motor cortex stimulation may be considered. Syringomyelia is a delayed, progressive intramedullary cystic degeneration that affects a small number of patients after spinal cord injury. It is thought to occur from scarring and subsequent obstruction of cerebrospinal fluid flow, with altered tissue compliance leading to extension of the central canal, which presses on the nearby cord tissue.30

Trigeminal neuralgia produces head/facial pain arising from one or more branches of the trigeminal nerve. Classically, the pain is unilateral, brief, stabbing and/or lancinating, and sudden in onset. Imaging, typically with an MRI, is sometimes done to distinguish primary from secondary trigeminal neuralgia. Primary disease has no identifiable lesion causing the symptoms; secondary causes include acoustic neuromas, multiple sclerosis, cerebral aneurysms and trigeminal neuromas. Secondary disease is more likely if there is bilateral involvement, onset at a younger age, or associated sensory loss. Conditions that may mimic trigeminal neuralgia include dental pain, multiple sclerosis, herpes zoster and atypical headaches. Treatment of the pain of trigeminal neuralgia includes carbamazepine and oxcarbazepine; for those who are intolerant of or non-responsive to these agents, baclofen or lamotrigine can be used. Surgical options are sometimes tried for refractory cases.

Peripheral neuropathy can have many etiologies, including diabetes, cancer, alcohol and HIV. It typically presents with distal sensory loss, weakness, numbness and/or burning, though the presentation may be variable.
Diabetic neuropathy is one of the more common types of neuropathy. It typically causes symptoms that begin in the lower extremities, with sensory symptoms appearing first, followed by motor symptoms. Patients complain of gradual sensory loss, numbness, a burning sensation and pain in the feet, and mild gait abnormalities. Over time, weakness may develop, and a "stocking and glove" distribution of sensory loss may occur. Physical exam findings depend on which nerve fibers are involved. Treatment of neuropathies includes treating the underlying disease (e.g., controlling blood sugar in diabetes) and medications to treat the symptoms. Medications used to manage the pain of neuropathy include tricyclic antidepressants, duloxetine, gabapentin, pregabalin, carbamazepine, topiramate, tramadol and NSAIDs.

Post-herpetic neuralgia is pain that persists after a herpes zoster infection, which is caused by the varicella zoster virus. Certain groups are at higher risk of developing pain after a herpes zoster infection: older individuals, those who had higher levels of pain during the acute infection, and those with a more severe rash.31 Herpes zoster starts with a sharp, burning, stabbing pain that follows a dermatome; a rash appears a few days later along the same dermatome. Commonly affected dermatomes include the thoracic, cervical and trigeminal nerves. After the rash abates, some individuals develop pain along the same dermatome that persists longer than four months; the pain may persist for years or even throughout life. Allodynia is often seen in those with post-herpetic neuralgia. Post-herpetic neuralgia is commonly treated with tricyclic antidepressants, pregabalin and gabapentin. Topical capsaicin or lidocaine can be used. Opioids are sometimes used but should be used cautiously; they are considered second- or third-line options and are sometimes used while the TCAs, pregabalin or gabapentin take effect, and are then tapered. If all other options are ineffective, intrathecal glucocorticoids may be considered.

Chronic low back pain is the fifth most frequent reason for a physician visit.5 Most cases of back pain are non-specific and will improve within a few weeks with conservative treatment, but some people develop chronic pain. Those more likely to develop chronic back pain include those with functional impairment, poor health, psychiatric co-morbid conditions, maladaptive pain coping behaviors and non-organic signs, such as pain in the low back when pressing directly on top of the head.32 Non-invasive treatments are recommended as the safest and most appropriate approach to the resolution of back pain.5 Less than one percent of patients with back pain have a serious cause, and less than ten percent have specific etiologies.32 When back pain is present, it is important to rule out serious pathologies, which are suggested by certain red flags (see Table 5). A complete history and physical exam are important to rule out serious causes of back pain and to help identify its cause. Conditions that are more urgent, including any with red flags, require immediate MRI imaging and referral. Those who have not improved after 4-6 weeks of conservative therapy may be considered for imaging, and patients with conditions that may benefit from surgery or epidural injections should have imaging.
Other conditions in which imaging is helpful include osteoarthritis and ankylosing spondylitis. Back pain should not be treated with bed rest; modifying activity slightly to account for the pain is more appropriate. Oral analgesics should be used short-term to provide pain control. Re-evaluation should occur at four weeks to assure improvement, evaluate the need for any testing and reassess the need for pain medications. Initial oral therapy should be an NSAID for 2-4 weeks; those with an allergy or contraindication to NSAIDs may consider acetaminophen. When pain is not controlled with an NSAID, a muscle relaxant may be considered; for those who cannot take a muscle relaxant, the combination of an NSAID and acetaminophen is an option. Opioids and tramadol should be used very judiciously in acute low back pain, and only in those who are not getting pain control from other agents or who have contraindications to those agents. Physical therapy can be used for acute low back pain but is more often used for chronic low back pain. One of the most important aspects of managing back pain is education: patients should be educated on the causes of back pain, its expected course and encouraging prognosis, the value of diagnostic testing, treatment options and when to contact their healthcare provider.

Chris is a 44-year-old secretary who presents to her primary care provider with back pain of three weeks' duration. The pain started after she lifted a heavy box at work. Her self-management regimen has included bed rest and taking acetaminophen alternating with ibuprofen for the last three weeks. She reports that the pain is not getting any better. The pain is described as aching and diffuse along her lower back. It is worse with walking and prolonged standing or sitting and is relieved by lying down. The pain radiates into her right buttock, but not down the leg. Chris is generally healthy; the only medication she takes on a regular basis is sertraline for depression. She has never had any surgeries and has no allergies to medications. On physical exam, her vital signs are stable, and she appears comfortable. She walks with a slight limp. The exam shows diffuse tenderness across her lumbar spine. There is no deformity, the straight leg raise is normal, sensation is intact in the lower extremities, and the remainder of her exam shows no focal neurological findings. Recent labs demonstrated a normal blood count and normal liver and renal function. Her primary care physician recommends that she go to physical therapy, prescribes diclofenac 50 mg three times a day for three weeks and encourages her to use acetaminophen for breakthrough pain. At the three-week follow-up, Chris is doing better. Her primary care physician recommends continuing the home exercises recommended by the physical therapist and using acetaminophen as needed for pain; the NSAID is discontinued, as there is limited, if any, inflammation contributing to the pain.

This case is typical of back pain; it essentially resolved within six weeks. The pain was caused by an acute injury, with muscle spasm causing referral of pain into the buttock. Radicular pain was not present; radicular pain would occur if there were inflammation, compression or injury of a spinal nerve root. Imaging was not indicated in this case because there were no red flags.
Typically, this type of back pain responds to simple analgesics; opioids are not necessary. Acetaminophen is preferred for analgesia because of its relatively safe profile. An NSAID may be needed for its anti-inflammatory effect, and short-term use of a muscle relaxant may help with the muscle spasm that often contributes to this type of acute pain. Tramadol is often used in cases of mild acute pain, but due to its abuse potential it should be relegated to a second- or third-line option. In this case, Chris is also on sertraline, and there is a potential interaction between tramadol and sertraline (an increased risk of serotonin syndrome and of seizures). The goal in this case is to minimize disability and return Chris to her baseline function as soon as possible. Relative rest at first may be appropriate, but prolonged bed rest will contribute to deconditioning and stiffness and will prolong recovery. The patient with acute low back pain should be given exercises to strengthen the low back, abdominals and other core muscles, as well as stretches for the low back and legs. Appropriate care of back pain returns patients to normal functioning quickly while minimizing the risks of more dangerous treatment options.

Neck pain can occur from multiple pathologies, including trauma, muscle strain and disc pain. The majority of cases of neck pain will resolve within three weeks. Initial treatments are conservative, including oral analgesics (acetaminophen or NSAIDs for mild or moderate pain; short-term opioids for severe pain), posture modification and exercise. Chronic neck pain has multiple treatment options. Long-term use of a cervical collar is not recommended; a cervical collar may be considered to manage severe pain for less than three hours a day for a maximum of two weeks. Physical therapy and home exercises should be used. Pharmacological options for chronic neck pain include acetaminophen; NSAIDs; a low-dose antidepressant, especially in those whose pain interrupts sleep; a muscle relaxant for those with muscle spasm; and, rarely, opioids. Other options for pain management include trigger point injections, cervical medial branch blocks, TENS units and radiofrequency neurotomy. Surgical evaluation may be considered in those with myelopathy or neurological symptoms associated with radiculopathy.

Complex regional pain syndrome (CRPS) is divided into types I and II. It is a disorder of the extremities characterized by regional pain that is out of proportion, in degree or duration, to the expected pain, localized to a particular territory. The primary clinical manifestation is pain that is typically described as stinging, burning or tearing and is exacerbated by movement, temperature variation, stress or any contact. In addition, some individuals have allodynia or hyperalgesia. The patient may also notice differences in skin color or temperature, and the affected side may be more edematous or sweat more than the other side. Limb movement is typically impaired by pain, edema or contractures. The patient with CRPS may also have unilateral variations in hair or nail growth, along with skin atrophy. The progression of the condition is variable over time. The underlying pathology is not well understood but may include inflammation and changes in pain perception in the central nervous system. CRPS I is the more common type and is diagnosed when the typical symptoms are present and there is no evidence of a peripheral nerve injury.
CRPS II is less common and is present when there is evidence of a peripheral nerve injury. CRPS is more commonly seen in women. It is often associated with an acute event that starts the syndrome, such as trauma from a broken bone or a crush injury. The diagnosis is made on clinical exam after other conditions are ruled out. Treatment of CRPS should involve a multidisciplinary approach, including physical and occupational therapy, psychological interventions and pharmacotherapy. Pharmacologic options include NSAIDs, tricyclic antidepressants, gabapentin and topical treatments (lidocaine or capsaicin). Less common options include calcitonin, glucocorticoids, alpha-adrenergic agonists/antagonists (e.g., prazosin, clonidine), ketamine and opioids. Multiple interventional approaches may be considered, including regional sympathetic nerve blocks, trigger/tender point injections and spinal cord stimulation.

Phantom limb pain is aching, burning or shock-like pain where an amputated limb used to be, and it is related to the perception of self. Sensory input from various parts of the body forms a ‘body map’ in the brain; when a limb is amputated, that map still exists in the brain, leading to the feeling of pain and other sensations in the amputated body part.3 It is important to rule out other causes of the symptoms, such as infection or a wound on the stump, ischemia or neuroma, before diagnosing phantom limb pain. The incidence of this condition is variable, and it is hypothesized that failing to control pain before and after the surgery increases the risk of phantom limb pain. It has been shown that stopping pain transmission in the spinal cord for 72 hours prior to the amputation surgery using lumbar epidural blockade (LEB) has a positive long-term effect in preventing amputation pain; aggressively managing pain immediately after surgery also reduces the risk of developing phantom limb pain.3 Multiple agents are helpful in the management of phantom limb pain, including acetaminophen, NSAIDs, TCAs and gabapentin. In addition to medication, non-pharmacologic methods include TENS units, mirror therapy (which helps resolve the visual-proprioceptive disconnect), biofeedback and, occasionally, surgical interventions.

Pain is very prevalent in cancer. It is present in up to one-half of patients when first diagnosed with cancer and, according to some estimates, in up to 100 percent of people with advanced cancer.33 Fear of uncontrolled pain is one of the most prevalent findings among cancer patients. Research shows that around 70% of cancer patients have severe pain at some point during the disease process.2 Pain in cancer can be acute or chronic. Acute pain is seen during interventions such as surgery, tissue injury or radiation therapy. Acute pain can also occur secondary to the cancer itself, such as with an obstructed bowel, a perforated bile duct, bleeding from a liver cancer or a pathological fracture. Chronic pain in cancer is typically related to the tumor itself or is a complication of treatment. Neuropathic pain is also seen in cancer patients: it can arise from the tumor pressing on a nerve or nerve plexus, or result from treatment, as many chemotherapeutic agents and radiation therapy have the potential to cause nerve injury. Many conditions that result in neuropathic pain, such as herpes zoster and post-herpetic neuralgia, are relatively common in cancer patients.
‘Breakthrough’ pain is a debilitating and often difficult-to-control occurrence for cancer patients. It can be defined as acute, severe pain that has a rapid and unpredictable onset.2 It requires analgesia with a rapid onset and necessitates re-evaluation of the patient’s pain management regimen. Management of cancer pain is typically aggressive. The use of opioids is common in chronic cancer pain, and doses should be titrated to find effective pain control. Agents commonly used include hydromorphone, morphine and oxycodone, preferably given orally or transdermally. Dosing is commonly started with short-acting agents, but for those with chronic pain, switching to a long-acting formulation is preferred, with continued use of short-acting agents for breakthrough pain. The dose for breakthrough pain is typically about 10 percent of the basal daily opioid dose (see the sketch below). Individuals who need rapid titration do well with opioids given by intravenous or subcutaneous infusion. While morphine is traditionally the most common agent used, other agents work well in certain situations: for those with swallowing difficulty or a poor ability to absorb from the GI tract, fentanyl can be used, and hydromorphone or fentanyl is recommended for those with renal insufficiency.
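The breakthrough-dose arithmetic is straightforward, as the sketch below illustrates. It is not dosing software; the function name and example numbers are assumptions consistent with the 10 percent rule stated above.

```python
# A minimal sketch of the breakthrough-dose arithmetic described
# above: the breakthrough dose is typically about 10 percent of the
# basal (total) daily opioid dose. Illustrative only; names assumed.

def breakthrough_dose_mg(basal_daily_mg: float,
                         fraction: float = 0.10) -> float:
    return basal_daily_mg * fraction

# Example: a patient on 60 mg of extended-release oral morphine twice
# a day has a basal dose of 120 mg/day, so each breakthrough dose
# would be roughly 12 mg of oral morphine.
print(breakthrough_dose_mg(120))  # 12.0
```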
It is the most common cause of generalized musculoskeletal pain in females aged 20 to 55.34 Patients with FMS often complain of hurting all over or feeling as though they have the flu. It is diagnosed in those with chronic pain and no sign of muscle inflammation. The differential diagnosis of FMS includes osteoarthritis, autoimmune disease, rheumatoid arthritis, systemic lupus erythematosus, hypothyroidism, inflammatory myopathy, systemic inflammatory arthropathies, spondyloarthritis, ankylosing spondylitis, myositis and polymyalgia rheumatica.

Education is a critical step in the treatment of FMS: the condition, and the planned treatment approaches, must be explained to the patient. Medications are often used in the management of FMS. Typically, non-pharmacological methods are used first, and when they are not effective the addition of medication is considered. Commonly used medications include low-dose tricyclic antidepressants, selective serotonin reuptake inhibitors, pregabalin, duloxetine, cyclobenzaprine and milnacipran. When utilizing medications, the dose should be started low and built up gradually. Amitriptyline, milnacipran and duloxetine are first-line agents for fibromyalgia, but most patients do not find significant improvement on these medications (some improvement was noted with sleep and pain, but fatigue and quality of life were only minimally improved), and many have significant side effects.35 When first-line agents do not work, a combination of medications can be tried; for example, duloxetine in the morning and a tricyclic antidepressant before bed. Combinations of medications work through different mechanisms of action and focus on different symptoms. At times the addition of analgesics or anti-inflammatory medication can be tried. Acetaminophen, NSAIDs or tramadol may be considered to target pain when more traditional FMS agents do not work. Generally, opioids should be avoided in FMS.

Rheumatoid arthritis (RA), a chronic, destructive, sometimes deforming disease, attacks the collagen in the body, especially in the joints. Rheumatoid arthritis is associated with widespread symptoms such as fatigue, fever, poor appetite, nerve damage and enlargement of the spleen and lymph nodes. RA can irreversibly damage joints; therefore, early diagnosis and treatment to control inflammation can improve outcomes of the disease. Treatment options include psychosocial care, patient education, therapy and pharmacologic treatment. A rheumatologist should be involved in the care of patients with RA, as disease-modifying antirheumatic drugs (DMARDs) are complex to use and require extensive monitoring; DMARD therapy is beyond the scope of this article. If therapy is started early, the patient will experience better outcomes. NSAIDs and glucocorticoids are also used in the management of RA. They can be used as bridging therapy to get quick control of inflammation until the DMARDs take effect, and can be used for pain control.

Osteoarthritis is the most prevalent joint condition globally and is one of the leading sources of pain and disability in older adults.2 Arthritis affects twenty percent of adults and costs more than $128 billion annually in the United States. As the population ages, the burden of OA will increase.36 Managing arthritis improves mobility, decreases falls, decreases death rates and improves quality of life.
Osteoarthritis is defined as a joint disease with deterioration of the joint and abnormal bone formation. OA is present when the cartilage covering the ends of the bones, which normally cushions them, no longer does its job: the cartilage wears away, and the ends of the bones rub together. Treatment of osteoarthritis focuses on pain control and maintaining function. In the near future there may be treatments available to reverse or even cure the disease process, but at present symptom control is the only option. Treatment focuses on medication and non-medication means to control the pain and minimize disability.

Non-drug treatment is first-line management, as it bypasses the negative effects drugs have on the body. Non-drug treatments include exercise, nutrition, physical and occupational therapy, heat and cold treatments, ultrasound, weight loss, magnets and patient education. When non-drug methods do not provide adequate relief, medications are used to treat OA. Acetaminophen, primarily due to its lack of negative side effects when compared to non-steroidal anti-inflammatory medications, is recommended as first-line treatment for OA.37 Acetaminophen is more likely to be beneficial if the arthritis is not inflammatory. Topical NSAIDs may be used, especially if the disease is localized to one area. Topical agents are associated with a significantly lower rate of adverse events than systemic agents. In the United States, diclofenac sodium topical gel and diclofenac sodium topical solution are available for the management of osteoarthritis. Other topical agents can provide significant relief for patients with OA. Capsaicin (Zostrix) decreases the neurotransmitter called substance P, which is involved in the transmission of pain. Capsaicin is applied three to four times a day, and it may take a few weeks before it provides significant pain relief. Hands should be washed after contact with the substance. Another topical agent sometimes used for the treatment of localized pain is the lidocaine (Lidoderm) patch. The patch is not approved by the Food and Drug Administration for use in OA but is often used. It is a small patch applied to the skin around the painful joint and worn for no more than 12 hours a day. Other options include tramadol, codeine, hydrocodone, hydromorphone, oxycodone, fentanyl, and morphine. Intra-articular steroid injections can be used for painful joints. This injection involves placing a needle directly into the arthritic joint and injecting a steroid along with a numbing agent. No more than three injections per year should be given.38 When medical treatment fails, surgery is the next option. Surgical options include arthroscopy, osteotomy, total joint arthroplasty, or joint fusion.

Pain is a disagreeable sensory and emotional experience connected with actual or potential tissue damage, or explained in terms of such damage. Many conditions have the potential to cause pain. Understanding these conditions, how to assess them and how to treat them is a vital part of adequately managing pain. In the current health care system, much pain is not even addressed. Many regulatory agencies have implemented guidelines within the health care system to help with addressing the pain epidemic.
In February 2013, a report issued by the United Nations stated that certain types of abuse in healthcare settings “may cross a threshold of mistreatment that is tantamount to torture or cruel, inhuman or degrading treatment or punishment.” In particular, the United Nations report emphasized that countries’ drug control laws should recognize the “indispensable nature of narcotic and psychotropic drugs for the relief of pain and suffering,” and governments should appraise national laws “to guarantee adequate availability of those medicines for legitimate medical uses.”5

It is the role of the health care team to perform a good initial pain assessment and an ongoing assessment of pain. Proper pain management requires a team approach in the assessment and treatment of pain. Many options are available for the management of pain, including non-pharmacological options, non-opioid medications, opioid medications, and adjunctive medications. Opioid analgesics, while very good at managing pain, have led to many social and legal problems, including overuse and diversion. The health care team also has the responsibility to partner with the patient to properly manage the pain. Each health care team member has a role in the management of pain. If health care team members perform their roles and the patient takes an active role in his/her care, the adequate treatment of pain is a very attainable goal.

1. American Academy of Pain Medicine. ‘Get the Facts on Pain.’ (Visit Source) Retrieved April 10, 2016.
2. Wright, Shelagh (2015). ‘Pain Management in Nursing Practice,’ 1st ed. SAGE Publications Inc., 2455 Teller Road, Thousand Oaks, CA 91320, in association with the International Association for the Study of Pain (IASP).
3. Moller, Aage R. (2014). ‘Pain: Its Anatomy, Physiology and Treatment,’ 2nd ed. Aage R. Moller Publishing, Richardson, TX 75080.
4. MedicineNet.com. (2016). Definition of Pain. Retrieved June 4, 2016 from (Visit Source).
5. Foreman, Judy (2014). ‘A Nation in Pain: Healing Our Biggest Health Problem,’ 1st ed. Oxford University Press, 198 Madison Ave, New York, NY 10016.
6. American Academy of Pain Medicine. (2016). AAPM Facts and Figures on Pain. (Visit Source) Retrieved April 10, 2017.
7. Institute of Medicine. (2011). Relieving Pain in America: A Blueprint for Transforming Prevention, Care, Education, and Research. Retrieved May 1, 2016 from (Visit Source).
8. SAMHSA. (2012). Results from the 2011 National Survey on Drug Use and Health: Summary of National Findings, NSDUH Series H-44, HHS Publication No. (SMA) 12-4713. Rockville, MD.
9. Centers for Disease Control and Prevention. (2014). Opioid Painkiller Prescribing. Retrieved May 1, 2016 from (Visit Source).
10. National Institute on Drug Abuse (NIH), National Institutes of Health; U.S. Department of Health and Human Services. ‘Substance Abuse in the Military.’ Revised March 2013. https://www.drugabuse.gov. Retrieved April 10, 2017.
11. Merikangas KR & McClair VL. (2012). Epidemiology of substance use disorders. Human Genetics, 131(6), 779-89.
12. Substance Abuse and Mental Health Services Administration, Center for Behavioral Health Statistics and Quality. (2013). The TEDS Report: 2001-2011: National Admissions to Substance Abuse Treatment Services. Retrieved May 1, 2016 from (Visit Source).
13. Birnbaum HG, White AG, Schiller M, Waldman T, Cleveland JM, & Roland CL. (2011). Societal costs of prescription opioid abuse, dependence, and misuse in the United States. Pain Medicine, 12(4), 657-667.
14. Centers for Disease Control and Prevention. (2016). CDC Guideline for Prescribing Opioids for Chronic Pain – United States, 2016. Recommendations and Reports, March 18, 2016, 65(1), 1-49. https://www.cdc.gov. Retrieved April 11, 2017.
15. Boscarino JA, Rukstalis M, Hoffman SN, Han JJ, Erlich PM, Gerhard DS & Stewart WF. (2010). Risk factors for drug dependence among out-patients on opioid therapy in a large US health-care system. Addiction, 105(10), 1776-82.
16. Sehgal N, Manchikanti L & Smith HS. (2012). Prescription opioid abuse in chronic pain: a review of opioid abuse predictors and strategies to curb opioid abuse. Pain Physician, 15(3 Suppl), ES67-92.
17. The Joint Commission. (April 2016). Statement on Pain Management. www.jointcommission.org. Retrieved April 28, 2017.
18. Sielski R, Rief W, & Glombiewski JA. (2016). Efficacy of biofeedback in chronic back pain: a meta-analysis. International Journal of Behavioral Medicine. Retrieved June 19, 2016 from (Visit Source).
19. Pop-Busui R, Boulton AJM, Feldman EL, Bril V, Freeman R, & Malik RA. (2017). Diabetic neuropathy: a position statement by the American Diabetes Association. Diabetes Care, 40(1), 136-154.
20. Franklin G. (2014). Opioids for chronic noncancer pain. Neurology, 83(14), 1277-1284.
21. American Medical Directors Association (AMDA). (2012). Pain management in the long term care setting. Columbia (MD): American Medical Directors Association.
22. Chou R, Fanciullo GJ, Fine PG, Adler JA, Ballantyne JC, Davies P, Donovan MI, Fishbain DA, Foley KM, Fudin J, Gilson AM, Kelter A, Mauskop A, O'Connor PG, Passik SD, Pasternak GW, Portenoy RK, Rich BA, Roberts RG, Todd KH, Miaskowski C; American Pain Society-American Academy of Pain Medicine Opioids Guidelines Panel. (2009). Clinical guidelines for the use of chronic opioid therapy in chronic noncancer pain. Journal of Pain, 10(2), 113-30.
23. Kanwaljeet JS. (2016). Prevention and Treatment of Pain in the Neonate. Retrieved May 3, 2016 from (Visit Source).
24. AGS Panel on Persistent Pain in Older Persons. (2002). The management of persistent pain in older persons. Journal of the American Geriatrics Society, 50, S205.
25. Herr K, Coyne PJ, McCaffery M, Manworren R, & Merkel S. (2011). Pain assessment in the patient unable to self report. Pain Management Nursing, August 20, 2011. (Visit Source) Retrieved April 28, 2017.
26. Renner JA. (2014). Managing Patients with Pain, Psychiatric Co-Morbidity & Addiction. Retrieved June 1, 2016 from (Visit Source).
27. DeBaun MR & Vichinsky EP. (2016). Vasoocclusive pain management in sickle cell disease. Retrieved May 15, 2016 from (Visit Source).
28. Olek MJ. (2016). Clinical features of multiple sclerosis in adults. Retrieved May 25, 2016 from (Visit Source).
29. Garza I. (2016). Central neuropathic facial pain. Retrieved May 25, 2016 from (Visit Source).
30. Abrams GM & Wakasa M. (2016). Chronic complications of spinal cord injury and disease. Retrieved May 25, 2016 from (Visit Source).
31. Bajwa ZH & Ortega E. (2016). Post-herpetic neuralgia. Retrieved May 20, 2016 from (Visit Source).
32. Wheeler SG, Wipf JE, Staiger TO, & Deyo RA. (2016). Evaluation of low back pain. Retrieved May 24, 2016 from (Visit Source).
33. Davies PS & D’Arcy Y. (2013). Cancer Pain Management. Springer Publishing Company; New York.
34. Vincent A, Lahr BD, Wolfe F, Clauw DJ, Whipple MO, Oh TH, … St Sauver J. (2013). Prevalence of fibromyalgia: a population-based study in Olmsted County, Minnesota, utilizing the Rochester Epidemiology Project. Arthritis Care and Research, 65(5), 786-92.
35. Häuser W, Wolfe F, Tölle T, Uçeyler N, & Sommer C. (2012). The role of antidepressants in the management of fibromyalgia syndrome: a systematic review and meta-analysis. CNS Drugs, 26(4), 297-307.
36. Healthy People 2010. (2014). DATA 2010. Retrieved May 1, 2016 from (Visit Source).
37. US Department of Health and Human Services. (2011). American College of Rheumatology 2012 recommendations for the use of nonpharmacologic and pharmacologic therapies in osteoarthritis of the hand, hip, and knee. Retrieved June 6, 2016 from (Visit Source).
38. Lozada CJ. (2016). Osteoarthritis Treatment & Management. Retrieved June 10, 2016 from (Visit Source).
In nuclear physics, beta decay (β-decay) is a type of radioactive decay in which a beta particle (a fast energetic electron or positron) is emitted from an atomic nucleus, transforming the original nuclide to its isobar. For example, beta decay of a neutron transforms it into a proton by the emission of an electron accompanied by an antineutrino; or, conversely, a proton is converted into a neutron by the emission of a positron with a neutrino in so-called positron emission. Neither the beta particle nor its associated (anti-)neutrino exists within the nucleus prior to beta decay; they are created in the decay process. By this process, unstable atoms obtain a more stable ratio of protons to neutrons. The probability of a nuclide decaying due to beta and other forms of decay is determined by its nuclear binding energy. The binding energies of all existing nuclides form what is called the nuclear band or valley of stability. For either electron or positron emission to be energetically possible, the energy release (see below) or Q value must be positive. Beta decay is a consequence of the weak force, which is characterized by relatively long decay times. Nucleons are composed of up quarks and down quarks, and the weak force allows a quark to change its flavour by emission of a W boson, leading to the creation of an electron/antineutrino or positron/neutrino pair. For example, a neutron, composed of two down quarks and an up quark, decays to a proton composed of a down quark and two up quarks. Electron capture is sometimes included as a type of beta decay, because the basic nuclear process, mediated by the weak force, is the same. In electron capture, an inner atomic electron is captured by a proton in the nucleus, transforming it into a neutron, and an electron neutrino is released. The two types of beta decay are known as beta minus and beta plus. In beta minus (β−) decay, a neutron is converted to a proton, and the process creates an electron and an electron antineutrino; while in beta plus (β+) decay, a proton is converted to a neutron and the process creates a positron and an electron neutrino. β+ decay is also known as positron emission. Beta decay conserves a quantum number known as the lepton number, or the number of electrons and their associated neutrinos (other leptons are the muon and tau particles). These particles have lepton number +1, while their antiparticles have lepton number −1. Since a proton or neutron has lepton number zero, β+ decay (a positron, or antielectron) must be accompanied by an electron neutrino, while β− decay (an electron) must be accompanied by an electron antineutrino. In this form of decay, the original element becomes a new chemical element in a process known as nuclear transmutation. This new element has an unchanged mass number A, but an atomic number Z that is increased by one. As in all nuclear decays, the decaying element (in this case 14C) is known as the parent nuclide, while the resulting element (in this case 14N) is known as the daughter nuclide. β+ decay also results in nuclear transmutation, with the resulting element having an atomic number that is decreased by one. The beta spectrum, or distribution of energy values for the beta particles, is continuous. The total energy of the decay process is divided between the electron, the antineutrino, and the recoiling nuclide. Consider, for example, an electron emitted with 0.40 MeV of energy in the beta decay of 210Bi.
In this example, the total decay energy is 1.16 MeV, so the antineutrino carries the remaining energy: 1.16 MeV − 0.40 MeV = 0.76 MeV. An electron at the far right of the curve would have the maximum possible kinetic energy, leaving the energy of the neutrino to be only its small rest mass.

Radioactivity was discovered in 1896 by Henri Becquerel in uranium, and subsequently observed by Marie and Pierre Curie in thorium and in the new elements polonium and radium. In 1899, Ernest Rutherford separated radioactive emissions into two types: alpha and beta (now beta minus), based on penetration of objects and ability to cause ionization. Alpha rays could be stopped by thin sheets of paper or aluminium, whereas beta rays could penetrate several millimetres of aluminium. In 1900, Paul Villard identified a still more penetrating type of radiation, which Rutherford identified as a fundamentally new type in 1903 and termed gamma rays. Alpha, beta, and gamma are the first three letters of the Greek alphabet. In 1900, Becquerel measured the mass-to-charge ratio (m/e) for beta particles by the method J.J. Thomson had used to study cathode rays and identify the electron. He found that m/e for a beta particle is the same as for Thomson's electron, and therefore suggested that the beta particle is in fact an electron. In 1901, Rutherford and Frederick Soddy showed that alpha and beta radioactivity involves the transmutation of atoms into atoms of other chemical elements. In 1913, after the products of more radioactive decays were known, Soddy and Kazimierz Fajans independently proposed their radioactive displacement law, which states that beta (i.e., β−) emission from one element produces another element one place to the right in the periodic table, while alpha emission produces an element two places to the left.

The study of beta decay provided the first physical evidence for the existence of the neutrino. In both alpha and gamma decay, the resulting alpha or gamma particle has a narrow energy distribution, since the particle carries the energy from the difference between the initial and final nuclear states. However, the kinetic energy distribution, or spectrum, of beta particles measured by Lise Meitner and Otto Hahn in 1911 and by Jean Danysz in 1913 showed multiple lines on a diffuse background. These measurements offered the first hint that beta particles have a continuous spectrum. In 1914, James Chadwick used a magnetic spectrometer with one of Hans Geiger's new counters to make more accurate measurements, which showed that the spectrum was continuous. The distribution of beta particle energies was in apparent contradiction to the law of conservation of energy. If beta decay were simply electron emission as assumed at the time, then the energy of the emitted electron should have a particular, well-defined value. For beta decay, however, the observed broad distribution of energies suggested that energy is lost in the beta decay process. This spectrum was puzzling for many years.

A second problem is related to the conservation of angular momentum. Molecular band spectra showed that the nuclear spin of nitrogen-14 is 1 (i.e., equal to the reduced Planck constant) and, more generally, that the spin is integral for nuclei of even mass number and half-integral for nuclei of odd mass number. This was later explained by the proton-neutron model of the nucleus. Beta decay leaves the mass number unchanged, so the change of nuclear spin must be an integer.
However, the electron spin is 1/2, hence angular momentum would not be conserved if beta decay were simply electron emission. From 1920 to 1927, Charles Drummond Ellis (along with Chadwick and colleagues) further established that the beta decay spectrum is continuous. In 1933, Ellis and Nevill Mott obtained strong evidence that the beta spectrum has an effective upper bound in energy. Niels Bohr had suggested that the beta spectrum could be explained if conservation of energy were true only in a statistical sense; this principle might then be violated in any given decay. However, the upper bound in beta energies determined by Ellis and Mott ruled out that notion. Now the problem of how to account for the variability of energy in known beta decay products, as well as for conservation of momentum and angular momentum in the process, became acute.

In a famous letter written in 1930, Wolfgang Pauli attempted to resolve the beta-particle energy conundrum by suggesting that, in addition to electrons and protons, atomic nuclei also contained an extremely light neutral particle, which he called the neutron. He suggested that this "neutron" was also emitted during beta decay (thus accounting for the known missing energy, momentum, and angular momentum), but it had simply not yet been observed. In 1931, Enrico Fermi renamed Pauli's "neutron" the "neutrino" (roughly 'little neutral one' in Italian). In 1934, Fermi published his landmark theory for beta decay, in which he applied the principles of quantum mechanics to matter particles, supposing that they can be created and annihilated, just as light quanta are in atomic transitions. Thus, according to Fermi, neutrinos are created in the beta-decay process rather than contained in the nucleus; the same holds for electrons. The neutrino interaction with matter was so weak that detecting it proved a severe experimental challenge. Further indirect evidence of the existence of the neutrino was obtained by observing the recoil of nuclei that emitted such a particle after absorbing an electron. Neutrinos were finally detected directly in 1956 by Clyde Cowan and Frederick Reines in the Cowan-Reines neutrino experiment. The properties of neutrinos were (with a few minor modifications) as predicted by Pauli and Fermi.

In 1934, Frédéric and Irène Joliot-Curie bombarded aluminium with alpha particles to effect the nuclear reaction 27Al + 4He → 30P + n, and observed that the product isotope 30P emits a positron identical to those found in cosmic rays (discovered by Carl David Anderson in 1932). This was the first example of β+ decay (positron emission), which they termed artificial radioactivity since 30P is a short-lived nuclide which does not exist in nature. In recognition of their discovery, the couple were awarded the Nobel Prize in Chemistry in 1935. The theory of electron capture was first discussed by Gian-Carlo Wick in a 1934 paper, and then developed by Hideki Yukawa and others. K-electron capture was first observed in 1937 by Luis Alvarez, in the nuclide 48V. Alvarez went on to study electron capture in 67Ga and other nuclides. In 1956, Tsung-Dao Lee and Chen Ning Yang noticed that there was no evidence that parity was conserved in weak interactions, and so they postulated that this symmetry may not be preserved by the weak force. They sketched the design for an experiment to test conservation of parity in the laboratory.
Later that year, Chien-Shiung Wu and coworkers conducted the Wu experiment, showing an asymmetrical beta decay of cobalt-60 at cold temperatures that proved that parity is not conserved in beta decay. This surprising result overturned long-held assumptions about parity and the weak force. In recognition of their theoretical work, Lee and Yang were awarded the Nobel Prize for Physics in 1957.

In β− decay, the weak interaction converts an atomic nucleus into a nucleus with atomic number increased by one, while emitting an electron (e−) and an electron antineutrino (ν̄e). β− decay generally occurs in neutron-rich nuclei. The generic equation is $^{A}_{Z}X \to {}^{A}_{Z+1}X' + e^- + \bar\nu_e$. Another example is the free neutron decaying by β− decay into a proton: $n \to p + e^- + \bar\nu_e$. At the fundamental level (as depicted in a Feynman diagram), this is caused by the conversion of the negatively charged (−1/3 e) down quark to the positively charged (+2/3 e) up quark by emission of a W− boson; the W− boson subsequently decays into an electron and an electron antineutrino.

In β+ decay, or "positron emission", the weak interaction converts an atomic nucleus into a nucleus with atomic number decreased by one, while emitting a positron (e+) and an electron neutrino (νe). β+ decay generally occurs in proton-rich nuclei. The generic equation is $^{A}_{Z}X \to {}^{A}_{Z-1}X' + e^+ + \nu_e$. This may be considered as the decay of a proton inside the nucleus to a neutron: $p \to n + e^+ + \nu_e$. However, β+ decay cannot occur in an isolated proton because it requires energy, the mass of the neutron being greater than the mass of the proton. β+ decay can only happen inside nuclei when the daughter nucleus has a greater binding energy (and therefore a lower total energy) than the mother nucleus. The difference between these energies goes into the reaction of converting a proton into a neutron, a positron and a neutrino, and into the kinetic energy of these particles. This process is the opposite of negative beta decay, in that the weak interaction converts a proton into a neutron by converting an up quark into a down quark, resulting in the emission of a W+ or the absorption of a W−.

In all cases where β+ decay (positron emission) of a nucleus is allowed energetically, electron capture is allowed as well. This is a process during which a nucleus captures one of its atomic electrons, resulting in the emission of a neutrino: $^{A}_{Z}X + e^- \to {}^{A}_{Z-1}X' + \nu_e$. All emitted neutrinos are of the same energy. In proton-rich nuclei where the energy difference between the initial and final states is less than $2m_ec^2$, β+ decay is not energetically possible, and electron capture is the sole decay mode. If the captured electron comes from the innermost shell of the atom, the K-shell, which has the highest probability to interact with the nucleus, the process is called K-capture. If it comes from the L-shell, the process is called L-capture, etc. Electron capture is a competing (simultaneous) decay process for all nuclei that can undergo β+ decay. The converse, however, is not true: electron capture is the only type of decay that is allowed in proton-rich nuclides that do not have sufficient energy to emit a positron and neutrino.

Beta decay does not change the number (A) of nucleons in the nucleus, but changes only its charge Z. Thus the set of all nuclides with the same A can be introduced; these isobaric nuclides may turn into each other via beta decay. For a given A, there is one nuclide that is most stable. It is said to be beta stable, because it presents a local minimum of the mass excess: if such a nucleus has (A, Z) numbers, the neighbour nuclei (A, Z−1) and (A, Z+1) have higher mass excess and can beta decay into (A, Z), but not vice versa.
For all odd mass numbers A, there is only one known beta-stable isobar. For even A, there are up to three different beta-stable isobars experimentally known; for example, 124Sn, 124Te, and 124Xe are all beta-stable. There are about 350 known beta-decay stable nuclides. Usually unstable nuclides are clearly either "neutron rich" or "proton rich", with the former undergoing beta decay and the latter undergoing electron capture (or more rarely, due to the higher energy requirements, positron decay). However, in a few cases of odd-proton, odd-neutron radionuclides, it may be energetically favorable for the radionuclide to decay to an even-proton, even-neutron isobar either by undergoing beta-positive or beta-negative decay. An often-cited example is the single isotope 64Cu (29 protons, 35 neutrons), which illustrates three types of beta decay in competition. Copper-64 has a half-life of about 12.7 hours. This isotope has one unpaired proton and one unpaired neutron, so either the proton or the neutron can decay. This particular nuclide (though not all nuclides in this situation) is almost equally likely to decay through proton decay by positron emission (18%) or electron capture (43%) to 64Ni, as it is through neutron decay by electron emission (39%) to 64Zn. Most naturally occurring nuclides on earth are beta stable. Those that are not have half-lives ranging from under a second to periods of time significantly greater than the age of the universe. One common example of a long-lived isotope is the odd-proton, odd-neutron nuclide 40K, which undergoes all three types of beta decay (β−, β+, and electron capture) with a half-life of about 1.25 × 10^9 years.

Beta decay just changes a neutron to a proton or, in the case of positive beta decay (or electron capture), a proton to a neutron, so the number of individual quarks doesn't change. It is only the baryon flavor that changes, here labelled as the isospin. Up and down quarks have total isospin $I = 1/2$ and isospin projections $I_z = \pm 1/2$; all other quarks have $I = 0$. Lepton number is conserved as well: all leptons are assigned a value of +1, antileptons −1, and non-leptonic particles 0. For allowed decays, the net orbital angular momentum is zero, hence only spin quantum numbers are considered. The electron and antineutrino are fermions, spin-1/2 objects, therefore they may couple to total $S = 1$ (parallel) or $S = 0$ (anti-parallel). For forbidden decays, orbital angular momentum must also be taken into consideration.

The Q value is defined as the total energy released in a given nuclear decay. In beta decay, Q is therefore also the sum of the kinetic energies of the emitted beta particle, neutrino, and recoiling nucleus. (Because of the large mass of the nucleus compared to that of the beta particle and neutrino, the kinetic energy of the recoiling nucleus can generally be neglected.) Beta particles can therefore be emitted with any kinetic energy ranging from 0 to Q. A typical Q is around 1 MeV, but it can range from a few keV to a few tens of MeV. Consider the generic equation for β− decay, $^{A}_{Z}X \to {}^{A}_{Z+1}X' + e^- + \bar\nu_e$. The Q value for this decay is $Q = \left[ m_N(^{A}_{Z}X) - m_N(^{A}_{Z+1}X') - m_e - m_{\bar\nu_e} \right] c^2$, where $m_N(^{A}_{Z}X)$ is the mass of the nucleus of the $^{A}_{Z}X$ atom, $m_e$ is the mass of the electron, and $m_{\bar\nu_e}$ is the mass of the electron antineutrino. In other words, the total energy released is the mass energy of the initial nucleus, minus the mass energy of the final nucleus, electron, and antineutrino. The mass of the nucleus $m_N$ is related to the standard atomic mass $m$ by $m(^{A}_{Z}X)c^2 = m_N(^{A}_{Z}X)c^2 + Zm_ec^2 - \sum_i B_i$. That is, the total atomic mass is the mass of the nucleus, plus the mass of the electrons, minus the sum of all electron binding energies $B_i$ for the atom.
This equation is rearranged to find $m_N(^{A}_{Z}X)$, and $m_N(^{A}_{Z+1}X')$ is found similarly. Substituting these nuclear masses into the Q-value equation, while neglecting the nearly-zero antineutrino mass and the difference in electron binding energies, which is very small for high-Z atoms, we have $Q = \left[ m(^{A}_{Z}X) - m(^{A}_{Z+1}X') \right] c^2$. This energy is carried away as kinetic energy by the electron and neutrino. Because the reaction will proceed only when the Q value is positive, β− decay can occur when the mass of atom $^{A}_{Z}X$ is greater than the mass of atom $^{A}_{Z+1}X'$. The equations for β+ decay are similar, with the generic equation $^{A}_{Z}X \to {}^{A}_{Z-1}X' + e^+ + \nu_e$. However, in this case the electron masses do not cancel, and we are left with $Q = \left[ m(^{A}_{Z}X) - m(^{A}_{Z-1}X') - 2m_e \right] c^2$. Because the reaction will proceed only when the Q value is positive, β+ decay can occur when the mass of atom $^{A}_{Z}X$ exceeds that of $^{A}_{Z-1}X'$ by at least twice the mass of the electron. The analogous calculation for electron capture must take into account the binding energy of the electrons. This is because the atom will be left in an excited state after capturing the electron, and the binding energy of the captured innermost electron is significant. Using the generic equation for electron capture, $^{A}_{Z}X + e^- \to {}^{A}_{Z-1}X' + \nu_e$, we have $Q = \left[ m(^{A}_{Z}X) - m(^{A}_{Z-1}X') \right] c^2 - B_n$, where $B_n$ is the binding energy of the captured electron. Because the binding energy of the electron is much less than the mass of the electron, nuclei that can undergo β+ decay can always also undergo electron capture, but the reverse is not true.

Beta decay can be considered as a perturbation as described in quantum mechanics, and thus Fermi's Golden Rule can be applied. This leads to an expression for the kinetic energy spectrum N(T) of emitted betas as follows: $N(T) = C_L(T)\, F(Z, T)\, p\, E\, (Q - T)^2$, where T is the kinetic energy, $C_L$ is a shape function that depends on the forbiddenness of the decay (it is constant for allowed decays), F(Z, T) is the Fermi function (see below) with Z the charge of the final-state nucleus, $E = T + m_ec^2$ is the total energy, $p = \sqrt{E^2 - m_e^2c^4}/c$ is the momentum, and Q is the Q value of the decay. The kinetic energy of the emitted neutrino is given approximately by Q minus the kinetic energy of the beta. A classic example is the beta decay spectrum of 210Bi (originally called RaE). The Fermi function that appears in the beta spectrum formula accounts for the Coulomb attraction or repulsion between the emitted beta and the final-state nucleus. Approximating the associated wavefunctions to be spherically symmetric, the Fermi function can be calculated analytically; for non-relativistic betas ($Q \ll m_ec^2$), it can be approximated by $F(Z, T) \approx \frac{2\pi\eta}{1 - e^{-2\pi\eta}}$, where $\eta$ is proportional to $\pm Z/v$, with $v$ the speed of the emitted beta (the upper sign for electrons, the lower for positrons).

A Kurie plot (also known as a Fermi-Kurie plot) is a graph used in studying beta decay, developed by Franz N. D. Kurie, in which the square root of the number of beta particles whose momenta (or energy) lie within a certain narrow range, divided by the Fermi function, is plotted against beta-particle energy. It is a straight line for allowed transitions and some forbidden transitions, in accord with the Fermi beta-decay theory. The energy-axis (x-axis) intercept of a Kurie plot corresponds to the maximum energy imparted to the electron/positron (the decay's Q value). With a Kurie plot one can find the limit on the effective mass of a neutrino. After the discovery of parity non-conservation (see History), it was found that, in beta decay, electrons are emitted mostly with negative helicity, i.e., they move, naively speaking, like left-handed screws driven into a material (they have negative longitudinal polarization). Conversely, positrons have mostly positive helicity, i.e., they move like right-handed screws.
Neutrinos (emitted in positron decay) have negative helicity, while antineutrinos (emitted in electron decay) have positive helicity. The higher the energy of the particles, the higher their polarization.

Beta decays can be classified according to the angular momentum (L value) and total spin (S value) of the emitted radiation. Since total angular momentum must be conserved, including orbital and spin angular momentum, beta decay occurs by a variety of quantum state transitions to various nuclear angular momentum or spin states, known as "Fermi" or "Gamow-Teller" transitions. When the beta decay particles carry no angular momentum (L = 0), the decay is referred to as "allowed"; otherwise it is "forbidden". Other decay modes, which are rare, are known as bound state decay and double beta decay. A Fermi transition is a beta decay in which the spins of the emitted electron (positron) and anti-neutrino (neutrino) couple to total spin $S = 0$, leading to an angular momentum change $\Delta J = 0$ between the initial and final states of the nucleus (assuming an allowed transition). In the non-relativistic limit, the nuclear part of the operator for a Fermi transition is given by $\mathcal{O}_F = G_V \sum_a \hat\tau_a^{\pm}$, with $G_V$ the weak vector coupling constant and $\hat\tau^{\pm}$ the isospin raising and lowering operators acting on the a-th nucleon. A Gamow-Teller transition is a beta decay in which the spins of the emitted electron (positron) and anti-neutrino (neutrino) couple to total spin $S = 1$, leading to an angular momentum change $\Delta J = 0, \pm 1$ between the initial and final states of the nucleus (assuming an allowed transition). In this case, the nuclear part of the operator is given by $\mathcal{O}_{GT} = G_A \sum_a \hat\sigma_a \hat\tau_a^{\pm}$, with $G_A$ the weak axial-vector coupling constant and $\hat\sigma$ the spin Pauli matrices, which can produce a spin-flip in the decaying nucleon.

When L > 0, the decay is referred to as "forbidden". Nuclear selection rules require high L values to be accompanied by changes in nuclear spin (J) and parity (π). The selection rules for the Lth forbidden transitions are $\Delta J = L - 1,\ L,\ L + 1$ and $\Delta\pi = (-1)^L$, where $\Delta\pi = 1$ or $-1$ corresponds to no parity change or parity change, respectively. The special case of a transition between isobaric analogue states, where the structure of the final state is very similar to the structure of the initial state, is referred to as "superallowed" for beta decay, and proceeds very quickly. The following table lists the ΔJ and Δπ values for the first few values of L:

| Transition | ΔJ | Parity change |
| --- | --- | --- |
| First forbidden | 0, 1, 2 | yes |
| Second forbidden | 1, 2, 3 | no |
| Third forbidden | 2, 3, 4 | yes |

A very small minority of free neutron decays (about four per million) are so-called "two-body decays", in which the proton, electron and antineutrino are produced, but the electron fails to gain the 13.6 eV energy necessary to escape the proton, and therefore simply remains bound to it, as a neutral hydrogen atom. In this type of beta decay, in essence all of the neutron decay energy is carried off by the antineutrino. For fully ionized atoms (bare nuclei), it is possible in like manner for electrons to fail to escape the atom, and to be emitted from the nucleus into low-lying atomic bound states (orbitals). This cannot occur for neutral atoms whose low-lying bound states are already filled by electrons. Bound-state β decays were predicted by Daudel, Jean, and Lecoin in 1947, and the phenomenon in fully ionized atoms was first observed for 163Dy66+ in 1992 by Jung et al. of the Darmstadt Heavy-Ion Research group. Although neutral 163Dy is a stable isotope, the fully ionized 163Dy66+ undergoes β decay into the K and L shells with a half-life of 47 days. Another possibility is that a fully ionized atom undergoes greatly accelerated β
decay, as observed for 187Re by Bosch et al., also at Darmstadt. Neutral 187Re does undergo β decay with a half-life of 42 × 10^9 years, but for fully ionized 187Re75+ this is shortened by a factor of 10^9 to only 32.9 years. For comparison, the variation of decay rates of other nuclear processes due to chemical environment is less than 1%.

Some nuclei can undergo double beta decay (ββ decay), in which the charge of the nucleus changes by two units. Double beta decay is difficult to study, as the process has an extremely long half-life. In nuclei for which both β decay and ββ decay are possible, the rarer ββ decay process is effectively impossible to observe. However, in nuclei where β decay is forbidden but ββ decay is allowed, the process can be seen and a half-life measured. Thus, ββ decay is usually studied only for beta-stable nuclei. Like single beta decay, double beta decay does not change A; thus, at least one of the nuclides with some given A has to be stable with regard to both single and double beta decay. "Ordinary" double beta decay results in the emission of two electrons and two antineutrinos. If neutrinos are Majorana particles (i.e., they are their own antiparticles), then a decay known as neutrinoless double beta decay will occur. Most neutrino physicists believe that neutrinoless double beta decay has never been observed.
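The Q-value bookkeeping above is easy to check numerically. The following is an illustrative Python sketch (not part of the original article): it computes Q for the β− decay of 14C from standard atomic masses and evaluates the allowed spectrum shape, with the Fermi function set to 1 for simplicity.

```python
import numpy as np

ME_C2 = 0.510999      # electron rest energy in MeV
U_TO_MEV = 931.494    # 1 atomic mass unit in MeV/c^2

def q_beta_minus(m_parent_u, m_daughter_u):
    """Q value of beta-minus decay from *atomic* masses (electron masses cancel)."""
    return (m_parent_u - m_daughter_u) * U_TO_MEV

# 14C -> 14N, atomic masses in u (values from standard mass tables)
q = q_beta_minus(14.003242, 14.003074)   # ~0.156 MeV

def spectrum(T, Q):
    """Allowed beta spectrum shape N(T) ~ p E (Q - T)^2, with F(Z, T) set to 1."""
    E = T + ME_C2                     # total electron energy (MeV)
    p = np.sqrt(E**2 - ME_C2**2)      # electron momentum (MeV/c)
    return p * E * (Q - T)**2

T = np.linspace(0.0, q, 200)
N = spectrum(T, q)
print(f"Q = {q:.3f} MeV; spectrum peaks near T = {T[np.argmax(N)]:.3f} MeV")
```

The computed Q of roughly 0.156 MeV matches the well-known endpoint energy of carbon-14 beta decay, and the spectrum vanishes at both T = 0 and T = Q, as the continuous-spectrum discussion above requires.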
Bayes' formula, or Bayes' theorem, describes how conditional probabilities affect each other. The conditional probability P(A|B) depends not only on the relationship between A and B, but also on the global probability of A and B individually. It is calculated as P(A|B) = P(B|A) P(A) / P(B), where P(A) is the prior probability of A (not taking B into account), P(A|B) is the posterior probability, P(B|A) is the likelihood, and P(B) is the prior probability of B. Bayes' formula is often used to calculate true probabilities when performing tests with false-positive or false-negative results. If a test has a high false-positive rate, then the probability of a true positive is less than the probability of testing positive, as the false-positive rate inflates the number of positive tests. For example, if 1% of a population has a disease, the test detects it 99% of the time, and the false-positive rate is 5%, then P(disease | positive test) = (0.99 × 0.01) / (0.99 × 0.01 + 0.05 × 0.99) ≈ 0.17.

Suppose you want to sample the music of a band you haven’t heard before. You’re given 5 CDs, each with 12 tracks. You’ll use a probability method to sample 5 tracks to play.
a. Describe (briefly) how you would carry out a simple random sampling method to pick the 5 tracks.
b. Describe (briefly) how you would carry out a stratified sampling method to pick the 5 tracks.
c. Describe (briefly) how you would carry out a cluster sampling method to pick the 5 tracks.

This is part of my capstone. I'm trying to figure out: the t-test? Sample size? Justification/power analysis? p level? Comparing what? The student will be able to identify and apply appropriate statistical analysis, to include techniques in data collection, review, critique, interpretation and inference in the aviation and aerospace industry.

Null hypothesis: There is no significant difference in fatigue symptoms between military pilots that exercise 30 minutes 3 times a week and pilots that do not exercise during the week while on deployment. Alternative hypothesis: Military pilots that exercise 30 minutes 3 times a week show fewer symptoms of fatigue than military pilots that do not exercise during deployments. Being physically fit provides the pilot endurance, which helps the pilot fight fatigue during the long hours of flight.

The questionnaire will consist of 28 questions that address internal and external influences the pilots will report. Data collection will be accomplished with a questionnaire administered through Surveymonkey.com. An email will be sent to 350 pilots that work for L-3 Vertex, to their personal email addresses, with a link to the questionnaire at Surveymonkey.com. The email will explain the capstone project, the proposal and the objectives of this research. The questionnaire will be voluntary, and there will be no incentive for completing it. This will require me to show the questionnaire to each of the different program managers and receive permission to send it out to the 350 pilots. The survey will be contained in Appendix A. The geographic location for the questionnaire will be either within the Continental United States (CONUS), at the pilot’s home of record, or outside the Continental United States (OCONUS), in the desert. The questionnaire utilizes five demographic questions, 15 fatigue questions, 6 exercise questions and 1 diet question. The survey is quantitative and will be collected from pilots working for L-3 Vertex.

Interpretation, review, and critique of the data: The data will be reviewed, and once the statistical analysis is complete, the results of the analysis will be reviewed.
If the null hypothesis is rejected, that will lead us to the alternative hypothesis.
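A minimal sketch of the analysis described above, assuming the fatigue questionnaire yields a numeric score per pilot: an independent two-sample t-test at the p < .05 level, plus an a-priori power analysis for the required sample size. The scores below are invented placeholders, not data from the actual survey.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

rng = np.random.default_rng(0)
# Hypothetical fatigue scores (higher = more fatigued); placeholders only.
exercisers = rng.normal(loc=40, scale=10, size=60)
non_exercisers = rng.normal(loc=46, scale=10, size=60)

# Independent two-sample t-test of the null hypothesis above.
t_stat, p_value = stats.ttest_ind(exercisers, non_exercisers)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # reject the null if p < .05

# A-priori power analysis: pilots needed per group to detect a medium
# effect (Cohen's d = 0.5) at alpha = .05 with 80% power.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"required sample size: ~{int(np.ceil(n_per_group))} pilots per group")
```

With these conventional settings the power analysis calls for roughly 64 pilots per group, which is a useful sanity check against the expected response rate from the 350 emailed pilots.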
The performance of a plant is determined by three major factors: its genes, its environment, and the interaction between genes and environment. These three factors are explained below.

Genes are the building blocks of all living things. The genes present in a plant affect its productivity, influence how tall or short it is, or may protect the plant from a particular disease. In addition to genes, a plant’s health and productivity are also directly impacted by the environment (weather and soil) in which it is grown. Plants need water and sunlight, but too much rain can cause disease or flooding, and too much heat, especially in the absence of rainfall, can decrease productivity. The type of soil also has an effect on a plant. For example, if a plant is grown in soil that is able to hold more water than average, it will be able to better withstand an extended period of low rainfall. By characterizing the environments in which plants are grown, we can better understand how plants react to different environments. Scientists do this by precisely measuring the weather and soil in all growing locations.

A particular plant is adapted to grow best in a particular region due to many factors, including the length of the growing season (determined roughly by the time between the last frost in the spring and the first frost in the fall), expected rainfall, temperature, solar radiation, soil types and others. Some plants may tolerate drought better than others. Some plants may prefer a soil that is sandy, while others prefer clay. This is what is called a genotype-by-environment (GxE) interaction: the environment activates certain genes that allow the plant to thrive (or not) in that particular environment. Plant breeders work to develop high-yielding plants for growers across a wide range of environments. Not all environments are productive growing environments; however, scientists are working to better understand GxE and breed plants that can perform in highly stressed environments. Successfully doing so could result in crops being developed to make marginal cropland more productive, potentially reducing hunger in arid regions of the world.

Corn is one of the world’s most important crops. Each year, breeders create several new corn products, known as experimental hybrids. Corn breeders work to create corn hybrids that can maintain high yield across a wide range of environments. Historically, identifying the best hybrids has been done by trial and error, with breeders testing their experimental hybrids in a diverse set of locations and measuring their performance to select the highest-yielding hybrids. This process can take many years. Corn breeders would benefit from accurate models that can predict performance across a range of environmental scenarios. One way of modeling corn yield is that any particular hybrid (experimental cross of corn varieties) has a maximum yield potential, which then decreases depending on the environment in which it is grown. Every environment will have certain characteristics, or limiting factors, that are suboptimal for any hybrid, causing the actual yield to be less than the yield potential. Can environmental data be aggregated into useful metrics representing stresses encountered by corn throughout a growing season? Can these metrics be used to discriminate between hybrids tolerant and susceptible to the stresses they represent?
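To make the yield-potential model above concrete, here is a tiny illustrative Python sketch; the multiplicative-loss form, the function name, and the numbers are assumptions for illustration, not Syngenta's model.

```python
# Illustrative only: each limiting factor removes a fraction of the
# hybrid's maximum yield potential.
def realized_yield(yield_potential, stress_losses):
    """Apply each limiting factor as a fractional loss in [0, 1]."""
    y = yield_potential
    for loss in stress_losses:
        y *= (1.0 - loss)
    return y

# A hybrid with a 220 bu/ac ceiling facing 10% heat loss and 15% drought loss:
print(realized_yield(220.0, [0.10, 0.15]))  # -> 168.3 bu/ac
```

A tolerant hybrid would show smaller loss fractions for a given environmental stress than a susceptible one, which is exactly the distinction the challenge asks entrants to recover from data.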
Some potential environmental stresses that can have a negative effect on yield are poor weather (heat, drought, cold, etc.), soil lacking nutrients, insect damage or pathogens. The degree of each stress, and how resistant a particular hybrid is to the stresses encountered, will determine how much the yield is impacted. In addition, certain stresses, when faced at the same time, can have a stronger impact than the combined individual stresses. A strong understanding of how a hybrid reacts when facing certain stresses (and combined stresses) could be a powerful tool for developing hybrids for regions that are less hospitable for corn, allowing farmers the potential to productively grow corn where currently it is challenging. Furthermore, individual farmers benefit from having access to this type of information because they can better manage risk across their acres.

Objective #1: Using feature engineering on environmental data (daily weather, soil, plant/harvest dates, any other available data), develop metrics representing the amount of stress that corn would face in any particular environment across a growing season. The objective is to individually model heat stress, drought stress, and stress due to the combination of heat and drought. Each stress will depend on the weather at each location, but the impact can also vary depending on soil type and on when the stress occurs throughout the growing season. These stresses are not the only factors affecting yield but, typically, the higher the stress, the lower the typical yield would be. A sub-analysis that can be done at this step is measuring the impact of the interaction of heat stress and drought stress. Can the yield loss due to these stresses be explained by the individual contributions of heat and drought stress, or does the interaction of the two stresses significantly contribute to yield loss?

Objective #2: Using the stress metrics developed in Objective #1, classify hybrids as either tolerant or susceptible to each type of stress using the hybrid’s yield across different environments. One possible way of doing this is by conducting a linear regression of yield against each stress, and classifying hybrids based on the slope of that regression line. You are encouraged to use more complex or non-linear models in order to build a better classifier.

- Each stress does not necessarily need to be represented by a single metric. The analysis will become more complex as more variables are added.
- Objective #1 can be completed using supervised methods. Increased stress should correlate with decreased yield across locations for average yields.
- Objective #2 must be completed with unsupervised methods. No dataset will be provided that classifies any set of hybrids as tolerant or susceptible to any stress, though we will be using internal data to evaluate your classifications.
- An example of a similar analysis can be found in this paper (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5061753/). It only covers drought tolerance and uses supervised methods on a much smaller labeled dataset. Some of the techniques used are not generally applicable to the case presented here, but it does provide some context to understand the problem.

Entries should include:
- Definition and interpretation of stress metrics (heat, drought, combined heat and drought)
- Classifications of stress tolerance (heat, drought, combined heat and drought) for all hybrids
Additionally, following the standards for academic publication, entries should include:
- Quantitative results to justify your modeling and classification techniques
- A clear description of the methodology and theory used
- References or citations as appropriate

The entries will be evaluated based on:
- Novel ideas used to define stress metrics and classify hybrids for stress tolerance
- How well your classifications agree with Syngenta’s internal knowledge of hybrid stress tolerance
- Simplicity and intuitiveness of the solution
- Evaluation of factors included in the decision process
- Clarity in the explanation
- The quality and clarity of the finalist’s presentation at the 2019 INFORMS Conference on Business Analytics and Operations Research

You are provided with the following training datasets to create stress models and classify hybrids.
- Performance Dataset: This dataset contains the observed yields from the tests (trials) of hybrids. Each row represents one observation for one hybrid at a given location and year. Performance data for 2452 hybrids in 1560 locations is provided from 2008 to 2017. In addition, plant date, harvest date, and irrigation status are included for each observation, along with information about the location such as average yield and soil properties (sourced from ISRIC). The performance dataset needs to be aligned with the weather dataset by ENV_ID, a unique identifier combining latitude, longitude and year. (performance_data.csv)
- Weather Dataset: This dataset (sourced from Daymet) contains the recorded weather for each environment in which any hybrids were tested. Across the growing region, differences in weather conditions and soil types will cause variation in a hybrid’s observed performance, as well as a difference in the observed average yield of all hybrids tested in a location. Weather data is included in daily increments, labeled by the day number within the year (e.g. January 1 is day 1, December 31 is day 365 in non-leap years). This dataset needs to be aligned with the performance dataset by ENV_ID. (weather_data.csv)
- Key for Datasets: The table below provides the meaning of each variable in the two datasets.
| Dataset | Variable | Description |
| --- | --- | --- |
| Performance Dataset | HYBRID_ID | ID for each hybrid in dataset |
| | ENV_ID | ID for each environment in dataset |
| | HYBRID_MG | Maturity group of hybrid – a higher number indicates a longer growing season needed to reach maturity |
| | ENV_MG | Typical maturity group of environment – a higher number indicates a longer growing season with more growing degree days; this can vary due to weather in any given year |
| | YIELD | Yield of hybrid in environment |
| | PLANT_DATE | Plant date for this observation |
| | HARVEST_DATE | Harvest date for this observation |
| | | Whether field was irrigated: NULL – unknown irrigation; NONE or DRY – no irrigation; ECO – very light irrigation; LIRR – light irrigation; IRR – normal irrigation |
| | ENV_YIELD_MEAN | Mean yield for ENV_ID |
| | ENV_YIELD_STD | Standard deviation of yield for ENV_ID |
| | ELEVATION | Elevation of field |
| | CLAY | % of clay in soil |
| | SILT | % of silt in soil |
| | SAND | % of sand in soil |
| | AWC | Available water capacity in soil |
| | PH | pH of soil |
| | OM | Organic matter in soil |
| | CEC | Cation exchange capacity of soil |
| | KSAT | Saturated hydraulic conductivity of soil |
| Weather Dataset | ENV_ID | ID for each environment in dataset |
| | DAY_NUM | Day number within year of weather variables |
| | SWE | Snow water equivalent |

JAN 18, 2019 – Deadline for Submissions
APRIL 14-16, 2019 – Finalist Presentations and Winner Announcement

Two Q&A webinars will be available, October 11th and December 6th, that all participants may attend. Archives will be available to view here, and the distilled results will be added to the FAQ.

- Adee, E., Roozeboom, K., Balboa, G. R., Schlegel, A., & Ciampitti, I. A. (2016). Drought-Tolerant Corn Hybrids Yield More in Drought-Stressed Environments with No Penalty in Non-stressed Environments. Frontiers in Plant Science, 7, 1534. http://doi.org/10.3389/fpls.2016.01534
- Hengl T, Mendes de Jesus J, Heuvelink GBM, Ruiperez Gonzalez M, Kilibarda M, Blagotić A, et al. (2017). SoilGrids250m: Global gridded soil information based on machine learning. PLoS ONE 12(2): e0169748. https://doi.org/10.1371/journal.pone.0169748
- Thornton, P.E., M.M. Thornton, B.W. Mayer, N. Wilhelmi, Y. Wei, R. Devarakonda, and R.B. Cook. (2014). Daymet: Daily Surface Weather Data on a 1-km Grid for North America, Version 2. ORNL DAAC, Oak Ridge, Tennessee, USA. http://dx.doi.org/10.3334/ORNLDAAC/1219
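As a hedged starting-point sketch for the two objectives, the snippet below builds one crude heat-stress metric and classifies hybrids by regression slope. The column names YIELD, HYBRID_ID, ENV_ID and DAY_NUM come from the key above; TMAX (daily maximum temperature) is assumed to be among the weather variables, and the 30 C threshold and the median cut are arbitrary illustrative choices, not challenge requirements.

```python
import numpy as np
import pandas as pd

perf = pd.read_csv("performance_data.csv")
weather = pd.read_csv("weather_data.csv")

# Objective #1 (toy metric): cumulative degree-days above 30 C per environment.
# TMAX is an assumed column name for daily maximum temperature.
weather["heat_excess"] = (weather["TMAX"] - 30.0).clip(lower=0.0)
heat = weather.groupby("ENV_ID")["heat_excess"].sum().rename("HEAT_STRESS")
perf = perf.merge(heat, on="ENV_ID")

# Objective #2 (toy classifier): slope of yield vs. stress for each hybrid.
def yield_slope(g):
    if g["HEAT_STRESS"].nunique() < 2:
        return np.nan                      # not enough spread to fit a line
    slope, _ = np.polyfit(g["HEAT_STRESS"], g["YIELD"], 1)
    return slope

slopes = perf.groupby("HYBRID_ID").apply(yield_slope)
# Hybrids losing the least yield per unit of stress are labeled tolerant;
# the median cut is arbitrary and purely illustrative.
tolerant = slopes >= slopes.median()
print(tolerant.value_counts())
```

A real entry would refine this in the directions the brief suggests: weighting stress by growth stage (via PLANT_DATE and DAY_NUM), conditioning on soil water capacity (AWC), and replacing the single-variable linear fit with a richer model.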
A chemical equation is a way of representing a chemical reaction using symbols. The reactants are represented on the left side of the equation and the products on the right side. The coefficients in front of the symbols represent the number of molecules or atoms taking part in the reaction. In a balanced chemical equation, the number of atoms of each element must be equal on both sides of the arrow. The Gizmo can be used to balance equations by changing the coefficients in front of the symbols until there are equal numbers of each type of atom on both sides.

One of the most important things that students need to learn in Chemistry is how to balance chemical equations. The Balancing Chemical Equations Gizmo is a great tool to help students with this concept. The Gizmo allows students to enter a chemical equation and then see the balanced equation. It also shows the student what they did wrong if they didn’t get the answer correct. This is a great way for students to learn how to balance equations, and it’s also a lot of fun!

How are Chemical Equations Balanced with the Gizmo?

When a chemical equation is balanced, there is the same number of atoms of each element on both sides of the equation. In other words, the reactants (the substances being reacted) and the products (the substances being produced) have the same number of atoms of each element. The Gizmo for balancing chemical equations can be found here: To balance an equation using this Gizmo, first select the “Balancing” tab. Then enter the unbalanced equation into the box on the left side of the screen. The Gizmo will then automatically balance the equation for you.

What is a Chemical Equation? Question Answer

In a chemical equation, the reactants are denoted on the left side of an arrow, while the products appear on the right. The Law of Conservation of Mass dictates that matter can neither be created nor destroyed in a chemical reaction, meaning that the quantities of reactants and products must remain equal. In order for this to happen, each side of the equation must have an equivalent number of atoms of each element. This can be achieved by placing coefficients in front of each compound’s formula.

Is There a Trick to Balancing Chemical Equations?

In order to balance a chemical equation, you need to have the same number of atoms of each element on both sides of the equation. In other words, the reactants (left side) must equal the products (right side). This can be accomplished by changing the coefficients (the numbers in front of the formulas) so that they correspond with one another. There are a few rules that you can follow when trying to balance a chemical equation:
- Make sure that you don’t change any of the subscripts (the numbers after the letters in a formula). Subscripts tell you how many atoms of an element are in that molecule.
- Start by balancing the element with the smallest number of atoms first. This will make it easier to see what is going on and avoid making mistakes.
- Add electrons to one side or take them away from the other until both sides have an equal charge. This is only necessary for equations involving ions.
- Halogens almost always appear on the right side of equations because they tend to gain electrons easily, while metals usually appear on the left side because they lose electrons easily.
- If you’re stuck, try looking at similar equations that are already balanced and see if you can figure out what changes need to be made to yours.

What are the 4 Rules of Balancing Chemical Equations?
If you want to be a whiz at balancing chemical equations, there are four key rules you need to know. By following these simple steps, you’ll be on your way to success in no time!

1. Make sure the number of atoms on each side of the equation is equal. This means that if there are 3 oxygen atoms on one side, there must also be 3 oxygen atoms on the other side. The same goes for all of the other elements in the equation.

2. The total charge on each side of the equation must also be equal. This means that if there are 2 negative charges on one side, there must be 2 negative charges on the other side as well. Again, this rule applies to all charges in the equation.

3. Elements cannot switch places with one another within a given side of the equation; a species can only move from the left side to the right side or vice versa. For example, if carbon appears on the left-hand side of the arrow, it cannot simply trade places with another element on that side.

Balancing Chemical Equations Gizmo Assessment Answers

If you’re a student who is struggling with balancing chemical equations, the Balancing Chemical Equations Gizmo assessment can help. This assessment provides detailed information about your strengths and weaknesses in this area, and gives you specific feedback on how to improve. The questions on the assessment are not difficult, but they will require some thought and careful work. However, if you take your time and carefully read the instructions, you should be able to get through them without too much trouble. And once you’ve completed the assessment, you’ll have a much better understanding of what you need to work on in order to improve your skills in this area.

This blog post is all about the Balancing Chemical Equations Gizmo Answer Key. Perez walks the reader through how to use the answer key and provides helpful tips along the way.
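Since "equal numbers of each type of atom" can feel abstract, here is a minimal Python sketch (unrelated to the Gizmo itself) that brute-forces balancing coefficients for the textbook reaction CH4 + O2 → CO2 + H2O. The element-count dictionaries and the search bound of 6 are illustrative choices:

```python
from itertools import product

# Element counts for each species in CH4 + O2 -> CO2 + H2O
reactants = [{"C": 1, "H": 4}, {"O": 2}]           # CH4, O2
products  = [{"C": 1, "O": 2}, {"H": 2, "O": 1}]   # CO2, H2O

def balanced(coeffs, reactants, products):
    """True if every element has equal atom counts on both sides."""
    elements = {e for species in reactants + products for e in species}
    n = len(reactants)
    return all(
        sum(c * s.get(e, 0) for c, s in zip(coeffs[:n], reactants))
        == sum(c * s.get(e, 0) for c, s in zip(coeffs[n:], products))
        for e in elements
    )

# Try small coefficient combinations; fine for classroom-sized equations.
for coeffs in product(range(1, 7), repeat=len(reactants) + len(products)):
    if balanced(coeffs, reactants, products):
        print(coeffs)  # -> (1, 2, 1, 2), i.e. CH4 + 2 O2 -> CO2 + 2 H2O
        break
```

Note that the search never touches the subscripts inside each formula, mirroring the rule above: only coefficients change when balancing.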
In 2017, almost five decades after the last manned mission to the moon as part of Apollo 17, NASA announced the launch of its Artemis program. The goal of this program was multi-pronged. In the short term, it aimed to reinvigorate the U.S. space program by resuming manned missions to the moon. In the long run, the Artemis program would enable further exploration of the moon for scientific purposes, thereby forming the basis for establishing a long-term, sustainable human presence on the lunar surface.

Apart from helping NASA prepare for manned missions to Mars in the distant future, the Artemis program would also involve multiple scientific missions to the moon. The goal of these missions would be to unravel mysteries of the lunar surface that continue to trouble scientists to this day. One such mysterious aspect of the moon involves the Gruithuisen Domes. Named after Franz von Gruithuisen, a Bavarian (present-day German) scientist from the 1800s who believed that the moon was habitable, these lunar features have remained a mystery ever since they were first discovered two centuries ago. The mystery behind the formation of these domes has been so perplexing that NASA just announced that it would send a probe to the moon to study them exclusively. Let’s now try to understand why this mission is of such importance, and how NASA intends to use its probe to unravel a long-hidden mystery.

What’s so mysterious about the Gruithuisen Domes?

NASA calls the Gruithuisen Domes a "geologic enigma," and for good reason. Previous studies have shown that the composition of these domes — which are made of silica-rich magma — differs completely from the material that makes up the rest of the surrounding terrain, which is based on basaltic magma. The critical difference between these two types of material is that silica-based magma is viscous and does not travel fast, while basaltic lava is thin and runny. It is the viscous nature of the silicic lava that prevented it from flowing away and allowed it to pile up into these domes on the moon’s surface in the first place. However, what makes the presence of silicic magma on the moon perplexing is the fact that this type of magma typically requires both water and plate tectonics to form. Neither water nor any manner of plate tectonics is known to exist on the moon at this time. So, the big question that has long troubled lunar geologists is how these features formed on the lunar surface without those essential ingredients.

What is NASA doing to solve this mystery?

The Artemis mission also involves sending unmanned probes and rovers to study and collect samples from the lunar surface. To solve the long-standing mystery surrounding the Gruithuisen Domes, the agency has designed a dedicated probe that will be part of a suite of five instruments. NASA calls this study the Lunar Vulkan Imaging and Spectroscopy Explorer (Lunar-VISE) investigation. NASA’s current plan involves getting these instruments onto the lunar surface by 2025. The probes dedicated to studying the Gruithuisen Domes will be mounted aboard a mobile rover that will climb to the summit of one of the domes and collect samples from there. NASA expects this process to take a total of ten Earth days. After this, the data collected from the samples will be sent back to Earth for further study.
If all of this goes according to plan, scientists at NASA will be able to conduct the most comprehensive study of the Gruithuisen Domes since they were first discovered nearly two centuries ago, and humanity will be one step closer to unraveling the mysteries within.
Grade Level: 10 (9-12)
Time Required: 45 minutes
Expendable Cost/Group: US $2.00
This activity also uses some non-expendable items such as lab equipment and marbles; see the Materials List.
Group Size: 3
Activity Dependency: None
Subject Areas: Chemistry, Earth and Space, Physical Science

Summary
Students learn about the underlying factors that can contribute to Plinian eruptions (which eject large amounts of pumice, gas and volcanic ash, and can result in significant death and destruction in the surrounding environment), versus more gentle, effusive eruptions. Students explore two concepts related to the explosiveness of volcanic eruptions, viscosity and the rate of degassing, by modeling the concepts with the use of simple materials. They experiment with three fluids of varying viscosities, and explore the concept of degassing as it relates to eruptions through experimentation with carbonated beverage cans. Finally, students reflect on how the scientific concepts covered in the activity connect to useful engineering applications, such as community evacuation planning and implementation, and mapping of safe living zones near volcanoes. A PowerPoint® presentation and student worksheet are provided.

Volcanologists are geologists who study volcanic processes and eruptions. Though volcanologists focus on the fluid dynamics, geology, earth processes and other related concepts around volcanoes, their findings can provide valuable insights for engineering innovations, such as in the field of geochemical engineering or for technologies that involve fluids. Understanding volcanic eruptions can help people in nearby communities stay safe in the event of an eruption and help avoid triggering eruptions, as some events involving gas drilling have been linked to setting off volcanic activity. Engineers must have a thorough understanding of volcanoes to pursue advances in hydrocarbon recovery (such as gas lift techniques in porous reservoirs) or the use of magma for geothermal applications (exploiting the high temperatures of magma bodies to provide heat to geothermal systems for heat and energy).

After this activity, students should be able to:
- Define and describe viscosity and explain how it relates to the amount of pressure exerted on bubbles rising through magma.
- Describe how the viscosity of magma relates to the rise speed of bubbles, as well as the ability of a bubble to expand in magma.
- Describe “low” and “high” viscosity fluids.
- Explain how magma viscosity relates to the explosiveness of volcanic eruptions.
- Explain how the rate of pressure change or degassing time can affect how effusive or explosive eruptions are.

Each TeachEngineering lesson or activity is correlated to one or more K-12 science, technology, engineering or math (STEM) educational standards. All 100,000+ K-12 STEM standards covered in TeachEngineering are collected, maintained and packaged by the Achievement Standards Network (ASN), a project of D2L (www.achievementstandards.org). In the ASN, standards are hierarchically structured: first by source; e.g., by state; within source by type; e.g., science or mathematics; within type by subtype, then by grade, etc.
Scientific inquiry is characterized by a common set of values that include: logical thinking, precision, open-mindedness, objectivity, skepticism, replicability of results, and honest and ethical reporting of findings. (Grades 9 - 12)

analyze physical and chemical properties of elements and compounds such as color, density, viscosity, buoyancy, boiling point, freezing point, conductivity, and reactivity;

describe how the macroscopic properties of a thermodynamic system such as temperature, specific heat, and pressure are related to the molecular level of matter, including kinetic or potential energy of atoms;

Each group needs:
- 2 filled and sealed carbonated beverage cans, such as cola or lemon-lime soda
- (optional) 1 timer, to measure 30 seconds or less; alternatively, students can use the second hand on the classroom clock
- 3 identical 150-ml glass beakers; note that other-sized beakers, columns, graduated cylinders, small rectangular prisms, test tubes, etc., will also work—just make sure the containers are identical (for each group), each hold 100 ml of fluid and are large enough that students can drop a marble into each
- 3 marbles or small objects that can be dropped into the aforementioned containers; as an optional extension, make available additional small objects of differing size, mass or density to provide for additional experimentation
- 100 ml each of 3 fluids with varying viscosity; prepare the solutions in larger beakers (such as 250 ml or larger) and pour them into the 150-ml beakers; for example, consider using water and corn syrup combined in varying proportions by volume, such as:
  - 100 ml corn syrup
  - 50 ml corn syrup and 50 ml water
  - 120 ml corn syrup boiled down to 100 ml to create more viscous corn syrup
- 3 plastic drinking straws, such as a 100-pack for $7 from Ikea/Amazon
- 3 glass stirring rods
- Viscosity and Pressure in Volcanic Eruptions Worksheet, one per student

To share with the entire class:
- (optional) projector to show the Volcano Presentation, a PowerPoint® file

Worksheets and Attachments
Visit [ ] to print or download.

More Curriculum Like This
Students learn about the causes, composition and types of volcanoes. They begin with an overview of the Earth's interior and how volcanoes form. Once students know how volcanoes function, they learn how engineers predict eruptions.

Students observe an in-classroom visual representation of a volcanic eruption. During the activity, students observe, measure and sketch the volcano, seeing how its behavior provides engineers with indicators used to predict an eruption.

While learning about volcanoes, magma and lava flows, students learn about the properties of liquid movement, coming to understand viscosity and other factors that increase and decrease liquid flow. They also learn about lava composition and its risk to human settlements.

Students are introduced to natural disasters and learn the difference between natural hazards and natural disasters.
Students should have a basic understanding of volcanoes and volcanic eruptions and be familiar with fluid behaviors in situations of flow and static equilibrium, understanding that these behaviors may vary based on temperature and fluid composition. Students should also be familiar with the concept of gas exerting pressure inside a closed container.

Today we are going to talk about something that we all know about, but that is still a great mystery to us in many ways. What do you know about volcanoes? (Listen to student answers. Expect students to answer with a general description of a volcano as a large landmass that is prone to eruptions.) Great! I heard you mention lava, magma, eruptions and heat. Today, we are going to delve into some characteristics that make each volcano unique by looking at certain features that cause them to behave differently.

Volcanoes have long fascinated humans with their immense—sometimes incredibly destructive—power and impact. Would any of you like to live near a volcano? (Expect students to respond by mentioning the dangers of being close to an active volcano, including the potential for death and destruction caused by eruptions.) So, we know the basics of volcanoes, but what makes them dangerous and why wouldn’t you want to live near one? (Heat, fire, hot magma, etc.) Commonly, people associate volcanoes with eruptions of a violent nature, but many volcanoes do not pose a catastrophic risk to their surrounding environments because they do not erupt violently.

Now let’s take a closer look at what makes a volcano dangerous. (Show or draw a basic diagram of a volcano, such as Figure 1, with a conduit that connects a magma chamber located below the Earth’s surface to a volcano’s crater above the surface.) During some eruptions, lava, as well as volcanic ash, gas and rock fragments, are sent miles into the air and surrounding areas, posing a large environmental hazard. Volcanoes are dangerous because of the extreme temperatures of the molten (melted) rock involved in eruptions. When the hot molten rock remains underneath the Earth’s surface, it is called magma; when the rock reaches the Earth’s surface, it is called lava. Magma is a combination of molten rock and dissolved gases (mainly water vapor) that collects in a magma chamber below the Earth’s surface. Complex processes happen inside magma, and that is our focus today.

First, it is important to note that pressure is incredibly high under the surface of volcanoes. We measure pressure in units of Pascals, just like we measure length in meters. Because pressure is so high underneath volcanoes (in the range of 100 megapascals, roughly 1,000 times the pressure at sea level), volatiles are dissolved in magma while it remains at great depths below the Earth’s surface. However, as the magma rises towards the Earth’s surface, bubbles start to form and escape. At that point, a volcano can erupt in one of two main ways: an effusive eruption or an explosive eruption. Once magma reaches the Earth’s surface, lava can leave in river-like flowing streams, which we call effusive eruptions (show a photograph or draw on the classroom board: https://volcanoes.usgs.gov/vsc/glossary/effusive_eruption.html) or in violent bursts, which we call explosive eruptions (show a photograph or sketch on the board: https://volcanoes.usgs.gov/vsc/glossary/explosive_eruption.html; images also on slide 5). Explosive eruptions pose huge dangers to nearby communities.
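A quick arithmetic check of that comparison (using standard sea-level atmospheric pressure of about 101 kPa, a reference value not stated in the activity):

$$\frac{100\ \text{MPa}}{101\ \text{kPa}} = \frac{1.0\times10^{8}\ \text{Pa}}{1.01\times10^{5}\ \text{Pa}} \approx 990,$$

so the pressure beneath a volcano is on the order of a thousand times the pressure pressing on us at sea level.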
Let’s consider the example of the 1991 eruption of Mount Pinatubo in the Philippines. After a series of initial steam explosions and earthquakes, Mount Pinatubo exploded, releasing fiery hot ash and gas that swept down 30+ miles of valleys, devastating towns and cities along the way—even reaching Clark Air Force Base, home to more than 15,000 American servicemen and dependents. In order to understand how an explosive volcanic eruption works, it is important to know about the bubbles that come out of magma as pressure decreases when magma ascends to the Earth’s surface. An eruption’s explosiveness varies, depending on how thick the magma is. It is intuitive that thick fluids flow at a slower rate than thinner ones, but let’s explore the scientific term, viscosity, as a means of describing fluid thickness. Eruption intensity can also vary depending on how much pressure is inside the bubbles. We must understand these two concepts to understand violent volcanic eruptions and ultimately what we can do to engineer solutions to the inevitable hazards they pose.

Although the eruption of Mount Pinatubo resulted in many casualties and much damage, that outcome is considered a “success” on the part of the U.S. Geological Survey (USGS) and the Philippine Institute of Volcanology and Seismology (PHIVOLCS). Upon recognition of Pinatubo’s activity, PHIVOLCS and USGS scientists initiated onsite monitoring for a few weeks. They conducted intensive studies of the volcano’s eruptive history, and their analyses indicated that a large eruption was approaching. The team immediately issued urgent warnings that prompted the mass evacuation of people, aircraft and equipment to safe zones prior to Mount Pinatubo’s explosion. An estimated 5,000-20,000 lives were saved and at least $200 million in damages were averted due to the successful early warning and evacuation efforts. A thorough understanding of volcanoes made this feat possible in the form of engineered tools that monitored and predicted the volcanic eruption. Today we will learn about the fundamentals of what makes a violent eruption so catastrophic.

For many, the term “volcanic eruption” conjures up images of widespread disaster and destruction of entire towns and communities. In truth, eruptions come in a wide variety of forms, and the explosive and damaging class is just one of a wide range of types. For example, many volcanoes in Hawaii are characterized by free-flowing magma, but rarely are as destructive as violent eruptions that spew large amounts of rock and ash. Mount St. Helens, one of the most well-known volcanoes in American history, is characterized by eruptions on the opposite end of the spectrum. The 1980 Mount St. Helens pyroclastic eruption killed 57 people and remains the most destructive in U.S. history.

In this activity, students explore two of the main concepts behind explosive eruptions (as opposed to the less-damaging, effusive eruptions): viscosity and the degassing rate. The first part of the activity is a simplified procedure based on a lab assay that is regularly performed by researchers who study fluid mechanics.

Before the Activity
- Gather materials and make copies of the Viscosity and Pressure in Volcanic Eruptions Worksheet.
- For each group, prepare and measure out 100 ml of each of the three fluids of varying viscosity into the 150-ml beakers labeled “A,” “B” and “C.”
- Make arrangements for use of an outside location to conduct Part 1, or else an indoor location that is easy to clean up after exploded soda cans.
- If available, set up a laptop and projector to show the 10-slide Volcano Presentation, a PowerPoint® file.

With the Students—Introduction
- Conduct the pre-activity assessment class discussion as described in the Assessment section.
- Present the Introduction/Motivation content, showing the presentation as you introduce the activity.
- Divide the class into groups of three students each.
- Hand out the worksheet.

With the Students—Part 1: Pressure Relief / Degassing
- To each group, hand out two filled and sealed carbonated beverage cans and a timer.
- Explain that when magma is “agitated” (that is, rises to the Earth’s surface), bubbles can form, which can lead to violent eruptions.
- Tell students that for this part of the activity, the cans represent volcanoes. Explain that they will agitate their “volcanoes.” Then they will open each can differently to model explosive versus effusive eruptions. Remind them to pay close attention to what happens so they can later describe and reflect on their observations. (Note: For the “eruption,” make sure that students are outside or open the cans on a surface that is easily cleaned.)
- Direct the students to agitate both of their soda cans for 10 seconds.
- For the “explosive” eruption can, completely open the can tab in less than one second.
- For the “effusive” eruption can, take 30 seconds to completely open the tab.
- Have students discuss within their teams the differences between each “eruption.” What was different about each situation? If students need help, encourage them to think about decompression rate—the time allowed for the cans to release gas and pressure.
- Direct students to fill out Part 1 of their worksheets.

With the Students—Part 2: Viscosity
- Explain the concept of viscosity and that higher viscosity magma usually lends itself to more explosive eruptions.
- Viscosity is the “thickness” of a fluid; the internal friction a substance has as it moves. The more viscous a fluid, the slower it is to flow.
- The more viscous magma is, the harder it is to move it. In other words, more force is required to shift thicker magma. When magma is flowing up through a conduit on its way to the Earth’s surface, intense pressures build up, and the only relief from that pressure is at the exit of the volcanic crater. The more viscous a volcano’s magma, the more force is required to expel the magma (and the more pressure that builds up). More viscous magma therefore goes along with a general trend towards more violent and “forceful” eruptions.
- Pass out the marbles and beakers A, B and C, one of each per group.
- Inform students that they have 10 minutes to rank on their worksheets the fluids in order of increasing viscosity. Explain that they need to back up their claims with data they gather based on the time it takes each marble to drop to the bottom of each beaker (see Figure 2).
- Direct students to use the straws and stirring rods to explore (and then explain) why more viscous magma might lend itself to more explosive eruptions. (Depending on your students, make this step as open-ended or guided as desired.)
- Straws: Think about the bubbles in magma. When magma rises towards the Earth’s surface, pressure decreases because less rock and earth are pushing down on the magma. As a result, volatiles once dissolved in the magma emerge as bubbles. (At this point, let students experiment with the straws and the liquids.
Expect students to blow bubbles in beakers A, B and C, and observe the differing amount of force required to blow into each one and how that relates to viscosity. Hopefully, students lead themselves to this observation and realization prompted only by your explanation about the bubbles in magma in a volcano, but as necessary, provide guidance.)
- Stirring rods: Suggest that students use the applied physical force of stirring rods in beakers A, B and C to explore how the force of volcanic explosions depends on the magma viscosity. Expect students to stir the rods in each solution and observe how the force required to stir varies, depending on the viscosity. Encourage students to think about how this relates to effusive eruptions with flowing lava versus explosive eruptions with more viscous lava.
- Direct students to fill out Part 2 of their worksheets, including a detailed explanation of how viscosity and the amount of time a volcano is able to degas relate to how violent its eruption is.
- Have students share their explanations with the class; draw attention to exemplar responses. Ask them the reflection questions provided in the Assessment section.

degassing: The process of freeing gas from a solution. In the case of volcanoes, freeing gas from magma.
lava: Molten rock expelled from a volcano during an eruption. When rising magma from within a volcano reaches the Earth’s surface, it is called lava.
magma: A substance found below the Earth’s surface and inside volcanoes, composed of hot, liquid rock and dissolved gases.
Plinian eruption: The largest and most violent type of volcanic eruption. Usually associated with very viscous magma. Example: the 1980 Mount St. Helens eruption.
pressure: The continuous physical force exerted against an object by something that it is in physical contact with; in the case of erupting volcanoes, the gas in a bubble against magma.
pumice: A lightweight, rough and porous volcanic rock that is typically white or gray in color.
viscosity: A fluid’s resistance to flow or movement; describes the internal friction of a fluid. A bubble encounters more resistance as it rises through honey than through water.
volatiles: Chemical elements and compounds typically with low boiling points, such as nitrogen, water and carbon dioxide. They typically exist in the gaseous phase, but due to pressure underneath the Earth’s surface, they are dissolved in small amounts in magma.
volcanic crater: A typically bowl-shaped depression at the top of a volcano with a vent to below, where magma reaches the Earth’s surface after traveling through a conduit.
volcanic eruption: The sudden occurrence of a discharge of steam and volcanic material through the vent/crater of a volcano.

Discussion of Volcanic Eruptions: Engage students in an open discussion to help them ponder why volcanoes can be dangerous. Encourage brainstorming and sharing of ideas. Ask students:
- What do you think causes volcanic eruptions? Why are they sometimes explosive?
- What are some everyday situations in which bubbles lead to or are involved in explosions? How might that relate to volcanic eruptions?

Activity Embedded Assessment
Worksheet: Have students fill out the corresponding sections of the Viscosity and Pressure in Volcanic Eruptions Worksheet as they work through Parts 1 and 2 of the activity. Review student answers to gauge their depth of understanding.

Lab Observations to Real-World Understanding: Ask students to predict how the carbonated beverage cans will react when agitated and opened.
After they simulate explosive and effusive “eruptions,” have students describe how and why the eruptions differed and relate their observations to volcanoes and the gases that bubble out of rising magma. What exactly was it about how the cans were opened that meant the difference between an effusive and explosive eruption? (Answer: The critical difference was in the time allowed for the carbonation to degas from the can; that time difference [1 second versus 30 seconds] meant the difference between an explosive and effusive eruption. Fast degassing can cause violent volcanic eruptions; for example, an earthquake caused a large landslide on Mount St. Helens in May 1980, which suddenly exposed the volcano to lower pressures and led to an explosive eruption.)

Reflection Questions: Individually or as a class, have students answer the following questions:
- What other concepts do you think affect the explosivity of volcanoes beyond what we have covered? (Possible answers: Magma composition; location of the volcano on the Earth’s surface in relation to fault lines, tectonic plates, etc.; volcanic activity; bubble characteristics in magma.)
- How do you think an understanding of the concepts covered in this activity connects to useful engineering applications? (Possible answers: Designing equipment to gather all sorts of data about volcanoes; providing global data on volcano activity; using available data to analyze and compare similar volcanoes; predicting future volcanic eruptions; city/town evacuation planning and implementation; mapping out safe living zones near volcanoes.)

Fluids Testing! For homework (or in class), have students carry out similar experiments using their own materials. For example:
- Have students use marbles, straws and glass stirring rods to test the viscosities of fluids and substances they find at home, such as juice, molasses, honey, cooking oil, melted butter and milk. Take it further by having students conduct their tests at varying temperatures.
- Have students try the same degassing procedure with carbonated beverages at different temperatures. Prepare for the explosive opening of agitated carbonated beverage cans by conducting Part 1 of the activity outside.
- Change the temperature of fluids A, B and C, and see how their viscosities change.
- Have students conduct the Measuring Viscosity activity so they can quantitatively calculate viscosity (see the Stokes’ law sketch at the end of this activity).
- Have students complete the “Balloon Blow-Up” activity at https://www.exploratorium.edu/science_explorer/balloon_blowup.html
- For lower grades, do just one of the two short activities or minimize the amount of testing students do with the fluids.
- For higher grades, have students conduct the Measuring Viscosity activity so they can calculate viscosity.

“1980 Cataclysmic Eruption.” Last modified August 27, 2015. Mount St. Helens, Volcano Hazards Program, U.S. Geological Survey, U.S. Department of the Interior. Accessed February 2016. http://volcanoes.usgs.gov/volcanoes/st_helens/st_helens_geo_hist_99.html

Ball, Jessica. “Types of Volcanic Eruptions.” Geoscience News and Information, Geology.com. Accessed February 2016. http://geology.com/volcanoes/types-of-volcanic-eruptions/

“Why are some eruptions gentle and others violent?” Volcano World, Oregon State University. Accessed February 2016.
http://volcano.oregonstate.edu/why-are-some-eruptions-gentle-and-others-violent

Copyright © 2016 by Regents of the University of Colorado; original © 2015 Rice University

Contributors
Nathan Truong; Austin Blaser; Thomas Giachetti; Helge Gonnermann

Supporting Program
Nanotechnology RET, Department of Earth Science, School Science and Technology, Rice University

This material was developed based upon work supported by the National Science Foundation under grant no. EEC 1406885—the Nanotechnology Research Experience for Teachers at the Rice University School Science and Technology in Houston, TX. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

Last modified: April 29, 2019
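As a companion to the quantitative “Measuring Viscosity” extension referenced above, here is a minimal Python sketch (not part of the published activity) for turning a marble-drop timing into a viscosity estimate via Stokes’ law. It assumes the marble falls at terminal velocity in a container much wider than the marble, and the densities in the example are illustrative values:

```python
# Estimate dynamic viscosity from a marble-drop timing using Stokes' law:
#   mu = 2 * r^2 * (rho_marble - rho_fluid) * g / (9 * v)
# Valid only for slow falls at low Reynolds number in a wide container.

G = 9.81  # gravitational acceleration, m/s^2

def viscosity_from_drop(radius_m, rho_marble, rho_fluid, fall_dist_m, fall_time_s):
    v = fall_dist_m / fall_time_s  # average fall speed, taken as terminal velocity (m/s)
    return 2 * radius_m**2 * (rho_marble - rho_fluid) * G / (9 * v)  # Pa*s

# Example (illustrative numbers): an 8-mm-radius glass marble (~2500 kg/m^3)
# falling 5 cm in 2.0 s through corn syrup (~1400 kg/m^3):
print(viscosity_from_drop(0.008, 2500, 1400, 0.05, 2.0))  # ~6 Pa*s
```

Students can compare the computed values against their qualitative ranking from Part 2; a fluid that the marble crosses twice as slowly should come out roughly twice as viscous.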
Arithmetic Logic Unit (ALU)

Inside a computer, there is an Arithmetic Logic Unit (ALU), which is capable of performing logical operations (e.g., AND, OR, Ex-OR, Invert) in addition to arithmetic operations (e.g., addition, subtraction). The control unit supplies the data required by the ALU from memory or from input devices, and directs the ALU to perform a specific operation based on the instruction fetched from memory. The ALU is the “calculator” portion of the computer.

An arithmetic logic unit (ALU) is a major component of the central processing unit of a computer system. It handles all processes related to arithmetic and logic operations that need to be performed on instruction words. In some microprocessor architectures, the ALU is divided into an arithmetic unit (AU) and a logic unit (LU). The ALU is also known as an integer unit (IU).

An ALU can be designed by engineers to calculate many different operations. As the operations become more complex, the ALU also becomes more expensive, takes up more space in the CPU and dissipates more heat. That is why engineers make the ALU powerful enough to ensure that the CPU is also powerful and fast, but not so complex as to become prohibitive in terms of cost and other disadvantages. This trade-off between power and complexity is also why faster CPUs are more expensive, consume more power and dissipate more heat.

The different operations carried out by an ALU can be categorized as follows −

Logical operations − These include operations like AND, OR, NOT, XOR, NOR, NAND, etc.

Bit-shifting operations − These shift the positions of the bits by a certain number of places either towards the right or left, which corresponds to multiplying or dividing by a power of two.

Arithmetic operations − This refers to bit addition and subtraction. Although multiplication and division are sometimes included, these operations are more expensive to implement; multiplication and division can also be done by repeated additions and subtractions, respectively.
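To make the operation categories above concrete, here is a minimal Python sketch of a toy 4-bit ALU (an illustration only, not any particular processor's design). All results are masked to the word width, which is why the carry out of a 4-bit addition simply disappears; a real ALU would expose it as a status flag:

```python
def alu(op, a, b=0, width=4):
    """Toy ALU: apply one logical, shift or arithmetic op to width-bit words."""
    mask = (1 << width) - 1  # e.g. 0b1111 for a 4-bit ALU
    results = {
        "AND": a & b,
        "OR":  a | b,
        "XOR": a ^ b,
        "NOT": ~a,      # bitwise inversion; the mask keeps it within the word
        "SHL": a << 1,  # shift left  = multiply by 2 (modulo word size)
        "SHR": a >> 1,  # shift right = divide by 2 (integer division)
        "ADD": a + b,   # carry out of the top bit is dropped by the mask
        "SUB": a - b,   # wraps around in two's complement
    }
    return results[op] & mask

print(bin(alu("ADD", 0b1011, 0b0110)))  # 11 + 6 = 17 -> 0b1 once the carry is dropped
print(bin(alu("NOT", 0b1010)))          # -> 0b101
```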
The lipid bilayer (or phospholipid bilayer) is a thin polar membrane made of two layers of lipid molecules. These membranes are flat sheets that form a continuous barrier around all cells. The cell membranes of almost all organisms and many viruses are made of a lipid bilayer, as are the nuclear membrane surrounding the cell nucleus, and membranes of the membrane-bound organelles in the cell. The lipid bilayer is the barrier that keeps ions, proteins and other molecules where they are needed and prevents them from diffusing into areas where they should not be. Lipid bilayers are ideally suited to this role, even though they are only a few nanometers in width, because they are impermeable to most water-soluble (hydrophilic) molecules. Bilayers are particularly impermeable to ions, which allows cells to regulate salt concentrations and pH by transporting ions across their membranes using proteins called ion pumps. Biological bilayers are usually composed of amphiphilic phospholipids that have a hydrophilic phosphate head and a hydrophobic tail consisting of two fatty acid chains. Phospholipids with certain head groups can alter the surface chemistry of a bilayer and can, for example, serve as signals as well as "anchors" for other molecules in the membranes of cells. Just like the heads, the tails of lipids can also affect membrane properties, for instance by determining the phase of the bilayer. The bilayer can adopt a solid gel phase state at lower temperatures but undergo phase transition to a fluid state at higher temperatures, and the chemical properties of the lipids' tails influence at which temperature this happens. The packing of lipids within the bilayer also affects its mechanical properties, including its resistance to stretching and bending. Many of these properties have been studied with the use of artificial "model" bilayers produced in a lab. Vesicles made by model bilayers have also been used clinically to deliver drugs. The structure of biological membranes typically includes several types of molecules in addition to the phospholipids comprising the bilayer. A particularly important example in animal cells is cholesterol, which helps strengthen the bilayer and decrease its permeability. Cholesterol also helps regulate the activity of certain integral membrane proteins. Integral membrane proteins function when incorporated into a lipid bilayer, and they are held tightly to the lipid bilayer with the help of an annular lipid shell. Because bilayers define the boundaries of the cell and its compartments, these membrane proteins are involved in many intra- and inter-cellular signaling processes. Certain kinds of membrane proteins are involved in the process of fusing two bilayers together. This fusion allows the joining of two distinct structures as in the acrosome reaction during fertilization of an egg by a sperm, or the entry of a virus into a cell. Because lipid bilayers are fragile and invisible in a traditional microscope, they are a challenge to study. Experiments on bilayers often require advanced techniques like electron microscopy and atomic force microscopy.

Structure and organization
When phospholipids are exposed to water, they self-assemble into a two-layered sheet with the hydrophobic tails pointing toward the center of the sheet. This arrangement results in two “leaflets” that are each a single molecular layer.
The center of this bilayer contains almost no water and excludes molecules like sugars or salts that dissolve in water. The assembly process is driven by interactions between hydrophobic molecules (also called the hydrophobic effect). An increase in interactions between hydrophobic molecules (causing clustering of hydrophobic regions) allows water molecules to bond more freely with each other, increasing the entropy of the system. This complex process includes non-covalent interactions such as van der Waals forces, electrostatic and hydrogen bonds.

Cross section analysis
The lipid bilayer is very thin compared to its lateral dimensions. If a typical mammalian cell (diameter ~10 micrometers) were magnified to the size of a watermelon (~1 ft/30 cm), the lipid bilayer making up the plasma membrane would be about as thick as a piece of office paper. Despite being only a few nanometers thick, the bilayer is composed of several distinct chemical regions across its cross-section. These regions and their interactions with the surrounding water have been characterized over the past several decades with x-ray reflectometry, neutron scattering, and nuclear magnetic resonance techniques. The first region on either side of the bilayer is the hydrophilic headgroup. This portion of the membrane is completely hydrated and is typically around 0.8-0.9 nm thick. In phospholipid bilayers the phosphate group is located within this hydrated region, approximately 0.5 nm outside the hydrophobic core. In some cases, the hydrated region can extend much further, for instance in lipids with a large protein or long sugar chain grafted to the head. One common example of such a modification in nature is the lipopolysaccharide coat on a bacterial outer membrane, which helps retain a water layer around the bacterium to prevent dehydration. Next to the hydrated region is an intermediate region that is only partially hydrated. This boundary layer is approximately 0.3 nm thick. Within this short distance, the water concentration drops from 2 M on the headgroup side to nearly zero on the tail (core) side. The hydrophobic core of the bilayer is typically 3-4 nm thick, but this value varies with chain length and chemistry. Core thickness also varies significantly with temperature, in particular near a phase transition. In many naturally occurring bilayers, the compositions of the inner and outer membrane leaflets are different. In human red blood cells, the inner (cytoplasmic) leaflet is composed mostly of phosphatidylethanolamine, phosphatidylserine and phosphatidylinositol and its phosphorylated derivatives. By contrast, the outer (extracellular) leaflet is based on phosphatidylcholine, sphingomyelin and a variety of glycolipids. In some cases, this asymmetry is based on where the lipids are made in the cell and reflects their initial orientation. The biological functions of lipid asymmetry are imperfectly understood, although it is clear that it is used in several different situations. For example, when a cell undergoes apoptosis, the phosphatidylserine — normally localised to the cytoplasmic leaflet — is transferred to the outer surface: there, it is recognised by a macrophage that then actively scavenges the dying cell. Lipid asymmetry arises, at least in part, from the fact that most phospholipids are synthesised and initially inserted into the inner monolayer: those that constitute the outer monolayer are then transported from the inner monolayer by a class of enzymes called flippases.
Other lipids, such as sphingomyelin, appear to be synthesised at the external leaflet. Flippases are members of a larger family of lipid transport molecules that also includes floppases, which transfer lipids in the opposite direction, and scramblases, which randomize lipid distribution across lipid bilayers (as in apoptotic cells). In any case, once lipid asymmetry is established, it does not normally dissipate quickly because spontaneous flip-flop of lipids between leaflets is extremely slow. It is possible to mimic this asymmetry in the laboratory in model bilayer systems. Certain types of very small artificial vesicle will automatically make themselves slightly asymmetric, although the mechanism by which this asymmetry is generated is very different from that in cells. By utilizing two different monolayers in Langmuir-Blodgett deposition or a combination of Langmuir-Blodgett and vesicle rupture deposition it is also possible to synthesize an asymmetric planar bilayer. This asymmetry may be lost over time as lipids in supported bilayers can be prone to flip-flop. However, it has been reported that lipid flip-flop is slow compared to that of cholesterol and other smaller molecules. It has been reported that the organization and dynamics of the lipid monolayers in a bilayer are coupled. For example, introduction of obstructions in one monolayer can slow down the lateral diffusion in both monolayers. In addition, phase separation in one monolayer can also induce phase separation in the other monolayer, even when the other monolayer cannot phase separate by itself.

Phases and phase transitions
At a given temperature a lipid bilayer can exist in either a liquid or a gel (solid) phase. All lipids have a characteristic temperature at which they transition (melt) from the gel to liquid phase. In both phases the lipid molecules are prevented from flip-flopping across the bilayer, but in liquid phase bilayers a given lipid will exchange locations with its neighbor millions of times a second. This random walk exchange allows lipid to diffuse and thus wander across the surface of the membrane. Unlike liquid phase bilayers, the lipids in a gel phase bilayer have less mobility. The phase behavior of lipid bilayers is determined largely by the strength of the attractive Van der Waals interactions between adjacent lipid molecules. Longer-tailed lipids have more area over which to interact, increasing the strength of this interaction and, as a consequence, decreasing the lipid mobility. Thus, at a given temperature, a short-tailed lipid will be more fluid than an otherwise identical long-tailed lipid. Transition temperature can also be affected by the degree of unsaturation of the lipid tails. An unsaturated double bond can produce a kink in the alkane chain, disrupting the lipid packing. This disruption creates extra free space within the bilayer that allows additional flexibility in the adjacent chains. An example of this effect can be noted in everyday life as butter, which has a large percentage of saturated fats, is solid at room temperature while vegetable oil, which is mostly unsaturated, is liquid. Most natural membranes are a complex mixture of different lipid molecules. If some of the components are liquid at a given temperature while others are in the gel phase, the two phases can coexist in spatially separated regions, rather like an iceberg floating in the ocean.
This phase separation plays a critical role in biochemical phenomena because membrane components such as proteins can partition into one or the other phase and thus be locally concentrated or activated. One particularly important component of many mixed phase systems is cholesterol, which modulates bilayer permeability, mechanical strength, and biochemical interactions. While lipid tails primarily modulate bilayer phase behavior, it is the headgroup that determines the bilayer surface chemistry. Most natural bilayers are composed primarily of phospholipids, but sphingolipids and sterols such as cholesterol are also important components. Of the phospholipids, the most common headgroup is phosphatidylcholine (PC), accounting for about half the phospholipids in most mammalian cells. PC is a zwitterionic headgroup, as it has a negative charge on the phosphate group and a positive charge on the amine but, because these local charges balance, no net charge. Other headgroups are also present to varying degrees and can include phosphatidylserine (PS), phosphatidylethanolamine (PE) and phosphatidylglycerol (PG). These alternate headgroups often confer specific biological functionality that is highly context-dependent. For instance, PS presence on the extracellular membrane face of erythrocytes is a marker of cell apoptosis, whereas PS in growth plate vesicles is necessary for the nucleation of hydroxyapatite crystals and subsequent bone mineralization. Unlike PC, some of the other headgroups carry a net charge, which can alter the electrostatic interactions of small molecules with the bilayer.

Containment and separation
The primary role of the lipid bilayer in biology is to separate aqueous compartments from their surroundings. Without some form of barrier delineating “self” from “non-self”, it is difficult to even define the concept of an organism or of life. This barrier takes the form of a lipid bilayer in all known life forms except for a few species of archaea that utilize a specially adapted lipid monolayer. It has even been proposed that the very first form of life may have been a simple lipid vesicle with virtually its sole biosynthetic capability being the production of more phospholipids. The partitioning ability of the lipid bilayer is based on the fact that hydrophilic molecules cannot easily cross the hydrophobic bilayer core, as discussed in Transport across the bilayer below. The nucleus, mitochondria and chloroplasts have two lipid bilayers, while other sub-cellular structures are surrounded by a single lipid bilayer (such as the plasma membrane, endoplasmic reticula, Golgi apparatus and lysosomes). Prokaryotes have only one lipid bilayer: the cell membrane (also known as the plasma membrane). Many prokaryotes also have a cell wall, but the cell wall is composed of proteins or long chain carbohydrates, not lipids. In contrast, eukaryotes have a range of organelles including the nucleus, mitochondria, lysosomes and endoplasmic reticulum. All of these sub-cellular compartments are surrounded by one or more lipid bilayers and, together, typically comprise the majority of the bilayer area present in the cell. In liver hepatocytes, for example, the plasma membrane accounts for only two percent of the total bilayer area of the cell, whereas the endoplasmic reticulum contains more than fifty percent and the mitochondria a further thirty percent.
The most familiar form of cellular signaling is likely synaptic transmission, whereby a nerve impulse that has reached the end of one neuron is conveyed to an adjacent neuron via the release of neurotransmitters. This transmission is made possible by the action of synaptic vesicles which are, inside the cell, loaded with the neurotransmitters to be released later. These loaded vesicles fuse with the cell membrane at the pre-synaptic terminal and their contents are released into the space outside the cell. The contents then diffuse across the synapse to the post-synaptic terminal. Lipid bilayers are also involved in signal transduction through their role as the home of integral membrane proteins. This is an extremely broad and important class of biomolecule. It is estimated that up to a third of the human proteome consists of membrane proteins. Some of these proteins are linked to the exterior of the cell membrane. An example of this is the CD59 protein, which identifies cells as “self” and thus inhibits their destruction by the immune system. The HIV virus evades the immune system in part by grafting these proteins from the host membrane onto its own surface. Alternatively, some membrane proteins penetrate all the way through the bilayer and serve to relay individual signal events from the outside to the inside of the cell. The most common class of this type of protein is the G protein-coupled receptor (GPCR). GPCRs are responsible for much of the cell's ability to sense its surroundings and, because of this important role, approximately 40% of all modern drugs are targeted at GPCRs. In addition to protein- and solution-mediated processes, it is also possible for lipid bilayers to participate directly in signaling. A classic example of this is phosphatidylserine-triggered phagocytosis. Normally, phosphatidylserine is asymmetrically distributed in the cell membrane and is present only on the interior side. During programmed cell death a protein called a scramblase equilibrates this distribution, displaying phosphatidylserine on the extracellular bilayer face. The presence of phosphatidylserine then triggers phagocytosis to remove the dead or dying cell. The lipid bilayer is a very difficult structure to study because it is so thin and fragile. In spite of these limitations, dozens of techniques have been developed over the last seventy years to allow investigations of its structure and function. Electrical measurements are a straightforward way to characterize an important function of a bilayer: its ability to segregate and prevent the flow of ions in solution. By applying a voltage across the bilayer and measuring the resulting current, the resistance of the bilayer is determined. This resistance is typically quite high (10⁸ Ω·cm² or more) since the hydrophobic core is impermeable to charged species. The presence of even a few nanometer-scale holes results in a dramatic increase in current. The sensitivity of this system is such that even the activity of single ion channels can be resolved. A lipid bilayer cannot be seen with a traditional microscope because it is too thin, so researchers often use fluorescence microscopy. A sample is excited with one wavelength of light and observed in another, so that only fluorescent molecules with a matching excitation and emission profile will be seen. A natural lipid bilayer is not fluorescent, so at least one fluorescent dye needs to be attached to some of the molecules in the bilayer.
Resolution is usually limited to a few hundred nanometers, which is unfortunately much larger than the thickness of a lipid bilayer. Electron microscopy offers a higher resolution image. In an electron microscope, a beam of focused electrons interacts with the sample rather than a beam of light as in traditional microscopy. In conjunction with rapid freezing techniques, electron microscopy has also been used to study the mechanisms of inter- and intracellular transport, for instance in demonstrating that exocytotic vesicles are the means of chemical release at synapses.

Nuclear magnetic resonance spectroscopy
³¹P-NMR (nuclear magnetic resonance) spectroscopy is widely used for studies of phospholipid bilayers and biological membranes in native conditions. The analysis of ³¹P-NMR spectra of lipids can provide a wide range of information about lipid bilayer packing, phase transitions (gel phase, physiological liquid crystal phase, ripple phases, non-bilayer phases), lipid head group orientation/dynamics, and the elastic properties of pure lipid bilayers, as well as changes in those properties upon binding of proteins and other biomolecules.

Atomic force microscopy
A newer method to study lipid bilayers is atomic force microscopy (AFM). Rather than using a beam of light or particles, a very small sharpened tip scans the surface by making physical contact with the bilayer and moving across it, like a record player needle. AFM is a promising technique because it has the potential to image with nanometer resolution at room temperature and even under water or physiological buffer, conditions necessary for natural bilayer behavior. Utilizing this capability, AFM has been used to examine dynamic bilayer behavior including the formation of transmembrane pores (holes) and phase transitions in supported bilayers. Another advantage is that AFM does not require fluorescent or isotopic labeling of the lipids, since the probe tip interacts mechanically with the bilayer surface. Because of this, the same scan can image both lipids and associated proteins, sometimes even with single-molecule resolution. AFM can also probe the mechanical nature of lipid bilayers.

Dual polarisation interferometry
Lipid bilayers exhibit high levels of birefringence, where the refractive index in the plane of the bilayer differs from that perpendicular to it by as much as 0.1 refractive index units. This has been used to characterise the degree of order and disruption in bilayers using dual polarisation interferometry to understand mechanisms of protein interaction.

Quantum chemical calculations
Lipid bilayers are complicated molecular systems with many degrees of freedom. Thus, atomistic simulation of membranes, and in particular ab initio calculation of their properties, is difficult and computationally expensive. Quantum chemical calculations have recently been performed successfully to estimate dipole and quadrupole moments of lipid membranes.

Transport across the bilayer
Most polar molecules have low solubility in the hydrocarbon core of a lipid bilayer and, as a consequence, have low permeability coefficients across the bilayer. This effect is particularly pronounced for charged species, which have even lower permeability coefficients than neutral polar molecules. Anions typically have a higher rate of diffusion through bilayers than cations. Compared to ions, water molecules actually have a relatively large permeability through the bilayer, as evidenced by osmotic swelling.
When a cell or vesicle with a high interior salt concentration is placed in a solution with a low salt concentration, it will swell and eventually burst. Such a result would not be observed unless water was able to pass through the bilayer with relative ease. The anomalously large permeability of water through bilayers is still not completely understood and continues to be the subject of active debate. Small uncharged apolar molecules diffuse through lipid bilayers many orders of magnitude faster than ions or water. This applies both to fats and to organic solvents like chloroform and ether. Regardless of their polar character, larger molecules diffuse more slowly across lipid bilayers than small molecules.

Ion pumps and channels
Two special classes of protein deal with the ionic gradients found across cellular and sub-cellular membranes in nature: ion channels and ion pumps. Both pumps and channels are integral membrane proteins that pass through the bilayer, but their roles are quite different. Ion pumps are the proteins that build and maintain the chemical gradients by utilizing an external energy source to move ions against the concentration gradient to an area of higher chemical potential. The energy source can be ATP, as is the case for the Na+-K+ ATPase. Alternatively, the energy source can be another chemical gradient already in place, as in the Ca2+/Na+ antiporter. It is through the action of ion pumps that cells are able to regulate pH via the pumping of protons. In contrast to ion pumps, ion channels do not build chemical gradients but rather dissipate them in order to perform work or send a signal. Probably the most familiar and best studied example is the voltage-gated Na+ channel, which allows conduction of an action potential along neurons. All ion channels have some sort of trigger or “gating” mechanism. In the previous example it was electrical bias, but other channels can be activated by binding a molecular agonist or through a conformational change in another nearby protein.

Endocytosis and exocytosis
Some molecules or particles are too large or too hydrophilic to pass through a lipid bilayer. Other molecules could pass through the bilayer but must be transported rapidly in such large numbers that channel-type transport is impractical. In both cases, these types of cargo can be moved across the cell membrane through fusion or budding of vesicles. When a vesicle is produced inside the cell and fuses with the plasma membrane to release its contents into the extracellular space, this process is known as exocytosis. In the reverse process, a region of the cell membrane will dimple inwards and eventually pinch off, enclosing a portion of the extracellular fluid to transport it into the cell. Endocytosis and exocytosis rely on very different molecular machinery to function, but the two processes are intimately linked and could not work without each other. The primary mechanism of this interdependence is the large amount of lipid material involved. In a typical cell, an area of bilayer equivalent to the entire plasma membrane will travel through the endocytosis/exocytosis cycle in about half an hour. If these two processes were not balancing each other, the cell would either balloon outward to an unmanageable size or completely deplete its plasma membrane within a short time.
Ion pumps and channels

Two special classes of protein deal with the ionic gradients found across cellular and sub-cellular membranes in nature: ion channels and ion pumps. Both pumps and channels are integral membrane proteins that pass through the bilayer, but their roles are quite different. Ion pumps are the proteins that build and maintain the chemical gradients by utilizing an external energy source to move ions against the concentration gradient to an area of higher chemical potential. The energy source can be ATP, as is the case for the Na+-K+ ATPase. Alternatively, the energy source can be another chemical gradient already in place, as in the Ca2+/Na+ antiporter. It is through the action of ion pumps that cells are able to regulate pH via the pumping of protons.

In contrast to ion pumps, ion channels do not build chemical gradients but rather dissipate them in order to perform work or send a signal. Probably the most familiar and best studied example is the voltage-gated Na+ channel, which allows conduction of an action potential along neurons. All ion channels have some sort of trigger or “gating” mechanism. In the previous example it was electrical bias, but other channels can be activated by binding a molecular agonist or through a conformational change in another nearby protein.

Endocytosis and exocytosis

Some molecules or particles are too large or too hydrophilic to pass through a lipid bilayer. Other molecules could pass through the bilayer but must be transported rapidly in such large numbers that channel-type transport is impractical. In both cases, these types of cargo can be moved across the cell membrane through fusion or budding of vesicles. When a vesicle is produced inside the cell and fuses with the plasma membrane to release its contents into the extracellular space, this process is known as exocytosis. In the reverse process, a region of the cell membrane will dimple inwards and eventually pinch off, enclosing a portion of the extracellular fluid to transport it into the cell. Endocytosis and exocytosis rely on very different molecular machinery to function, but the two processes are intimately linked and could not work without each other. The primary mechanism of this interdependence is the large amount of lipid material involved. In a typical cell, an area of bilayer equivalent to the entire plasma membrane will travel through the endocytosis/exocytosis cycle in about half an hour. If these two processes were not balancing each other, the cell would either balloon outward to an unmanageable size or completely deplete its plasma membrane within a short time.

Exocytosis in prokaryotes: Membrane vesicular exocytosis, popularly known as membrane vesicle trafficking, a Nobel Prize-winning (2013) process, was traditionally regarded as a prerogative of eukaryotic cells. This view was overturned, however, with the revelation that nanovesicles, popularly known as bacterial outer membrane vesicles, released by gram-negative microbes, translocate bacterial signal molecules to host or target cells to carry out multiple processes in favour of the secreting microbe, e.g. in host cell invasion and in microbe-environment interactions generally.

Electroporation

Electroporation is the rapid increase in bilayer permeability induced by the application of a large artificial electric field across the membrane. Experimentally, electroporation is used to introduce hydrophilic molecules into cells. It is a particularly useful technique for large, highly charged molecules such as DNA, which would never passively diffuse across the hydrophobic bilayer core. Because of this, electroporation is one of the key methods of transfection as well as bacterial transformation. It has even been proposed that electroporation resulting from lightning strikes could be a mechanism of natural horizontal gene transfer. This increase in permeability primarily affects transport of ions and other hydrated species, indicating that the mechanism is the creation of nm-scale water-filled holes in the membrane. Although electroporation and dielectric breakdown both result from application of an electric field, the mechanisms involved are fundamentally different. In dielectric breakdown the barrier material is ionized, creating a conductive pathway; the alteration of the material is thus chemical in nature. In contrast, during electroporation the lipid molecules are not chemically altered but simply shift position, opening up a pore that acts as the conductive pathway through the bilayer as it is filled with water.

Mechanics

Lipid bilayers are large enough structures to have some of the mechanical properties of liquids or solids. The area compression modulus Ka, the bending modulus Kb, and the edge energy Λ can be used to describe them. Solid lipid bilayers also have a shear modulus, but, like any liquid, fluid bilayers have a shear modulus of zero. These mechanical properties affect how the membrane functions. Ka and Kb affect the ability of proteins and small molecules to insert into the bilayer, and bilayer mechanical properties have been shown to alter the function of mechanically activated ion channels. Bilayer mechanical properties also govern what types of stress a cell can withstand without tearing. Although lipid bilayers can easily bend, most cannot stretch more than a few percent before rupturing. As discussed in the Structure and organization section, the hydrophobic attraction of lipid tails in water is the primary force holding lipid bilayers together. Thus, the elastic modulus of the bilayer is primarily determined by how much extra area is exposed to water when the lipid molecules are stretched apart. Given this understanding of the forces involved, it is not surprising that studies have shown that Ka varies strongly with osmotic pressure but only weakly with tail length and unsaturation. Because the forces involved are so small, it is difficult to determine Ka experimentally. Most techniques require sophisticated microscopy and very sensitive measurement equipment.
In contrast to Ka, which is a measure of how much energy is needed to stretch the bilayer, Kb is a measure of how much energy is needed to bend or flex the bilayer. Formally, the bending modulus is defined as the energy required to deform a membrane from its intrinsic curvature to some other curvature. Intrinsic curvature is defined by the ratio of the diameter of the head group to that of the tail group. For two-tailed PC lipids, this ratio is nearly one, so the intrinsic curvature is nearly zero. If a particular lipid deviates too far from zero intrinsic curvature, it will not form a bilayer and will instead form other phases, such as micelles or inverted micelles. Addition of small hydrophilic molecules like sucrose into mixed lipid lamellar liposomes made from galactolipid-rich thylakoid membranes destabilises the bilayer into a micellar phase. Typically, Kb is not measured experimentally but rather is calculated from measurements of Ka and bilayer thickness, since the three parameters are related.

The edge energy, Λ, is a measure of how much energy it takes to expose a bilayer edge to water by tearing the bilayer or creating a hole in it. The origin of this energy is the fact that creating such an interface exposes some of the lipid tails to water, but the exact orientation of these border lipids is unknown. There is some evidence that both hydrophobic (tails straight) and hydrophilic (heads curved around) pores can coexist.
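As noted above, Kb is usually calculated from Ka and the bilayer thickness. A minimal sketch of that calculation, assuming the polymer-brush relation Kb = Ka·t²/24 (one of several relations used in the literature) and typical, assumed values for a PC lipid bilayer:

```python
# Estimating the bending modulus Kb from the area compression modulus Ka
# and the bilayer thickness t. The relation used here, Kb = Ka * t^2 / 24,
# is the polymer-brush model -- an assumption for this sketch, not the only
# relation in use. Input values are typical assumed numbers for a PC bilayer.
def bending_modulus(ka_mN_per_m: float, thickness_nm: float) -> float:
    """Return Kb in joules, given Ka in mN/m and thickness in nm."""
    ka = ka_mN_per_m * 1e-3      # mN/m -> N/m (equivalently J/m^2)
    t = thickness_nm * 1e-9      # nm -> m
    return ka * t**2 / 24

kb = bending_modulus(ka_mN_per_m=240, thickness_nm=2.7)
kT = 4.11e-21                    # thermal energy at ~298 K, in J
print(f"Kb ~ {kb:.1e} J (~{kb / kT:.0f} kT)")
```

With these assumed inputs the estimate comes out around 20 kT, i.e. a few tens of thermal energy units, which is why thermal fluctuations visibly ripple a fluid bilayer but do not tear it.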
Fusion

Fusion is the process by which two lipid bilayers merge, resulting in one connected structure. If this fusion proceeds completely through both leaflets of both bilayers, a water-filled bridge is formed and the solutions contained by the bilayers can mix. Alternatively, if only one leaflet from each bilayer is involved in the fusion process, the bilayers are said to be hemifused. Fusion is involved in many cellular processes, in particular in eukaryotes, since the eukaryotic cell is extensively sub-divided by lipid bilayer membranes. Exocytosis, fertilization of an egg by sperm, and transport of waste products to the lysosome are a few of the many eukaryotic processes that rely on some form of fusion. Even the entry of pathogens can be governed by fusion, as many bilayer-coated viruses have dedicated fusion proteins to gain entry into the host cell.

There are four fundamental steps in the fusion process. First, the involved membranes must aggregate, approaching each other to within several nanometers. Second, the two bilayers must come into very close contact (within a few angstroms). To achieve this close contact, the two surfaces must become at least partially dehydrated, as the bound surface water normally present causes bilayers to strongly repel. The presence of ions, in particular divalent cations like magnesium and calcium, strongly affects this step. One of the critical roles of calcium in the body is regulating membrane fusion. Third, a destabilization must form at one point between the two bilayers, locally distorting their structures. The exact nature of this distortion is not known. One theory is that a highly curved "stalk" must form between the two bilayers. Proponents of this theory believe that it explains why phosphatidylethanolamine, a highly curved lipid, promotes fusion. Finally, in the last step of fusion, this point defect grows and the components of the two bilayers mix and diffuse away from the site of contact.

The situation is further complicated when considering fusion in vivo, since biological fusion is almost always regulated by the action of membrane-associated proteins. The first of these proteins to be studied were the viral fusion proteins, which allow an enveloped virus to insert its genetic material into the host cell (enveloped viruses are those surrounded by a lipid bilayer; some others have only a protein coat). Eukaryotic cells also use fusion proteins, the best-studied of which are the SNAREs. SNARE proteins are used to direct all vesicular intracellular trafficking. Despite years of study, much is still unknown about the function of this protein class. In fact, there is still an active debate regarding whether SNAREs are linked to early docking or participate later in the fusion process by facilitating hemifusion.

In studies of molecular and cellular biology it is often desirable to artificially induce fusion. The addition of polyethylene glycol (PEG) causes fusion without significant aggregation or biochemical disruption. This procedure is now used extensively, for example in fusing B-cells with myeloma cells. The resulting “hybridoma” from this combination expresses a desired antibody, as determined by the B-cell involved, but is immortalized due to the myeloma component. Fusion can also be artificially induced through electroporation, in a process known as electrofusion. It is believed that this phenomenon results from the energetically active edges formed during electroporation, which can act as the local defect points that nucleate stalk growth between two bilayers.

Model systems

Lipid bilayers can be created artificially in the lab to allow researchers to perform experiments that cannot be done with natural bilayers. They can also be used in the field of synthetic biology to define the boundaries of artificial cells. These synthetic systems are called model lipid bilayers. There are many different types of model bilayers, each having experimental advantages and disadvantages. They can be made with either synthetic or natural lipids. Among the most common model systems are:
- Black lipid membranes (BLM)
- Supported lipid bilayers (SLB)
- Tethered bilayer lipid membranes (t-BLM)
- Droplet interface bilayers (DIBs)

Commercial applications

To date, the most successful commercial application of lipid bilayers has been the use of liposomes for drug delivery, especially for cancer treatment. (Note: the term “liposome” is essentially synonymous with “vesicle”, except that “vesicle” is a general term for the structure whereas “liposome” refers only to artificial, not natural, vesicles.) The basic idea of liposomal drug delivery is that the drug is encapsulated in solution inside the liposome and then injected into the patient. These drug-loaded liposomes travel through the system until they bind at the target site and rupture, releasing the drug. In theory, liposomes should make an ideal drug delivery system, since they can isolate nearly any hydrophilic drug, can be grafted with molecules to target specific tissues, and can be relatively non-toxic, since the body possesses biochemical pathways for degrading lipids. The first generation of drug delivery liposomes had a simple lipid composition and suffered from several limitations. Circulation in the bloodstream was extremely limited due to both renal clearing and phagocytosis.
Refinement of the lipid composition to tune fluidity, surface charge density, and surface hydration resulted in vesicles that adsorb fewer proteins from serum and thus are less readily recognized by the immune system. The most significant advance in this area was the grafting of polyethylene glycol (PEG) onto the liposome surface to produce “stealth” vesicles, which circulate over long times without immune or renal clearing. The first stealth liposomes were passively targeted at tumor tissues. Because tumors induce rapid and uncontrolled angiogenesis, they are especially “leaky” and allow liposomes to exit the bloodstream at a much higher rate than normal tissue would. More recently, work has been undertaken to graft antibodies or other molecular markers onto the liposome surface in the hope of actively binding them to a specific cell or tissue type. Some examples of this approach are already in clinical trials.

Another potential application of lipid bilayers is the field of biosensors. Since the lipid bilayer is the barrier between the interior and exterior of the cell, it is also the site of extensive signal transduction. Researchers over the years have tried to harness this potential to develop a bilayer-based device for clinical diagnosis or bioterrorism detection. Progress has been slow in this area and, although a few companies have developed automated lipid-based detection systems, they are still targeted at the research community. These include Biacore (now GE Healthcare Life Sciences), which offers a disposable chip for utilizing lipid bilayers in studies of binding kinetics, and Nanion Inc., which has developed an automated patch clamping system. Other, more exotic applications are also being pursued, such as the use of lipid bilayer membrane pores for DNA sequencing by Oxford Nanolabs. To date, this technology has not proven commercially viable.

A supported lipid bilayer (SLB), as described above, has achieved commercial success as a screening technique to measure the permeability of drugs. This parallel artificial membrane permeability assay (PAMPA) technique measures permeability across specifically formulated lipid cocktails found to be highly correlated with Caco-2 cultures, the gastrointestinal tract, the blood–brain barrier, and skin.

History

By the early twentieth century scientists had come to believe that cells are surrounded by a thin oil-like barrier, but the structural nature of this membrane was not known. Two experiments in 1925 laid the groundwork to fill in this gap. By measuring the capacitance of erythrocyte solutions, Hugo Fricke determined that the cell membrane was 3.3 nm thick. Although the results of this experiment were accurate, Fricke misinterpreted the data to mean that the cell membrane is a single molecular layer. Prof. Dr. Evert Gorter (1881–1954) and F. Grendel of Leiden University approached the problem from a different perspective, spreading the erythrocyte lipids as a monolayer on a Langmuir-Blodgett trough. When they compared the area of the monolayer to the surface area of the cells, they found a ratio of two to one. Later analyses showed several errors and incorrect assumptions in this experiment but, serendipitously, these errors canceled out and from this flawed data Gorter and Grendel drew the correct conclusion: that the cell membrane is a lipid bilayer. This theory was confirmed through the use of electron microscopy in the late 1950s. Although he did not publish the first electron microscopy study of lipid bilayers, J.
David Robertson was the first to assert that the two dark electron-dense bands were the headgroups and associated proteins of two apposed lipid monolayers. In this body of work, Robertson put forward the concept of the “unit membrane.” This was the first time the bilayer structure had been universally assigned to all cell membranes as well as organelle membranes. Around the same time, the development of model membranes confirmed that the lipid bilayer is a stable structure that can exist independent of proteins. By “painting” a solution of lipid in organic solvent across an aperture, Mueller and Rudin were able to create an artificial bilayer and determine that it exhibited lateral fluidity, high electrical resistance, and self-healing in response to puncture, all of which are properties of a natural cell membrane. A few years later, Alec Bangham showed that bilayers, in the form of lipid vesicles, could also be formed simply by exposing a dried lipid sample to water. This was an important advance, since it demonstrated that lipid bilayers form spontaneously via self-assembly and do not require a patterned support structure. In 1977, a totally synthetic bilayer membrane was prepared by Kunitake and Okahata from a single organic compound, didodecyldimethylammonium bromide. This clearly showed that a bilayer membrane can be assembled by the van der Waals interaction.
Points of Concurrency in Triangles

First, recall the special segments of a triangle: a median joins a vertex to the midpoint of the opposite side; an altitude runs from a vertex perpendicular to the line containing the opposite side (it gives the triangle's true height); a perpendicular bisector passes through the midpoint of a side and is perpendicular to that side; and an angle bisector cuts one of the triangle's angles in half.

When three or more lines intersect at one point, they are called concurrent, and the point at which they intersect is called a point of concurrency. Each family of special segments in a triangle is concurrent:
- The intersection of the angle bisectors is called the INCENTER. It is equidistant from the sides.
- The intersection of the altitudes is called the ORTHOCENTER.
- The intersection of the medians is called the CENTROID.
- The intersection of the perpendicular bisectors is called the CIRCUMCENTER. It is equidistant from the vertices.

Memorize these! A mnemonic for the pairings (Medians/Centroid, Altitudes/Orthocenter, Angle Bisectors/Incenter, Perpendicular Bisectors/Circumcenter: MC AO ABI PBCC): "My Cat Ate Our Apples But I Prefer Blue Cheese Crumbles."

Special property of medians (the centroid theorem): the segment from a vertex to the centroid is twice as long as the segment from the centroid to the midpoint of the opposite side.

In triangle ABC, the medians AN, BP, and CM meet at the centroid E.
- Ex. 1: If EM = 3, find EC. (EC = 2 · EM = 6.)
- Ex. 2: If EN = 12, find AN. (AE = 2 · EN = 24, so AN = AE + EN = 36.)
- Ex. 3: If CM = 3x + 6 and CE = x + 12, what is x? (CM = CE + EM with EM = CE/2, so 3x + 6 = (3/2)(x + 12), giving x = 8; a numerical check appears in the sketch below.)
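To make these definitions concrete, here is a small sketch with hypothetical vertex coordinates that computes the centroid directly and the circumcenter from its defining property (equidistance from the vertices), then checks Ex. 3 numerically:

```python
# Points of concurrency from vertex coordinates (hypothetical triangle).
A, B, C = (0.0, 0.0), (6.0, 0.0), (2.0, 6.0)

def centroid(a, b, c):
    # The centroid averages the three vertices.
    return ((a[0] + b[0] + c[0]) / 3, (a[1] + b[1] + c[1]) / 3)

def circumcenter(a, b, c):
    # Solves |P-a|^2 = |P-b|^2 = |P-c|^2: equidistant from the vertices.
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2 * (ax*(by - cy) + bx*(cy - ay) + cx*(ay - by))
    ux = ((ax**2 + ay**2)*(by - cy) + (bx**2 + by**2)*(cy - ay)
          + (cx**2 + cy**2)*(ay - by)) / d
    uy = ((ax**2 + ay**2)*(cx - bx) + (bx**2 + by**2)*(ax - cx)
          + (cx**2 + cy**2)*(bx - ax)) / d
    return (ux, uy)

print("centroid:", centroid(A, B, C))        # (8/3, 2)
print("circumcenter:", circumcenter(A, B, C))  # (3, 7/3)

# Ex. 3 as an equation: CE = (2/3) * CM, so x + 12 = (2/3)(3x + 6) => x = 8.
x = 8
assert (x + 12) == (2/3) * (3*x + 6)
```

The same equidistance idea gives the incenter (equidistant from the sides), so the structure of the code mirrors the definitions in the list above.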
Black holes are among the most mysterious, strange, and fascinating objects in space. A black hole is a place in space where gravity pulls so strongly that even light cannot get out. NASA states that the name "black hole" is a misnomer: a black hole is anything but empty space. Rather, it is a great amount of matter packed into a very small area; think of a star ten times more massive than the Sun squeezed into a sphere approximately the diameter of New York City. The result is a gravitational field so strong that nothing, not even light, can escape.

Black holes were predicted by Einstein’s theory of general relativity in 1916, which showed that when a massive star dies, it leaves behind a small, dense remnant core. The term “black hole” was coined many years later, in 1967, by the American astronomer John Wheeler. If the core’s mass is more than about three times the mass of the Sun, the equations showed, the force of gravity overwhelms all other forces and produces a black hole. Most black holes form from the remnants of a large star that dies in a supernova explosion; smaller stars become dense neutron stars. Black holes are created when a massive star reaches the end of its life and implodes, collapsing in on itself (stellar death).

There are four types of black holes: stellar, intermediate, supermassive, and miniature.

- Stellar: When a star burns through the last of its fuel, the object may collapse, or fall into itself. For smaller stars (those up to about three times the Sun’s mass), the new core will become a neutron star or a white dwarf. But when a larger star collapses, it continues to compress and creates a stellar black hole.
- Intermediate: Intermediate black holes have a mass somewhere between stellar and supermassive black holes, in the range of 10²–10⁵ solar masses: significantly more than stellar black holes but less than the 10⁵–10⁹ solar mass supermassive black holes. Intermediate-mass black holes are thought to form when multiple stellar-mass black holes undergo a series of mergers with one another.
- Supermassive: The largest black holes are called supermassive. Predicted by Einstein’s general theory of relativity, they can have masses equal to billions of Suns. They may somehow result from hundreds or thousands of smaller merged black holes. There is some disagreement regarding the progenitors of these massive black holes; the most obvious hypothesis is that they are the remnants of several different massive stars that exploded, which were formed by the accretion of matter in the galactic center.
- Miniature: Black holes with masses far smaller than their heavyweight relatives. Mini black holes, like the more massive variety, lose mass over time through Hawking radiation and disappear.

Black holes can merge with one another and devour the objects around them in order to grow. In 1974, Stephen Hawking showed that black holes have their own temperature, which is inversely proportional to their mass: the lower a black hole's temperature, the greater its mass. Because of this temperature, they emit a specific type of radiation, called Hawking radiation.
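That inverse temperature-mass relation can be made concrete. A minimal sketch, assuming the standard Hawking temperature formula T = ħc³ / (8πGMk_B) with SI constants:

```python
import math

# Hawking temperature T = hbar * c^3 / (8 * pi * G * M * k_B):
# inversely proportional to mass, as stated above. SI constants.
hbar = 1.055e-34   # reduced Planck constant, J*s
c = 2.998e8        # speed of light, m/s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
kB = 1.381e-23     # Boltzmann constant, J/K

def hawking_temperature(mass_kg: float) -> float:
    return hbar * c**3 / (8 * math.pi * G * mass_kg * kB)

M_sun = 1.989e30   # kg
print(f"1 solar mass:    T ~ {hawking_temperature(M_sun):.1e} K")
print(f"10 solar masses: T ~ {hawking_temperature(10 * M_sun):.1e} K")
```

Doubling the mass halves the temperature, which is why the miniature black holes mentioned above can evaporate away through Hawking radiation while supermassive ones are extremely cold.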
The holy Quran states in Surah Waqiah, verses 75-77: “But nay! I swear by the setting (falling or fading) of stars.”

Like most large galaxies, the Milky Way is glued together by a supermassive black hole at its center, buried deep in the constellation Sagittarius. Our galaxy’s supermassive black hole, called Sagittarius A* (or Sgr A*), constantly pulls stars, dust, and other matter inward, forming a stellar megalopolis 1 billion times denser than our corner of the galaxy.

The event horizon of a black hole is the point of no return. Anything that passes this point will be swallowed by the black hole and forever vanish from our known universe. Avi Loeb, chair of astronomy at Harvard University, states that when an item gets near an event horizon, a witness would see the item’s image redden and dim as gravity distorts the light coming from that item. Within the event horizon, one would find the black hole’s singularity, where previous research suggests all of the object’s mass has collapsed to an infinitely dense point. This means the fabric of space and time around the singularity has also curved to an infinite degree, so the laws of physics as we currently know them break down. The event horizon protects us from the unknown physics near a singularity.

The size of an event horizon depends on the black hole’s mass. If Earth were compressed until it became a black hole, it would have a diameter of about 0.69 inches (17.4 millimeters): imagine the Earth squeezed to roughly the size of a golf ball, with that golf ball having the density, mass, and gravitational pull of the entire Earth. If the Sun were converted to a black hole, it would be about 3.62 miles (5.84 kilometers) wide, about the size of a village or town.

For a long time, many accepted black holes only as theory, but on 14 September 2015, LIGO scientists observed, for the first time, ripples in the fabric of spacetime called gravitational waves, arriving at the Earth from a cataclysmic event in the distant universe. This confirmed a major prediction of Albert Einstein’s 1915 general theory of relativity and opened an unprecedented new window onto the cosmos. Gravitational waves carry information about their dramatic origins and about the nature of gravity that cannot otherwise be obtained. Physicists concluded that the detected gravitational waves were produced during the final fraction of a second of the merger of two black holes to produce a single, more massive spinning black hole. This collision of two black holes had been predicted but never observed.

Black holes are regions in spacetime where gravity’s pull is so powerful that not even light can escape its grasp. However, while light cannot escape a black hole, its extreme gravity warps space around it, which allows light to “echo,” bending around the back of the object. Thanks to this strange phenomenon, astronomers have, for the first time, observed the light from behind a black hole. In a new study, researchers led by Dan Wilkins, an astrophysicist at Stanford University in California, used the European Space Agency’s XMM-Newton and NASA’s NuSTAR space telescopes to observe the light from behind a black hole that is 10 million times more massive than our Sun and lies 800 million light-years away in a spiral galaxy. Bright flares of X-ray light are emitted by gas that falls into black holes from their accretion disks.
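The event-horizon sizes quoted above follow from the Schwarzschild radius, r_s = 2GM/c². A quick sketch that approximately reproduces the Earth and Sun figures:

```python
# Schwarzschild radius r_s = 2 * G * M / c^2, the size of the event
# horizon for a non-rotating black hole of mass M. SI constants.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s

def schwarzschild_radius(mass_kg: float) -> float:
    return 2 * G * mass_kg / c**2

M_earth = 5.972e24   # kg
M_sun = 1.989e30     # kg

# Diameters, to match the figures quoted in the text (~17 mm, ~5.9 km).
print(f"Earth: diameter ~ {2 * schwarzschild_radius(M_earth) * 1000:.1f} mm")
print(f"Sun:   diameter ~ {2 * schwarzschild_radius(M_sun) / 1000:.2f} km")
```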
Let us look for the answers in the heavenly revelation, the Quran, in Surah At-Tariq, verses 1-3: “By the sky and the nightcomer - and what can make you know what the nightcomer is? It is the piercing star.” In Arabic, the word الثاقب means piercing or bright (it also describes pulsars), or something that bores through holes: the light that penetrates the darkness and reaches far. The word الثاقب originates from the word ثقب: ثَقْب - ثُقْب - bore; hole; perforation; puncture. ثَقْب - مَصْدَر ثَقَبَ - boring; punching; piercing; perforating; drilling; puncturing; eating holes into.

Albert Einstein’s theory of general relativity profoundly changed our thinking about fundamental concepts in physics, such as space and time. Another mysterious phenomenon is the wormhole: a bridge connecting different points in spacetime, in theory providing a shortcut for space travellers. Whether wormholes really exist remains to be seen. Theoretically, a wormhole might connect extremely long distances, such as a billion light-years, or short distances, such as a few meters, or different points in time, or even different universes. Some wormholes may be traversable, meaning humans may be able to travel through them. But for that they would need to be sufficiently large and kept open against the force of gravity, which tries to close them; pushing spacetime outward in this way would require huge amounts of negative energy.

In the Quran, Almighty Allah says in Surah Nuh, verse 15: “Do you not see how Allah created seven heavens, one above the other” (in layers). In Surah Muminoon, verse 17: “We have indeed fashioned above you seven paths. Never were We unaware of the task of creation.” The word طرائق (taraiq) has more than one meaning. This verse is taken as providing a clue to the Meraaj of the Prophet (wormholes and black holes).

Regarding crossing a black hole, modern physics says that a human could do this only if the black hole in question were supermassive and isolated, and one would have to move at a speed greater than that of light. So far, no technology comes anywhere near such speeds; the limiting speed of light is about 3×10⁸ m/s (299,792,458 m/s). Only something attaining such a speed could be lost in a black hole.

It has been mentioned in the Quran, in Surah Al-Isra, and according to a Sahih Hadith of Sahih Muslim: “I was brought al-Buraq, who is an animal white and long, larger than a donkey but smaller than a mule, who would place his hoof at a distance equal to the range of vision. I mounted it and came to the Temple (Bait-ul Maqdis in Jerusalem), then tethered it to the ring used by the prophets.” The Arabic word البراق (Buraq) means lightning, sparkling, or bright. According to the Islamic point of view, it is a creature from the heavens that carried the Prophet (PBUH) from Earth to the heavens and back (the Meraaj).

In the sight of Allah, all information pertaining to this moment is kept in a Book. This Main Book, or as the Quran calls it, “The Mother of the Book,” holds every bit of information about everything: “Certainly there is no hidden thing in either heaven or earth which is not in a Clear Book.” (Qur’an, 27:75)
Central Angles and Arcs

There are several different angles associated with circles. Perhaps the one that most immediately comes to mind is the central angle: an angle formed by two radii, with its vertex resting on the center of the circle. (In a diagram of circle A, the point A at the center is the vertex of the central angle.) It is the central angle's ability to sweep through an arc of 360 degrees that determines the number of degrees usually thought of as being contained by a circle.

The measure of a central angle is equal to the measure of its intercepted arc. Note: the term "intercepted arc" refers to an arc "cut off" or "lying between" the sides of the specified angle. In a circle, or in congruent circles, congruent central angles have congruent arcs, and congruent central angles have congruent chords.

An inscribed angle is an angle with its vertex "on" the circle, formed by two intersecting chords. An angle inscribed in a semicircle is a right angle; this is called Thales' theorem.

The measure of an angle with its vertex inside the circle is half the sum of the intercepted arcs. The measure of an angle with its vertex outside the circle is half the difference of the intercepted arcs.

Arc measure and arc length are related by a proportion: an arc's length is to the circumference as its arc measure is to 360 degrees. If we solve the proportion for arc length, and replace "arc measure" with its equivalent "central angle", we can establish the formula:

arc length = (central angle / 360°) × 2πr
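A short sketch translating the rules above into code; the functions simply restate the arc-length proportion and the vertex-inside/vertex-outside angle rules:

```python
import math

def arc_length(central_angle_deg: float, radius: float) -> float:
    # arc length = (central angle / 360) * 2 * pi * r
    return (central_angle_deg / 360.0) * 2.0 * math.pi * radius

def angle_vertex_inside(arc1_deg: float, arc2_deg: float) -> float:
    # Half the SUM of the intercepted arcs.
    return (arc1_deg + arc2_deg) / 2.0

def angle_vertex_outside(far_arc_deg: float, near_arc_deg: float) -> float:
    # Half the DIFFERENCE of the intercepted arcs.
    return (far_arc_deg - near_arc_deg) / 2.0

print(arc_length(90, 5))              # quarter circle of radius 5 -> ~7.85
print(angle_vertex_inside(80, 40))    # 60 degrees
print(angle_vertex_outside(100, 30))  # 35 degrees
```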
VERC: Vectors: An Introduction

When you want to precisely define the motion of an object, how would you do it? More than likely, you'd give a direction and a speed at which it is moving in that direction, as in "the car is moving at 30 m.p.h. due north." Now, if I was to tell you to give me exactly the same definition but with a vector, what would you do? If you didn't know what a vector was, then you may just stare blankly and say "uhh" a few times. This tutorial aims to tell you exactly what a vector is, what you can do with them, and how they are used in 3D graphics and physics. Without vectors, we wouldn't have moving objects – we wouldn't have a game. Nothing could move, rotate, skew, scale, or go bang without vectors. So, what the hell are they? Well, they are essentially a direction and a size in any spatial N dimensions – although all you need to worry about are 2D and 3D vectors. For the purposes of this tutorial, they will be discussed as 3D vectors unless otherwise specified. To express a vector, 3 components are given – a coordinate in world space. Picture an arrow going from the origin (at the coordinate 0,0,0) to a position in 3D space, and that's a vector – although it's not physically an arrow, they are drawn on diagrams as arrows. The length of the arrow is what's called its magnitude (in the context of physics, its speed, or a distance) and the direction it points in is, well, the vector's direction. In an equation or formula, a vector is shown as a bold, capital letter with half an arrow over the top (sometimes, the half-arrow is omitted). Vectors can be written in a couple of different ways, with the main one being V = (x, y, z), where x, y and z are the 3 components of the vector. There is another notation where a vector is shown as a one-column, three-row matrix, and another where it can be shown as a one-row, three-column matrix (called the transpose of the column vector, shown above). In practice, you don't need to know exactly what the latter two forms are used for (until you start learning matrix mathematics) – all that matters at this moment in time is that they are the 3 main ways used to define a vector. Vectors can be treated like any other mathematical object – so, they have certain operations that can be done on them, such as addition, subtraction, multiplication, and division. When you add two vectors together, you just add together the respective components, like so: A + B = (Ax + Bx, Ay + By, Az + Bz). Subtracting one vector from another is exactly the same principle. When you add two vectors together, you end up with a vector that is at the point that is reached when you place one vector onto the end of the other. Another thing you can do is to multiply or divide a vector by a scalar value (you cannot multiply or divide a vector by another vector in this component-wise sense), like so: kV = (kx, ky, kz). Again, dividing is the same principle. Multiplying and dividing a vector by scalar values will increase or decrease its length, a useful operation. So, you've got your vector, but what do you do with it? Well, earlier I mentioned that a vector has magnitude and direction. The direction is obvious, but getting the magnitude requires some manipulation – specifically, a 3D version of Pythagoras' theorem (the square on the hypotenuse is equal to the sum of the squares on the other two sides). The length of a vector, shown by two bar lines on either side of the vector's symbol, is defined by |V| = √(x² + y² + z²). The three items within the square root are the three components of the vector, squared.
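To make the operations above concrete, here is a minimal Python sketch of 3D vector addition, scalar multiplication, and magnitude (Python is just for illustration here; the tuple representation and function names are our own, not part of the original tutorial):

import math

def add(a, b):
    # Component-wise addition: (ax+bx, ay+by, az+bz)
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def scale(v, k):
    # Multiplying by a scalar stretches or shrinks the vector's length
    return (v[0] * k, v[1] * k, v[2] * k)

def magnitude(v):
    # 3D Pythagoras: sqrt(x^2 + y^2 + z^2)
    return math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)

print(add((1, 2, 3), (4, 5, 6)))   # (5, 7, 9)
print(scale((1, 2, 3), 2))         # (2, 4, 6)
print(magnitude((3, 4, 0)))        # 5.0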
Now you can get the length of your vector, and you know its direction, but what if you want to change the length of the vector and keep the direction? You need to change the vector so that it has a magnitude of 1 – and to do that, you must normalize the vector. To normalize a vector, you divide the vector by its magnitude: V_normalized = V / |V|. This will give you a vector of length 1, providing the original vector was not of length 0 (i.e. it was 0,0,0). When a vector is of magnitude one, it is called a unit vector. When you have a unit vector, you can multiply it by the length you want it to be, or you can perform operations on it as is (unit vectors are very useful, as you will find out later). Now we start to get into more vector-specific operations. We'll start with the dot product, as it's one of the most useful operations you can do. It is defined by: A · B = AxBx + AyBy + AzBz. As you can see, this gives you a scalar value. So what use is this? Well, take a look at this: A · B = |A| |B| cos α. Alpha (the weird-looking 'a' – a Greek letter) is the angle between the two vectors on the 3D plane that has the origin, and the two vectors, lying on it. This means that we can use the dot product to calculate the angle between two vectors, and therefore how far apart they are. If both vectors are unit-length (see earlier on normalization) then the equation simplifies to A · B = cos α, giving α = cos⁻¹(A · B). As you can guess, this is an exceedingly useful tool in 3D graphics, and one that is used in all kinds of ways (one notable one being determining if a polygon faces away from or towards the camera). Another extremely useful vector operation is the cross product. It takes two vectors and gives another vector, perpendicular to the input two. It is defined by: A × B = (AyBz − AzBy, AzBx − AxBz, AxBy − AyBx). Again, this is used widely in 3D graphics, two uses being calculating the normal vector (not to be confused with normalization) of a triangle (the vector that is perpendicular to its plane) and finding a third axis given two others. So, that's all well and good, but how can these be applied in a practical manner? Well, let's take a practical example to show how vectors are invaluable in physics. Imagine we have a plane flying along in the air, in a specific direction, at a specific speed. As you have probably guessed, we can represent this with a vector. In this sort of situation, we would normally store a scalar value for the speed, and a unit-length vector for the direction. We'll call the vector V, and the speed s. We'll also assume that the plane has a position stored (vectors can also be used to simply store positions, as they store exactly the same information as a 3D coordinate), which we'll call P. Consider this: Pos(t) = P + s·t·V. In the above equation, t is the current time, in seconds. So what does this mean? Well, it means that the object's position at time t is equal to its starting position plus its speed multiplied by the current time, multiplied by its direction. (If you're not sure about this, do some diagrams on paper using 2D vectors, and vary the speed and time.) So, using vectors, we are already able to show the position of an object at any moment in time (providing it has a constant speed – an exercise would be to modify it so that it has constant acceleration over time). However, in our world, there are other forces that affect a moving object – one of them being gravity. Gravity can also be represented as a vector, pointing downwards. The longer the vector, the stronger gravity is. We'll call our gravity G, and assume that for a downward gravitational pull the Y component of the vector is negative.
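Here is a minimal Python sketch of the normalization, dot-product, and cross-product operations just described (the function names are our own, for illustration only):

import math

def normalize(v):
    # Divide by the magnitude to get a unit vector (length 1);
    # undefined for the zero vector, so we guard against it.
    m = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
    if m == 0:
        raise ValueError("cannot normalize the zero vector")
    return (v[0] / m, v[1] / m, v[2] / m)

def dot(a, b):
    # ax*bx + ay*by + az*bz; for unit vectors this equals cos(alpha)
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def cross(a, b):
    # The result is perpendicular to both inputs
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# Angle between two unit vectors via the dot product:
u, w = normalize((1, 0, 0)), normalize((1, 1, 0))
print(math.degrees(math.acos(dot(u, w))))  # approximately 45.0
print(cross((1, 0, 0), (0, 1, 0)))         # (0, 0, 1), the z axis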
This gives us our new formula: Pos(t) = P + s·t·V + t·G. We're now making use of vector addition to pull the position towards the ground (or whatever direction gravity is pulling in) over time. You can add other things, such as air resistance, but they all follow the same principle. All the principles discussed in here are widely used within 2D and 3D graphics, and physics. Without vectors, we'd be a bit stuck – we would have no measure of velocity and no measure of direction. Hopefully this tutorial has enlightened you as to what vectors are and how they can be used. Notes for advanced readers: only 3-dimensional vectors were discussed in this document. However, vectors can be n-dimensional, as mentioned at the beginning. Here are the formulae for all of the operations discussed above (except for the cross product, which is hard to interpret as a formula in an arbitrary number of dimensions except as a determinant, which is beyond the scope of this document). In all formulae, n is the number of dimensions, and a number subscript of a vector variable indicates that component: Vector addition: A + B = (A1 + B1, A2 + B2, ..., An + Bn). Vector multiplication (by a scalar): kA = (kA1, kA2, ..., kAn). Vector magnitude: |A| = √(A1² + A2² + ... + An²). Vector dot product: A · B = A1B1 + A2B2 + ... + AnBn. Reference: Mathematics for 3D Game Programming and Computer Graphics, Eric Lengyel, Charles River Media, 2002.
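As a quick sketch of the motion formula with gravity (our own variable names; this assumes, as the text does, that G is a constant per-second pull added by plain vector addition, not a true acceleration term):

# Position of the plane at time t: start + speed*t*direction + t*gravity.
# All vectors are (x, y, z) tuples; s is a scalar speed in units/second.
def position_at(p, s, v, g, t):
    return tuple(p[i] + s * t * v[i] + t * g[i] for i in range(3))

P = (0.0, 100.0, 0.0)   # starting position
V = (1.0, 0.0, 0.0)     # unit-length direction of travel
s = 50.0                # speed
G = (0.0, -9.8, 0.0)    # downward gravity vector (negative Y component)

for t in (0, 1, 2):
    print(t, position_at(P, s, V, G, t))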
Money and the Prices in the Long Run and Open Economies – economic outlook. National Bureau of Economic Research. Purpose of Assignment: Week 3 will help students develop an understanding of what money is, what forms money takes, how the banking system helps create money, and how the Federal Reserve controls the quantity of money. Students will learn how the quantity of money affects inflation and interest rates in the long run, and production and employment in the short run. Students will find that, in the long run, there is a strong relationship between the growth rate of money and inflation. Students will review the basic concepts macroeconomists use to study open economies and will address why a nation's net exports must equal its net capital outflow. Students will demonstrate the relationship between the prices and quantities in the market for loanable funds and the prices and quantities in the market for foreign-currency exchange. Students will learn to analyze the impact of a variety of government policies on an economy's exchange rate and trade balance. Resources: National Bureau of Economic Research. Develop a 2,100-word economic outlook that includes the following: - Using data from the above link to the NBER, and other sources, analyze the history of changes in U.S. GDP, inflation, and unemployment, and compare these to forecasts for each of them for the next five years. - Discuss how government policies, such as fiscal and monetary policy, can influence economic growth. - Analyze how monetary policy could influence the long-run behavior of inflation rates and other real or nominal variables. - Describe how trade deficits or surpluses can influence the growth of productivity and GDP. - Discuss the importance of the market for loanable funds and the market for foreign-currency exchange to our economic growth. - Recommend, based on your above findings, what the government should do to encourage economic growth. Use a minimum of three sources. See rubric. Format your paper consistent with APA guidelines.
Before we begin graphing systems of equations, a good starting point is to review our knowledge of 2-D graphs. These graphs are known as 2-D because they have two axes. Find an online image of a graph to use as the foundation of your discussion. (This is easily accomplished by searching within Google Images.) Using your graph as the example: 1. Select any two points on the graph and apply the slope formula, interpreting the result as a rate of change (units of measurement required); and 2. Use rate of change (slope) to explain why your graph is linear (constant slope) or not linear (changing slopes). Embed the graph into the post by copying and pasting into the discussion. You must cite the source of the image. Also be sure to show the computations used to determine slope. Professor and class, I had an error pasting in my work from Microsoft Word. I apologize, as my fractions did not copy over. Here is the complete response.
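As a quick illustration of the slope computation the prompt asks for (a hypothetical example with made-up points, not the graph from the original post): if a distance-versus-time graph passes through the points (1 hour, 20 miles) and (2 hours, 40 miles), the slope formula gives m = (40 - 20) / (2 - 1) = 20, a rate of change of 20 miles per hour. If every pair of points on the graph yields this same slope, the graph is linear; if different pairs give different slopes, it is not linear.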
How did the Market Revolution impact the North and the South differently? Broadly speaking, in the North, the Market Revolution led to increased urbanization and economic integration. A major part of both of these trends was industrialization, of which the textile industry was the most prominent example. The Market Revolution also saw rapid growth of infrastructure in the North, including state- and federally-subsidized roads, canals, and eventually railroads. Additionally, the Market Revolution, as well as other external forces (especially famine in Ireland), led to massive immigration during the 1830s and especially the 1840s. In the South, the Market Revolution, particularly in the Deep South, was characterized by the expansion of cotton agriculture, famously facilitated by the invention of the cotton gin. Far from the idyllic plantations romanticized in historical memory, hunger for new lands to cultivate cotton led to rampant speculation and aggressive land-grabs in the fertile black belts of western Georgia, Alabama, and Mississippi (and later Texas). This not only led to the deportation of Southeastern Indians, but created an insatiable demand for slave labor. Because the foreign slave trade was banned, southern planters turned to the border South, particularly Virginia, where there was a labor surplus. This led to one of the other defining characteristics of the Market Revolution, namely the internal slave trade. Thousands of slaves were taken in coffles to the Deep South, where they labored on the new plantations. On the other hand, the rise of Jacksonian democracy, with its emphasis on extending the franchise to all white men, was also a legacy of the Market Revolution in the South as well as the North. And the Market Revolution also led to increased economic integration of the North and South, as well as the West. This took place even as the regions began to experience sectional tensions over the expansion of slavery and other issues related to slavery. The Market Revolution in the United States saw the reemergence of mercantilist thinking, with the nation seeking to increase its reserves of gold and silver. Capitalism took precedence due to increased industrialization and improvements in transport and communication, which enhanced modern trade. Northern cities took advantage of the Industrial Revolution and improved infrastructure to accelerate their manufacturing economy. On the other hand, Southern cities were opposed to the changes and influences of supply and demand and continued to place their emphasis on agriculture. The North and South were pulling in different directions, which brought their inherent differences to the fore. The Market Revolution increased the need for labor on the plantations, leading to an increasing need for slaves. The North had banned slavery and was pushing the South to do the same. However, the more the North needed raw materials for manufacturing, the more the South needed labor to satisfy the needs of the North. The market revolution of the early nineteenth century resulted in furthering the industrialization of the North while causing increased reliance on agriculture in the South. The market revolution was the result of a campaign to improve the country's transportation infrastructure after the War of 1812. As a result, canals and railroads expanded, mainly linking the Northeast with the Midwest.
The North became the site of factories that turned southern cotton into textiles, while the Midwest developed into the "bread basket" of the country, supplying the northeast with grains, meat, and other products. The South, on the other hand, became increasingly reliant on cotton production, particularly after the invention of the cotton gin in 1794, which facilitated the process of removing seeds from cotton plants. The market revolution, then, resulted in increased economic differentiation between the North and South in the years before the Civil War.
Hi, in this tutorial, we are going to write a program that shows an example of Merge Sort in Python. What is Merge Sort? In computer science, merge sort is an efficient, general-purpose, comparison-based sorting algorithm. Most implementations produce a stable sort, which means that the order of equal elements is the same in the input and output. It is a divide-and-conquer algorithm. In the divide-and-conquer paradigm, a problem is broken into pieces where each piece still retains all the properties of the larger problem – except its size. Advantages of Merge Sort: 1. Much more efficient for small and large data sets. 2. Adaptive variants are efficient for data sets that are already substantially sorted. 3. Stable sorting algorithm.

Define Merge Sort Function. Now, let's define a new function named mergeSort which accepts one parameter, the list we pass as an argument to this function. This function sorts an array or list using the merge sort algorithm. As we have discussed above, to solve the original problem, each piece is solved individually and then the pieces are merged back together. For that, we are going to use recursive calls together with a new function named merge, which accepts two sorted arrays and forms a single sorted array. Now in the mergeSort function, the base condition for our recursive call is that if the length of the array or list is equal to 0 or 1, we simply return the array as it is (it is already sorted). Otherwise, we divide the array into two roughly equal halves and pass both halves to recursive calls of mergeSort. And at last, we call the merge function on the results of the recursive calls to join both sorted arrays.

def mergeSort(x):
    if len(x) == 0 or len(x) == 1:
        return x
    else:
        middle = len(x) // 2
        a = mergeSort(x[:middle])
        b = mergeSort(x[middle:])
        return merge(a, b)

Define Merge Function. By now we have broken the array down until each piece is a single element. So what we want is to join the (already sorted) arrays passed to this function in sorted order, and then return the new array as a result.

def merge(a, b):
    c = []
    # Repeatedly take the smaller of the two front elements
    while len(a) != 0 and len(b) != 0:
        if a[0] < b[0]:
            c.append(a[0])
            a.remove(a[0])
        else:
            c.append(b[0])
            b.remove(b[0])
    # One list is exhausted; append whatever remains of the other
    if len(a) == 0:
        c += b
    else:
        c += a
    return c

The overall time complexity of merge sort is O(n log n). The space complexity of merge sort is O(n). This means that this algorithm takes extra space and may slow down operations for large data sets.

Define Main Condition. Now, let's create a main condition where we call the above function and pass the list which needs to be sorted. So let's manually define the list which we want to pass as an argument to the function.

if __name__ == '__main__':
    List = [3, 4, 2, 6, 5, 7, 1, 9]
    print('Sorted List : ', mergeSort(List))

The complete program:

# Code for the merge step
def merge(a, b):
    c = []
    while len(a) != 0 and len(b) != 0:
        if a[0] < b[0]:
            c.append(a[0])
            a.remove(a[0])
        else:
            c.append(b[0])
            b.remove(b[0])
    if len(a) == 0:
        c += b
    else:
        c += a
    return c

# Code for merge sort
def mergeSort(x):
    if len(x) == 0 or len(x) == 1:
        return x
    else:
        middle = len(x) // 2
        a = mergeSort(x[:middle])
        b = mergeSort(x[middle:])
        return merge(a, b)

if __name__ == '__main__':
    List = [3, 4, 2, 6, 5, 7, 1, 9]
    print('Sorted List : ', mergeSort(List))

Hope you guys like the tutorial, feel free to drop any comments in the comment section below.
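One caveat worth noting (our addition, not from the original tutorial): removing the front element of a Python list is itself an O(n) operation, so the merge above does more work than the textbook linear-time merge. A minimal index-based sketch (the name merge_indexed is our own) that avoids mutating the input lists:

def merge_indexed(a, b):
    c = []
    i = j = 0
    # Walk both sorted lists once, always taking the smaller front element
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            c.append(a[i])
            i += 1
        else:
            c.append(b[j])
            j += 1
    # One list is exhausted; extend with the remainder of the other
    c += a[i:]
    c += b[j:]
    return c

Swapping this in for merge in the program above leaves the output unchanged while keeping the merge step linear in the size of its inputs.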
BA PART I : PAPER 3.13

When a plane surface is extended, it is called a plane. It has got only length and breadth and not width. It is a two-dimensional figure. Three points which are not in a straight line determine a plane. Its examples are the upper part of the floor, the top of a table, etc. All the points of a curved surface do not lie in a plane, for example the surface of a cone, the surface of a ball, etc. All the points of a circle are equidistant from one point. That point is the centre of the circle. The distance of the centre from any point on the circle is called the radius. A straight line passing through the centre and meeting the circle at two points opposite to each other is called the diameter. If a semi-circle is rotated around its diameter, the three-dimensional figure formed is called a sphere. All the points on the surface of a sphere are equidistant from the centre and this distance is known as the radius. Football and tennis balls are its examples. An ellipse is a figure of two dimensions and is like an egg. Fix two pins on the board and loop a string around them, keeping it tightly stretched; the figure drawn is an ellipse. In figure - 1 the points F and H where the pins are fixed are the foci of the ellipse. Let a few points on the ellipse be A, G, B, C, D and E; then HC + CF = HD + DF = HE + EF = HA + AF = HG + GF = constant. It is commonly used in Astronomy. A great circle on the surface of the sphere is that circle whose plane passes through the centre of the sphere. This circle divides the sphere into two equal parts and its diameter passes through the centre of the sphere. The circle made on the surface of an apple or an orange, when they are cut from the middle, is a great circle. In figure - 2, C is the centre of the sphere and circle ADB is a great circle, whose diameter ACB is passing through the centre C. It is evident that great circles on a sphere are equal in magnitude. A plane which does not pass through the centre of the sphere makes a small circle at the surface of the sphere. As such these circles are smaller than a great circle and are called small circles. In figure - 2, circle EHF is a small circle as its diameter does not pass through the centre. A straight line drawn perpendicular to the plane of a great circle and passing through the centre of the circle/sphere meets the sphere at two points. These points are the poles of the great circle, one above the great circle and the other below it. In figure - 2, N and S are the poles of great circle ADB. Properties of a great circle and poles: (a) Great circles passing through the poles of a great circle are called secondaries to the latter circle. These secondaries cut the great circle at right angles. In figure - 2, ADB is a great circle and its poles are N and S. The great circles NLS and NMS are secondaries and the angles between them at L and M are right angles. (b) Every great circle has two poles on the sphere, in opposite directions. (c) A pole has only one great circle. The axis is a straight line passing through the poles of a great circle and its centre. In figure - 2, NCS is the axis of great circle ADB. In Astronomy the axis is a line around which a planet rotates. Shape of the earth: the earth is a spheroid like an orange. It is a bit flat at the poles. The terrestrial equator is an imaginary great circle on the surface of the earth which divides the earth into two equal hemispheres, and its poles are in the centre of both the flat portions. If the sphere in figure - 2 is taken as the earth, the circle ADB is the terrestrial equator, and N and S are its poles.
The portion above this equator is called the Northern hemisphere and the lower one the Southern hemisphere. NCS is the axis of the earth and C is its centre. To find the position of a place on the earth, we require a set of coordinates, which are called latitude and longitude. Imagine small circles on the surface of the earth parallel to the equator. The centres of all these circles will lie on the axis of the earth. In figure - 3, ACB is the diameter of the equator and NCS is the axis of the earth. M is a place on the earth. EMF is a small circle parallel to the equator. NMRS is a great circle which intersects the small circle at M. The angle MCR (measured along this meridian, perpendicular to the equator) is the terrestrial latitude of the place M. It is the latitude of all places lying on the small circle EMF. The places north of the equator have their latitudes between 0° and 90° (N) and those in the southern hemisphere have their latitudes between 0° and 90° (S). Tropic of Cancer: the Tropic of Cancer is an imaginary line round the surface of the earth parallel to the equator at an angular distance (latitude) of 23° 27' (N). The sun shines overhead there at mid-noon on about 21st June every year. The sun enters the sayana Cancer sign at this time. This is the reason for naming this line the Tropic of Cancer. Tropic of Capricorn: when the sun goes maximum towards the south on its orbit, it shines overhead at places whose latitudes are 23° 27' (S) and the declination of the sun becomes 23° 27' (S). This small circle at 23° 27' (S) is called the Tropic of Capricorn as the sun enters the sayana Capricorn sign. This happens about 23rd December every year. 4.11 Terrestrial Meridian: the great circles on the surface of the earth passing through the poles N and S are called the terrestrial meridians of the places through which they pass. To divide the earth between northern and southern hemispheres, the equator is there, whose latitude is zero; but for the starting point of longitudes a problem arises as to which meridian should be taken as the starting point. In olden times the meridian passing through Ujjain used to be taken as the Prime meridian (zero degree meridian). Nowadays the meridian passing through the Greenwich observatory near London is considered the Prime meridian, and the counting of longitudes starts from this meridian. The angle of the arc intercepted at the equator between the Prime meridian and the meridian of a place is known as the terrestrial longitude of that place. If the place is in the east of the Prime meridian the longitudes are Eastward and 'E' is suffixed after the degrees of longitude, and in the case of West, 'W' is suffixed. Longitudes can be 0° to 180° (E) or 0° to 180° (W). In figure - 3, G is Greenwich and NGQS is the Prime meridian. The longitude of M is the angle QCR (E) as it is in the east of Greenwich. Every country/zone is not only spread from North to South but from East to West also. The earth rotates around its axis from West to East and completes one rotation in one day, i.e. 24 hours. Therefore, the sun appears to move from East to West daily. The places which are in the east will have their mid-noon earlier than the places in the west, resulting in a difference of local time. Local time of the places in the east will be ahead of the local time of the places in the west. A problem arises that there will be a difference of time at places east and west within a country/zone, and the time schedule of trains, aeroplanes, TV etc. cannot be framed.
A person travelling from east to west or vice versa will have to adjust his watch frequently due to the difference of local time. This was solved by finding a way out: in a country/zone a meridian is chosen whose local time is followed throughout the country/zone, and worldly affairs are regulated according to it. This meridian is called the standard or central meridian of the country/zone and its local time is said to be the standard time of that country/zone. If air is pumped into a balloon, its size increases gradually. Similarly, if the earth is projected into space, the surface of this projected sphere is known as the celestial sphere. The centre of the earth is the centre of the celestial sphere. It is also defined as an imaginary sphere in the sky, centered on the earth. The places where the North and South poles of the earth meet the projected sphere (celestial sphere) are the celestial North and South poles. The North pole is near Polaris and is directly above the North pole of the earth. The celestial equator is defined as the intersection of the earth's equatorial plane with the celestial sphere. The celestial equator is a great circle on the celestial sphere, midway between the poles. The place where the projected earth's equator meets the celestial sphere is the celestial equator. The apparent annual path of the sun among the stars is known as its orbit. When this orbit is expanded, the great circle formed by its intersection with the celestial sphere is known as the ecliptic. Actually the earth is revolving around the sun. The place where the plane of the earth's orbit meets the celestial sphere is the ecliptic. The plane of the earth's orbit is the plane of the ecliptic. The mean angle between the planes of the ecliptic and the celestial equator is 23° 27'. The angle between these two planes varies from time to time and is called the obliquity of the ecliptic. In figure - 4, APB is the celestial equator and EQF is the ecliptic. C is the centre. The angle between their planes, or say their diameters, is this obliquity. Our ancestors observed that the Moon and planets were never at a great angular distance from the ecliptic. They, therefore, conceived an imaginary belt in the heavens extending about 9° on either side of the ecliptic. This belt is known as the zodiac. The Moon and other planets are found in this belt. Pluto sometimes goes out of this belt. The earth and the sun are naturally in the middle of this belt, which is the ecliptic. In figure - 4 of the celestial sphere, EQF is the ecliptic; JK and GH are circles parallel to the ecliptic at a distance of 9° above and below it. This belt JKHG is called the zodiac. In other words, the space covered by JEG moving round the celestial sphere is the zodiac. The definitions as given above have been explained with figures. The celestial longitude of a heavenly body at any time is the angle of the arc measured along the ecliptic, from the first point of Aries to the foot of the perpendicular drawn on the ecliptic from the heavenly body.
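Since latitude and longitude, as defined above, are angular coordinates measured at the centre of the earth, the arc of a great circle between two places can be computed directly from them. Here is a minimal Python sketch using the haversine formula (our own illustration, not part of the text; the earth-radius value and the sample coordinates are assumptions):

import math

def great_circle_distance(lat1, lon1, lat2, lon2, radius_km=6371.0):
    # Great-circle distance between two (latitude, longitude) points given
    # in degrees, returned in the same units as radius_km.
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    # Haversine formula: a is the square of half the chord length
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# Distance from Greenwich (about 51.48 N, 0 E) to Ujjain (about 23.18 N, 75.78 E),
# the two prime meridians mentioned above; roughly 7,000 km.
print(round(great_circle_distance(51.48, 0.0, 23.18, 75.78)))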
Sea level rise

A sea level rise is an increase in global mean sea level as a result of an increase in the volume of water in the world's oceans. Sea level rise is usually attributed to global climate change by thermal expansion of the water in the oceans and by melting of ice sheets and glaciers on land. The melting of floating ice shelves and icebergs at sea would raise sea levels only by about 4 cm (1.6 in). Sea level rise at specific locations may be more or less than the global average. Local factors might include tectonic effects, subsidence of the land, tides, currents, storms, etc. Sea level rise is expected to continue for centuries. Because of long response times for parts of the climate system, it has been estimated that we are already committed to a sea-level rise within the next 2,000 years of approximately 2.3 metres (7.5 ft) for each degree Celsius of temperature rise. The Intergovernmental Panel on Climate Change (IPCC) Summary for Policymakers, AR5, 2014, predicts that the global mean sea level rise will continue during the 21st century, very likely at a faster rate than observed from 1971 to 2010. Projected rates and amounts vary. A January 2017 NOAA report suggests that a global mean sea level (GMSL) rise of 0.3–2.5 m is possible during the 21st century. Widespread coastal flooding would be expected if several degrees of warming is sustained for millennia. For example, sustained global warming of more than 2 °C (relative to pre-industrial levels) could lead to an eventual sea level rise of around 1 to 4 m due to thermal expansion of sea water and the melting of glaciers and small ice caps. Two main mechanisms contribute to observed sea level rise: (1) thermal expansion, because of the increase in ocean heat content (ocean water expands as it warms); and (2) the melting of major stores of land ice like ice sheets and glaciers. Based on figures for 1993–2008, two thirds (68%) of recent sea level rise has been attributed to melting ice, and roughly one third has come from thermal expansion. On the timescale of centuries to millennia, the melting of ice sheets could result in even higher sea level rise. Partial deglaciation of the Greenland ice sheet, and possibly the West Antarctic ice sheet, could contribute 4 to 6 m (13 to 20 ft) or more to sea level rise.

Past changes in sea level

Various factors affect the volume or mass of the ocean, leading to long-term changes in eustatic sea level. The two primary influences are temperature (because the density of water depends on temperature), and the mass of water locked up on land and sea as fresh water in rivers, lakes, glaciers and polar ice caps. Over much longer geological timescales, changes in the shape of oceanic basins and in land–sea distribution affect sea level. Since the Last Glacial Maximum about 20,000 years ago, sea level has risen by more than 125 m, with rates varying from tenths of a mm/yr to 10+ mm/yr, as a result of melting of major ice sheets. During deglaciation between about 19,000 and 8,000 calendar years ago, sea level rose at extremely high rates as the result of the rapid melting of the British-Irish Sea, Fennoscandian, Laurentide, Barents-Kara, Patagonian and Innuitian ice sheets and parts of the Antarctic ice sheet. At the onset of deglaciation, about 19,000 calendar years ago, a brief, at most 500-year-long, glacio-eustatic event may have contributed as much as 10 m to sea level with an average rate of about 20 mm/yr.
During the rest of the early Holocene, the rate of sea level rise varied from a low of about 6.0–9.9 mm/yr to as high as 30–60 mm/yr during brief periods of accelerated sea level rise. Solid geological evidence, based largely upon analysis of deep cores of coral reefs, exists only for 3 major periods of accelerated sea level rise, called meltwater pulses, during the last deglaciation. They are Meltwater pulse 1A between circa 14,600 and 14,300 calendar years ago; Meltwater pulse 1B between circa 11,400 and 11,100 calendar years ago; and Meltwater pulse 1C between 8,200 and 7,600 calendar years ago. Meltwater pulse 1A was a 13.5 m rise over about 290 years centered at 14,200 calendar years ago, and Meltwater pulse 1B was a 7.5 m rise over about 160 years centered at 11,000 calendar years ago. In sharp contrast, the period between 14,300 and 11,100 calendar years ago, which includes the Younger Dryas interval, was an interval of reduced sea level rise at about 6.0–9.9 mm/yr. Meltwater pulse 1C was centered at 8,000 calendar years ago and produced a rise of 6.5 m in less than 140 years. Such rapid rates of sea level rise during meltwater events clearly implicate major ice-loss events related to ice sheet collapse. The primary source may have been meltwater from the Antarctic ice sheet. Other studies suggest a Northern Hemisphere source for the meltwater, in the Laurentide ice sheet. Recently, it has become widely accepted that late Holocene sea level, from 3,000 calendar years ago to the present, was nearly stable prior to an acceleration of the rate of rise that is variously dated between 1850 and 1900 AD. Late Holocene rates of sea level rise have been estimated using evidence from archaeological sites and late Holocene tidal marsh sediments, combined with tide gauge and satellite records and geophysical modeling. For example, this research included studies of Roman wells in Caesarea and of Roman piscinae in Italy. These methods in combination suggest a mean eustatic component of 0.07 mm/yr for the last 2000 years. Since 1880 the ocean has risen briskly, climbing a total of 210 mm (8.3 in) through 2009, causing extensive erosion worldwide and costing billions. Sea level rose by 6 cm during the 19th century and 19 cm in the 20th century. Evidence for this includes geological observations, the longest instrumental records and the observed rate of 20th century sea level rise. For example, geological observations indicate that during the last 2,000 years, sea level change was small, with an average rate of only 0.0–0.2 mm per year. This compares to an average rate of 1.7 ± 0.5 mm per year for the 20th century. Baart et al. (2012) show that it is important to account for the effect of the 18.6-year lunar nodal cycle before concluding that sea level rise is accelerating. Based on tide gauge data, the rate of global average sea level rise during the 20th century lies in the range 0.8 to 3.3 mm/yr, with an average rate of 1.8 mm/yr.

Current state of sea level change

Hansen et al. (1981) published the study "Climate impact of increasing atmospheric carbon dioxide", and predicted that anthropogenic carbon dioxide warming and its potential effects on climate in the 21st century could cause a sea level rise of 5 to 6 m, from melting of the West Antarctic ice sheet alone. The 2007 Fourth Assessment Report (IPCC 4) projected century-end sea levels using the Special Report on Emissions Scenarios (SRES). SRES developed emissions scenarios to project climate-change impacts.
The projections based on these scenarios are not predictions, but reflect plausible estimates of future social and economic development (e.g., economic growth, population level). The six SRES "marker" scenarios projected sea level to rise by 18 to 59 centimetres (7.1 to 23.2 in). Their projections were for the time period 2090–99, with the increase in level relative to average sea level over the 1980–99 period. This estimate did not include all of the possible contributions of ice sheets. Hansen (2007) assumed an ice sheet contribution of 1 cm for the decade 2005–15, with a potential ten-year doubling time for sea-level rise based on a nonlinear ice sheet response, which would yield 5 m this century. The average sea ice decline recorded from 1953 to 2006 is −7.8% ± 0.6% per decade; this is more than three times the size of the average forecast trend of −2.5% ± 0.2% per decade. Even the "worst case scenario" models didn't forecast the extent of the sea ice decline adequately. The quickest rate of sea ice decline from any of the models associated with the Intergovernmental Panel on Climate Change Fourth Assessment Report was −5.4% ± 0.4% per decade. Research from 2008 observed rapid declines in ice-mass balance from both Greenland and Antarctica, and concluded that sea-level rise by 2100 is likely to be at least twice as large as that presented by IPCC AR4, with an upper limit of about two meters. Projections assessed by the US National Research Council (2010) suggest possible sea level rise over the 21st century of between 56 and 200 cm (22 and 79 in). The NRC describes the IPCC projections as "conservative". In 2011, Rignot and others projected a rise of 32 centimetres (13 in) by 2050. Their projection included increased contributions from the Antarctic and Greenland ice sheets. Use of two completely different approaches reinforced the Rignot projection. In its Fifth Assessment Report (2013), the IPCC found that recent observations of global average sea level rise at a rate of 3.2 [2.8 to 3.6] mm per year are consistent with the sum of contributions from observed thermal ocean expansion due to rising temperatures (1.1 [0.8 to 1.4] mm per year), glacier melt (0.76 [0.39 to 1.13] mm per year), Greenland ice sheet melt (0.33 [0.25 to 0.41] mm per year), Antarctic ice sheet melt (0.27 [0.16 to 0.38] mm per year), and changes to land water storage (0.38 [0.26 to 0.49] mm per year). The report also concluded that if emissions continue to keep up with the worst case IPCC scenarios, global average sea level could rise by nearly 1 m by 2100 (0.52−0.98 m from a 1986–2005 baseline). If emissions follow the lowest emissions scenario, then global average sea level is projected to rise by between 0.28−0.6 m by 2100 (compared to a 1986−2005 baseline). The IPCC's projections are conservative, and may underestimate future sea level rise. Other estimates suggest that for the same period, global mean sea level could rise by 0.2 to 2.0 m (0.7–6.6 ft), relative to mean sea level in 1992. The Third National Climate Assessment (NCA), released May 6, 2014, projected a sea level rise of 1 to 4 feet (30–120 cm) by 2100. Decision makers who are particularly susceptible to risk may wish to use a wider range of scenarios, from 8 inches to 6.6 feet (20–200 cm) by 2100. A 2015 study by sea level rise experts concluded that, based on MIS 5e data, sea level rise could accelerate in the coming decades, with a doubling time of 10, 20 or 40 years.
The study abstract explains: - "We argue that ice sheets in contact with the ocean are vulnerable to non-linear disintegration in response to ocean warming, and we posit that ice sheet mass loss can be approximated by a doubling time up to sea level rise of at least several meters. Doubling times of 10, 20 or 40 years yield sea level rise of several meters in 50, 100 or 200 years. Paleoclimate data reveal that subsurface ocean warming causes ice shelf melt and ice sheet discharge." - "Our climate model exposes amplifying feedbacks in the Southern Ocean that slow Antarctic bottom water formation and increase ocean temperature near ice shelf grounding lines, while cooling the surface ocean and increasing sea ice cover and water column stability. Ocean surface cooling, in the North Atlantic as well as the Southern Ocean, increases tropospheric horizontal temperature gradients, eddy kinetic energy and baroclinicity, which drive more powerful storms." However, Greg Holland from the National Center for Atmospheric Research, who reviewed the James (Jim) Hansen study, noted: "There is no doubt that the sea level rise, within the IPCC, is a very conservative number, so the truth lies somewhere between IPCC and Jim." One 2017 study's scenario, assuming high fossil fuel use for combustion and strong economic growth during this century, projects sea level rise of up to 1.32 metres (4.3 ft) on average, and an extreme scenario with as much as 1.89 metres (6.2 ft), by 2100. This could mean rapid sea level rise of up to 19 millimeters per year by the end of the century. The study also concluded that the Paris climate agreement emissions scenario, if met, would result in a median 0.52 metres (1.7 ft) of sea level rise by 2100. Melting of the Greenland ice sheet could contribute an additional 4 to 7.5 m over many thousands of years. It has been estimated that we are already committed to a sea-level rise of approximately 2.3 metres for each degree of temperature rise within the next 2,000 years. Warming beyond the 2 °C target would potentially lead to rates of sea-level rise dominated by ice loss from Antarctica. Continued CO2 emissions from fossil sources could cause additional tens of metres of sea level rise over the next millennia, and could ultimately eliminate the entire Antarctic ice sheet, causing about 58 metres of sea level rise. There is a widespread consensus that substantial long-term sea-level rise will continue for centuries to come even if the temperature stabilizes. IPCC AR4 estimated that at least a partial deglaciation of the Greenland ice sheet, and possibly the West Antarctic ice sheet, would occur given a global average temperature increase of 1–4 °C (relative to temperatures over the years 1990–2000). This estimate was given about a 50% chance of being correct. The estimated timescale was centuries to millennia, and would contribute 4 to 6 metres (13 to 20 ft) or more to sea levels over this period. There is the possibility of a rapid change in glaciers, ice sheets, and hence sea level. Predictions of such a change are highly uncertain due to insufficient scientific understanding. Modeling of the processes associated with a rapid ice-sheet and glacier change could potentially increase future projections of sea-level rise. Hansen (2007) concluded that paleoclimate ice sheet models generally do not include the physics of ice streams, effects of surface melt descending through crevasses and lubricating basal flow, or realistic interactions with the ocean.
The calibration of projected modelling for future sea-level rise is generally done with a linear projection of future sea level. It thus does not include a potential nonlinear collapse of an ice sheet. Each year about 8 mm of precipitation (liquid equivalent) falls on the ice sheets in Antarctica and Greenland, mostly as snow, which accumulates and over time forms glacial ice. Much of this precipitation began as water vapor evaporated from the ocean surface. To a first approximation, the same amount of water appeared to return to the ocean in icebergs and from ice melting at the edges. Scientists had previously tried to estimate which is greater, ice going in or ice coming out (called the mass balance), which is important because a nonzero balance causes changes in global sea level. High-precision gravimetry from satellites determined that Greenland was losing more than 200 billion tons of ice per year, in accord with loss estimates from ground measurement. The rate of ice loss was accelerating, having grown from 137 billion tons in 2002–2003. - The total global ice mass lost from Greenland, Antarctica and Earth's glaciers and ice caps during 2003–2010 was about 4300 billion tons (1,000 cubic miles), adding about 12 mm (0.5 in) to global sea level, enough ice to cover an area comparable to the United States 50 cm (1.5 ft) deep. - The melting of small glaciers on the margins of Greenland and the Antarctic Peninsula would increase sea level by around 0.5 meter. At the extreme potential, according to the Third Assessment Report of the Intergovernmental Panel on Climate Change, if the ice contained within the Greenland ice sheet melted entirely it would increase sea level by 7.2 meters (24 feet), and if the ice contained within the Antarctic ice sheet melted entirely it would produce 61.1 meters (200 feet) of sea-level change, the two together totaling a sea-level rise of 68.3 meters (224 feet). It is estimated that fully melting Antarctica would contribute over 60 metres of sea level rise, and Greenland would contribute more than 7 metres. Small glaciers and ice caps on the margins of Greenland and the Antarctic Peninsula might contribute about 0.5 metres. The latter figure is much smaller than for Antarctica or Greenland, but it could occur relatively quickly (within the coming century), whereas full melting of Greenland would be slow (perhaps 1,500 years to fully deglaciate at the fastest likely rate) and Antarctica even slower. However, this calculation does not account for the possibility of accelerated melting as meltwater flows under and lubricates the larger ice sheets, which would then begin to move much more rapidly towards the sea. In 2002, Eric Rignot and R.H. Thomas found that the West Antarctic and Greenland ice sheets were losing mass, while the East Antarctic ice sheet was close to balance (they could not determine the sign of the mass balance for the East Antarctic ice sheet). Kwok and Comiso (J. Climate, v15, 487–501, 2002) also discovered that temperature and pressure anomalies around West Antarctica and on the other side of the Antarctic Peninsula correlate with recent Southern Oscillation events. In 2005 it was reported that during 1992–2003, East Antarctica thickened at an average rate of about 18 mm/yr, associated with increased precipitation, while West Antarctica showed an overall thinning of 9 mm/yr. A gain of this magnitude is enough to slow sea-level rise by 0.12 ± 0.02 mm/yr. The large volume of ice on the Antarctic continent stores around 70% of the world's fresh water.
This ice sheet is constantly gaining ice from snowfall and losing ice through outflow to the sea. Shepherd et al. (2012) found that different satellite methods for measuring ice mass and change were in good agreement, and that combining methods leads to more certainty, with East Antarctica, West Antarctica, and the Antarctic Peninsula changing in mass by +14 ± 43, −65 ± 26, and −20 ± 14 gigatons (Gt) per year respectively. The same group's 2018 systematic review study estimated that ice loss across the entire continent was 43 gigatons per year on average during the period from 1992 to 2002 but has accelerated to an average of 220 gigatons per year during the five years from 2012 to 2017.

East Antarctic ice sheet (EAIS)

East Antarctica is a cold region with its ground base above sea level, and it occupies most of the continent. The area is dominated by small accumulations of snowfall which become ice and eventually flow seaward in glaciers. The mass balance of the East Antarctic ice sheet as a whole over the period 1980–2004 is thought to be slightly positive (lowering sea level) or near to balance, with a large degree of uncertainty. However, increased ice outflow has been suggested in some regions.

West Antarctic ice sheet (WAIS)

West Antarctica is currently experiencing a net outflow of glacial ice, which will increase global sea level over time. A review of the scientific studies looking at data from 1992 to 2006 suggested a net loss of around 50 gigatons of ice per year was a reasonable estimate (around 0.14 mm of yearly sea-level rise), although significant acceleration of outflow glaciers in the Amundsen Sea Embayment could have more than doubled this figure for the year 2006. Thomas et al. found evidence of an accelerated contribution to sea level rise from West Antarctica. The data showed that the Amundsen Sea sector of the West Antarctic ice sheet was discharging 250 cubic kilometres of ice every year, which was 60% more than precipitation accumulation in the catchment areas. This alone was sufficient to raise sea level at 0.24 mm/yr. Further, thinning rates for the glaciers studied in 2002–03 had increased over the values measured in the early 1990s. The bedrock underlying the glaciers was found to be hundreds of metres deeper than previously known, indicating exit routes for ice from further inland in the Byrd Subpolar Basin. Thus the West Antarctic ice sheet may not be as stable as has been supposed. A 2009 study found that a rapid collapse of the West Antarctic ice sheet would raise sea level by 3.3 metres (11 ft). Observational and modelling studies of mass loss from glaciers and ice caps indicate a contribution to sea-level rise of 0.2–0.4 mm/yr, averaged over the 20th century. The results from Dyurgerov show a sharp increase in the contribution of mountain and subpolar glaciers to sea-level rise, from 0.5 mm/yr in 1996 to 2 mm/yr in 1998, with an average of about 0.35 mm/yr since 1960. Of interest also is Arendt et al., who estimate the contribution of Alaskan glaciers at 0.14 ± 0.04 mm/yr between the mid-1950s and the mid-1990s, increasing to 0.27 mm/yr in the middle and late 1990s. In 2004, Rignot et al. estimated a contribution of 0.04 ± 0.01 mm/yr to sea level rise from southeast Greenland. In the same year, Krabill et al. estimated a net contribution from Greenland to be at least 0.13 mm/yr in the 1990s. Joughin et al. have measured a doubling of the speed of Jakobshavn Isbræ between 1997 and 2003.
This is Greenland's largest outlet glacier; it drains 6.5% of the ice sheet, and is thought to be responsible for increasing the rate of sea-level rise by about 0.06 millimetres per year, or roughly 4% of the 20th-century rate of sea-level increase. Rignot and Kanagaratnam produced a comprehensive study and map of the outlet glaciers and basins of Greenland. They found widespread glacial acceleration below 66° N in 1996, which spread to 70° N by 2005, and that the ice sheet loss rate in that decade increased from 90 to 200 cubic km/yr; this corresponds to an extra 0.25–0.55 mm/yr of sea level rise. In July 2005 it was reported that the Kangerdlugssuaq Glacier, on Greenland's east coast, was moving towards the sea three times faster than a decade earlier. Kangerdlugssuaq is around 1,000 m thick, 7.2 km (4.5 miles) wide, and drains about 4% of the ice from the Greenland ice sheet. Measurements of Kangerdlugssuaq in 1988 and 1996 showed it moving at between 5 and 6 km/yr (3.1–3.7 miles/yr), while in 2005 that speed had increased to 14 km/yr (8.7 miles/yr). According to the 2004 Arctic Climate Impact Assessment, climate models project that local warming in Greenland will exceed 3 °C during this century. Also, ice-sheet models project that such a warming would initiate the long-term melting of the ice sheet, leading to a complete melting of the Greenland ice sheet over several millennia, resulting in a global sea level rise of about seven metres.

Subsidence and effective sea level rise

Many ports, urban conglomerations, and agricultural regions are built on river deltas, where subsidence of land contributes to a substantially increased effective sea level rise. This is caused both by unsustainable extraction of groundwater (in some places also by extraction of oil and gas), and by levees and other flood management practices that prevent accumulation of sediments from compensating for the natural settling of deltaic soils. In many deltas this results in subsidence ranging from several millimeters per year up to possibly 25 centimeters per year in parts of the Ciliwung delta (Jakarta). Total anthropogenic-caused subsidence in the Rhine-Meuse-Scheldt delta (Netherlands) is estimated at 3 to 4 meters, over 3 meters in urban areas of the Mississippi River Delta (New Orleans), and over nine meters in the Sacramento-San Joaquin River Delta. The IPCC TAR WGII report (Impacts, Adaptation and Vulnerability) notes that current and future climate change would be expected to have a number of impacts, particularly on coastal systems. Such impacts may include increased coastal erosion, higher storm-surge flooding, inhibition of primary production processes, more extensive coastal inundation, changes in surface water quality and groundwater characteristics, increased loss of property and coastal habitats, increased flood risk and potential loss of life, loss of non-monetary cultural resources and values, impacts on agriculture and aquaculture through decline in soil and water quality, and loss of tourism, recreation, and transportation functions. There is an implication that many of these impacts will be detrimental, especially for the three-quarters of the world's poor who depend on agriculture systems.
The report does, however, note that owing to the great diversity of coastal environments; regional and local differences in projected relative sea level and climate changes; and differences in the resilience and adaptive capacity of ecosystems, sectors, and countries, the impacts will be highly variable in time and space. The IPCC report of 2007 estimated that accelerated melting of the Himalayan ice caps and the resulting rise in sea levels would likely increase the severity of flooding in the short term during the rainy season and greatly magnify the impact of tidal storm surges during the cyclone season. A sea-level rise of just 400 mm in the Bay of Bengal would put 11 percent of Bangladesh's coastal land underwater, creating 7–10 million climate refugees. Sea level rise could also displace many shore-based populations: for example, it is estimated that a sea level rise of just 200 mm could make 740,000 people in Nigeria homeless. Future sea-level rise, like the recent rise, is not expected to be globally uniform. Some regions show a sea-level rise substantially more than the global average (in many cases more than twice the average), and others a sea level fall. However, models disagree as to the likely pattern of sea level change. IPCC assessments suggest that deltas and small island states are particularly vulnerable to sea-level rise caused by both thermal expansion and added ocean water. Sea level changes have not yet been conclusively proven to have directly resulted in environmental, humanitarian, or economic losses to small island states, but the IPCC and other bodies have found this a serious risk scenario in coming decades. The Maldives, Tuvalu, and other low-lying countries are among the areas that are at the highest level of risk. The UN's environmental panel has warned that, at current rates, sea level would be high enough to make the Maldives uninhabitable by 2100. Many media reports have focused on the island nations of the Pacific, notably the Polynesian islands of Tuvalu, which, based on more severe flooding events in recent years, were thought to be "sinking" due to sea level rise. A scientific review in 2000 reported that, based on University of Hawaii gauge data, Tuvalu had experienced a negligible increase in sea level of 0.07 mm a year over the past two decades, and that the El Niño Southern Oscillation (ENSO) had been a larger factor in Tuvalu's higher tides in recent years. A subsequent study by John Hunter from the University of Tasmania, however, adjusted for ENSO effects and for the movement of the gauge (which was thought to be sinking). Hunter concluded that Tuvalu had been experiencing sea-level rise of about 1.2 mm per year. The more frequent flooding in Tuvalu in recent years may also be due to an erosional loss of land during and following the 1997 cyclones Gavin, Hina, and Keli. A study conducted on the Jaluit Atoll, Marshall Islands, demonstrated that significant geomorphologic events such as storms (e.g. Typhoon Ophelia in 1958) tend to have larger impacts on reef islands than the smaller-scale effects of sea level rise. These effects include the immediate erosion and subsequent regrowth process, which may vary in length from decades to centuries, even resulting in land areas larger than pre-storm values. With an expected rise in the frequency and intensity of storms, such events may become more significant in determining island shape and size than sea level rise.
Besides the issues that flooding brings, such as soil salinisation, the island states themselves could also be dissolved over time, as the islands become uninhabitable or completely submerged by the sea. Once this happens, all rights over the surrounding area (sea) are removed. This area can be huge, as rights extend to a radius of 224 nautical miles (414 km) around the entire island state. Any resources, such as fossil oil, minerals and metals, within this area could then be freely dug up by anyone and sold without needing to pay any commission to the (now dissolved) island state. A study in the April 2007 issue of Environment and Urbanization reports that 634 million people live in coastal areas within 30 feet (9.1 m) of sea level. The study also reported that about two thirds of the world's cities with over five million people are located in these low-lying coastal areas. Future sea level rise could lead to potentially catastrophic difficulties for shore-based communities in the next centuries: for example, many major cities such as Venice, London, New Orleans, and New York City already need storm-surge defenses, and will need more if the sea level rises; they also face issues such as subsidence. However, modest increases in sea level are likely to be offset when cities adapt by constructing sea walls or through relocating. Re-insurance company Swiss Re estimates an economic loss for southeast Florida in 2030 of $33 billion from climate-related damages. Miami has been listed as "the number-one most vulnerable city worldwide" in terms of potential damage to property from storm-related flooding and sea-level rise. Coastal and polar habitats are facing drastic changes as a consequence of rising sea levels. Loss of ice in the Arctic may force local species to migrate in search of a new home. If seawater continues to encroach inland, problems related to contaminated soils and flooded wetlands may occur. Also, fish, birds, and coastal plants could lose parts of their habitat. In 2016 it was reported that the Bramble Cay melomys, which lived on a Great Barrier Reef island, had probably become extinct because of sea level rises.

Extreme sea level rise events

A downturn of the Atlantic meridional overturning circulation (AMOC) has been tied to extreme regional sea level rise (a 1-in-850-year event). Between 2009 and 2010, coastal sea levels north of New York City increased by 128 mm within two years. This jump is unprecedented in the tide gauge records, which have collected data for several centuries.

Sea level measurement

Since the 1992 launch of TOPEX/Poseidon, altimetric satellites have been recording the change in sea level. Current rates of sea level rise from satellite altimetry have been estimated in the range of 2.9–3.4 ± 0.4–0.6 mm per year for 1993–2010. This exceeds the rates from tide gauges. It is unclear whether this represents an accelerated increase over the last decades, variability due to the sparse sampling of the tide gauges, true differences between satellites and tide gauges, or problems with satellite calibration. In 2015, a small calibration error of the first altimetric satellite, TOPEX/Poseidon, was identified. It had caused a slight overestimation of the 1992–2005 sea levels, which masked the ongoing acceleration of sea level rise. The longest-running sea-level measurements, NAP or Amsterdam Ordnance Datum, established in 1675, are recorded in Amsterdam, the Netherlands.
About 25 percent of the Netherlands lies below sea level, and more than 50 percent of the country would be subject to temporary flooding if it did not have an extensive levee system (see Flood control in the Netherlands). In Australia, data collected by the Commonwealth Scientific and Industrial Research Organisation (CSIRO) show the current global mean sea level trend to be 3.2 mm/yr, roughly double the long-term average implied by the total rise of about 210 mm measured from 1880 to 2009 (210 mm over that 129-year period works out to about 1.6 mm/yr). Australian record collection has a long time horizon, including measurements by an amateur meteorologist beginning in 1837 and measurements taken from a sea-level benchmark struck on a small cliff on the Isle of the Dead near the Port Arthur convict settlement on 1 July 1841. These records, when compared with data from modern tide gauges, reinforce the estimate of a historic sea level rise of about 1.6 mm/year, with a sharp acceleration in recent decades. As of 2003, the National Tidal Centre of the Bureau of Meteorology managed 32 tide gauges covering the entire Australian coastline, with some measurements available from 1880. Tide gauges in the United States reveal considerable variation because some land areas are rising and some are sinking. For example, over the past 100 years, the rate of sea level rise varied from an increase of about 0.36 inches (9.1 mm) per year along the Louisiana coast (due to land sinking) to a drop of a few inches per decade in parts of Alaska (due to post-glacial rebound). The rate of sea level rise increased during the 1993–2003 period compared with the longer-term average (1961–2003), although it is unclear whether the faster rate reflected a short-term variation or an increase in the long-term trend. One study showed no acceleration in sea level rise in US tide gauge records during the 20th century. However, another study found that the rate of rise for the US Atlantic coast during the 20th century was far higher than during the previous two thousand years. In 2008, the Dutch Delta Commission (Deltacommissie) advised in a report that the Netherlands would need a massive new building program to strengthen the country's water defenses against the anticipated effects of global warming over the next 190 years. The plans included drawing up worst-case evacuation scenarios, as well as more than €100 billion (US$144 billion) in new spending through the year 2100 on measures such as broadening coastal dunes and strengthening sea and river dikes. The commission said the country must plan for a rise in the North Sea of up to 1.3 metres (4 ft 3 in) by 2100, rather than the previously projected 0.80 metres (2 ft 7 in), and for a 2–4 metre (6.5–13 ft) rise by 2200. The New York City Panel on Climate Change (NPCC) is an effort to prepare the New York City area for climate change.

References

- "Climate Change Indicators in the United States: Sea level". United States Environmental Protection Agency. May 2014.
- "Why the U.S. East Coast could be a major 'hotspot' for rising seas". The Washington Post. 2016.
- Shennan, I., 2013. Sea Level Studies: Overview. In: Elias SA, Mock J (eds) Encyclopedia of Quaternary Science (Second Edition). Elsevier, Amsterdam, Netherlands, pp. 369–376. ISBN 978-0-444-53643-3.
- Noerdlinger, P.D. and Brower, K.R., 2007. The melting of floating ice raises the ocean level. Geophysical Journal International, 170(1), pp. 145–150.
- Fischlin; et al., "Section 4.4.9: Oceans and shallow seas – Impacts", in IPCC AR4 WG2 2007, Chapter 4: Ecosystems, their Properties, Goods and Services, p. 234.
- Anders Levermann, Peter U. Clark, Ben Marzeion, Glenn A. Milne, David Pollard, Valentina Radic, and Alexander Robinson (13 June 2013). "The multimillennial sea-level commitment of global warming". PNAS. 110: 13745–13750. Bibcode:2013PNAS..11013745L. doi:10.1073/pnas.1219414110. PMID 23858443.
- Climate Change 2014 Synthesis Report. Fifth Assessment Report, AR5 (Report). Intergovernmental Panel on Climate Change. 2014.
- Global and Regional Sea Level Rise Scenarios for the United States (PDF) (Report) (NOAA Technical Report NOS CO-OPS 083 ed.). National Oceanic and Atmospheric Administration. January 2017. Retrieved 25 January 2017.
- Box SYN-1: Sustained warming could lead to severe impacts, p. 5, in: Synopsis, in National Research Council 2011.
- Bindoff, N.L., J. Willebrand, V. Artale, A. Cazenave, J. Gregory, S. Gulev, K. Hanawa, C. Le Quéré, S. Levitus, Y. Nojiri, C.K. Shum, L.D. Talley and A. Unnikrishnan (2007), "Section 5.5.1: Introductory Remarks", in IPCC AR4 WG1 2007, Chapter 5: Observations: Ocean Climate Change and Sea Level, ISBN 978-0-521-88009-1, retrieved 25 January 2017.
- IPCC, FAQ 5.1: Is Sea Level Rising?, in IPCC AR4 WG1 2007.
- Albritton et al., Technical Summary, Box 2: What causes sea level to change?, in IPCC TAR WG1 2001.
- "Contributions to Global Sea-Level Rise". NAP. 2012. In the most recent estimate, for 1993–2008, the contribution from land ice increased to 68 percent, the contribution from thermal expansion decreased to 35 percent.
- IPCC, Summary for Policymakers, Section C. Current knowledge about future impacts – Magnitudes of impact, in IPCC AR4 WG2 2007.
- Gornitz, Vivien (January 2007). "Sea Level Rise, After the Ice Melted and Today". Goddard Institute for Space Studies. Retrieved 10 September 2015.
- Cronin, T. M. (2012). Invited review: Rapid sea-level rise. Quaternary Science Reviews. 56: 11–30.
- Blanchon, P. (2011a). Meltwater Pulses. In: Hopley, D. (ed.), Encyclopedia of Modern Coral Reefs: Structure, form and process. Springer-Verlag Earth Science Series, pp. 683–690. ISBN 978-90-481-2638-5.
- Blanchon, P. (2011b). Backstepping. In: Hopley, D. (ed.), Encyclopedia of Modern Coral Reefs: Structure, form and process. Springer-Verlag Earth Science Series, pp. 77–84. ISBN 978-90-481-2638-5.
- Blanchon, P., and Shaw, J. (1995). Reef drowning during the last deglaciation: evidence for catastrophic sea-level rise and icesheet collapse. Geology, 23: 4–8.
- Gillis, Justin (22 February 2016). "Seas Are Rising at Fastest Rate in Last 28 Centuries". New York Times. Retrieved 29 February 2016.
- Jevrejeva, Svetlana; J. C. Moore; A. Grinsted; P. L. Woodworth (April 2008). "Recent global sea level acceleration started over 200 years ago?". Geophysical Research Letters. 35 (8). Bibcode:2008GeoRL..35.8715J. doi:10.1029/2008GL033611.
- Bindoff et al., Chapter 5: Observations: Oceanic Climate Change and Sea Level, Executive summary, in IPCC AR4 WG1 2007.
- Baart, F.; van Gelder, P.H.A.J.M.; de Ronde, J.; van Koningsveld, M. & Wouters, B. (September 20, 2011). "The effect of the 18.6-year lunar nodal cycle on regional sea-level rise estimates".
"The effect of the 18.6-year lunar nodal cycle on regional sea-level rise estimates". - Anisimov et al., Chapter 11: Changes in Sea Level, Table 11.9, in IPCC TAR WG1 2001. - Climate Change: Vital Signs of the Planet: Sea Level SATELLITE DATA: 1993-PRESENT on nasa.gov - Global Mean Sea Level Data This file contains Global Mean Sea Level (GMSL) variations computed at the NASA Goddard Space Flight Center (averaged column 12 bi-monthly, normalized to 1993.0 epoch) - This article incorporates public domain material from the NOAA document: NOAA GFDL, Geophysical Fluid Dynamics Laboratory – Climate Impact of Quadrupling CO2, Princeton, NJ, USA: NOAA GFDL - Hansen, J.; et al. (1981). "Climate impact of increasing atmospheric carbon dioxide". Science. 231: 957–966. Bibcode:1981Sci...213..957H. doi:10.1126/science.213.4511.957. - Karl, TR; et al., eds. (2009). Global Climate Change Impacts in the United States. 32 Avenue of the Americas, New York, NY 10013-2473, USA: Cambridge University Press. pp. 22–24. ISBN 978-0-521-14407-0. Retrieved 2011-04-28. - IPCC AR4, Glossary P-Z: "Projection", in IPCC AR4 WG1 2007. - Morita et al., Chap. 2: Greenhouse Gas Emission Mitigation Scenarios and Implications, Section 2.2.1: Introduction to Scenarios, in IPCC TAR WG3 2001. - IPCC, Topic 3, Section 3.2.1: 21st century global changes, p. 45, in IPCC AR4 SYR 2007. - J E Hansen (2007). "Scientific reticence and sea level rise". Environmental Research Letters. IOPScience. 2: 024002. arXiv: . Bibcode:2007ERL.....2b4002H. doi:10.1088/1748-9326/2/2/024002. - Stroeve, Julienne (1 May 2007). "Arctic sea ice decline: Faster than forecast". Geophysical Research Letters. 34: 1–5. Bibcode:2007GeoRL..3409501S. doi:10.1029/2007gl029703. - Allison; et al. (2009). "The Copenhagen Diagnosis, 2009: Updating the World on the Latest Climate Science". - America's Climate Choices: Panel on Advancing the Science of Climate Change, Board on Atmospheric Sciences and Climate, Division on Earth and Life Studies, NATIONAL RESEARCH COUNCIL OF THE NATIONAL ACADEMIES (2010). "7 Sea Level Rise and the Coastal Environment". Advancing the Science of Climate Change. Washington, D.C.: The National Academies Press. pp. 243–250. ISBN 978-0-309-14588-6. Retrieved 2011-06-17. (From pg 250) Even if sea-level rise were to remain in the conservative range projected by the IPCC (0.6–1.9 feet [0.18–0.59 m])—not considering potentially much larger increases due to rapid decay of the Greenland or West Antarctic ice sheets—tens of millions of people worldwide would become vulnerable to flooding due to sea-level rise over the next 50 years (Nicholls, 2004; Nicholls and Tol, 2006). This is especially true in densely populated, low-lying areas with limited ability to erect or establish protective measures. In the United States, the high end of the conservative IPCC estimate would result in the loss of a large portion of the nation's remaining coastal wetlands. The impact on the east and Gulf coasts of the United States of 3.3 feet (1 m) of sea-level rise, which is well within the range of more recent projections for the 21st century (e.g., Pfeffer et al., 2008; Vermeer and Rahmstorf, 2009), is shown in pink in Figure 7.7. Also shown, in red, is the effect of 19.8 feet (6 m) of sea-level rise, which could occur over the next several centuries if warming were to continue unabated. - Rignot E.; I. Velicogna; M. R. van den Broeke; A. Monaghan; J. Lenaerts (2011). "Acceleration of the contribution of the Greenland and Antarctic ice sheets to sea level rise". 
- Romm, Joe (10 Mar 2011). "JPL bombshell: Polar ice sheet mass loss is speeding up, on pace for 1 foot sea level rise by 2050". Climate Progress. Center for American Progress Action Fund. Retrieved 16 April 2012.
- Church, John; Clark, Peter. "Chapter 13: Sea Level Change – Final Draft Underlying Scientific-Technical Assessment" (PDF). climatechange2013.org. IPCC Working Group I. Retrieved January 21, 2015.
- Projections of Future Sea Level Rise, pp. 243–44, in: Ch. 7. Sea Level Rise and the Coastal Environment, in National Research Council 2010.
- 4. Global Mean Sea Level Rise Scenarios, in: Main Report, in Parris & others 2012, p. 12.
- "Sea Level Rise Key Message". Third National Climate Assessment. Retrieved 25 June 2014.
- J. Hansen; M. Sato; P. Hearty; R. Ruedy; M. Kelley; V. Masson-Delmotte; G. Russell; G. Tselioudis; J. Cao; E. Rignot; I. Velicogna; E. Kandiano; K. von Schuckmann; P. Kharecha; A. N. Legrande; M. Bauer; K.-W. Lo (2015). "Ice melt, sea level rise and superstorms: evidence from paleoclimate data, climate modeling, and modern observations that 2 °C global warming is highly dangerous" (PDF). Atmospheric Chemistry and Physics (ACP). 15: 20059–20179. Bibcode:2015ACPD...1520059H. doi:10.5194/acpd-15-20059-2015.
- "James Hansen's controversial sea level rise paper has now been published online". Washington Post. 2015.
- Chris Mooney (October 26, 2017). "New science suggests the ocean could rise more — and faster — than we thought". The Chicago Tribune.
- Alexander Nauels; Joeri Rogelj; Carl-Friedrich Schleussner; Malte Meinshausen; Matthias Mengel (26 October 2017). "Linking sea level rise and socioeconomic indicators under the Shared Socioeconomic Pathways". Environmental Research Letters. 12 (11). Bibcode:2017ERL....12k4002N. doi:10.1088/1748-9326/aa92b6.
- Anders Levermann; Peter U. Clark; Ben Marzeion; Glenn A. Milne; David Pollard; Valentina Radic; Alexander Robinson (13 June 2013). "The multimillennial sea-level commitment of global warming". PNAS. 110: 13745–50. Bibcode:2013PNAS..11013745L. doi:10.1073/pnas.1219414110. PMID 23858443.
- Ricarda Winkelmann; Anders Levermann; Andy Ridgwell; Ken Caldeira (11 September 2015). "Combustion of available fossil fuel resources sufficient to eliminate the Antarctic Ice Sheet". Science Advances. Bibcode:2015SciA....1E0589W. doi:10.1126/sciadv.1500589.
- America's Climate Choices: Panel on Advancing the Science of Climate Change, Board on Atmospheric Sciences and Climate, Division on Earth and Life Studies, National Research Council of the National Academies (2010). "7 Sea Level Rise and the Coastal Environment". Advancing the Science of Climate Change. Washington, D.C.: The National Academies Press. p. 245. ISBN 978-0-309-14588-6. Retrieved 2011-06-17.
- IPCC AR4, Summary for Policymakers, Section C. Current knowledge about future impacts – Magnitudes of impact, in IPCC AR4 WG2 2007.
- IPCC AR4, Summary for Policymakers, Endbox 2. Communication of Uncertainty, in IPCC AR4 WG2 2007.
- "Scientists say Antarctic melting could double sea level rise. Here's what that looks like".
- U.S. Climate Change Science Program: Synthesis and Assessment Report 3.4: Abrupt Climate Change: Summary and Findings (PDF). Reston, VA: US Geological Survey. 2008. p. 2. Retrieved 2010-08-20.
- Skeptical Science: Is Greenland gaining or losing ice?
- "Sea level rise overflowing estimates: feedback mechanisms are speeding up ice melt". Science News. November 8, 2012.
- Velicogna, I. (2009). "Increasing rates of ice mass loss from the Greenland and Antarctic ice sheets revealed by GRACE". Geophysical Research Letters. 36 (19). Bibcode:2009GeoRL..3619503V. doi:10.1029/2009GL040222.
- "NASA Mission Takes Stock of Earth's Melting Land Ice". NASA/JPL-Caltech/University of Colorado. NASA. February 2012. Retrieved 25 April 2013.
- Anisimov et al., Chapter 11: Changes in Sea Level, Models of thermal expansion, Table 1.3, in IPCC TAR WG1 2001.
- Zwally, H.J.; et al. (2002). "Surface Melt-Induced Acceleration of Greenland Ice-Sheet Flow". Science. 297 (5579): 218–222. Bibcode:2002Sci...297..218Z. doi:10.1126/science.1072708. PMID 12052902.
- "Greenland Ice Sheet flows faster during summer melting". Goddard Space Flight Center (press release). 2006-06-02.
- Rignot, E.; Thomas, R.H. (2002). "Mass Balance of Polar Ice Sheets". Science. 297 (5586): 1502–1506. Bibcode:2002Sci...297.1502R. doi:10.1126/science.1073888. PMID 12202817.
- Davis, Curt H.; Yonghong Li; Joseph R. McConnell; Markus M. Frey; Edward Hanna (24 June 2005). "Snowfall-Driven Growth in East Antarctic Ice Sheet Mitigates Recent Sea-Level Rise". Science. 308 (5730): 1898–1901. Bibcode:2005Sci...308.1898D. doi:10.1126/science.1110662. PMID 15905362.
- "How Stuff Works: polar ice caps". howstuffworks.com. Retrieved 2006-02-12.
- Shepherd, Andrew; Ivins, Erik; et al. (IMBIE team) (2012-11-30). "A Reconciled Estimate of Ice-Sheet Mass Balance". Science. 338 (6111): 1183–1189. Bibcode:2012Sci...338.1183S. doi:10.1126/science.1228102.
- Shepherd, Andrew; Ivins, Erik; et al. (IMBIE team) (2018-06-13). "Mass balance of the Antarctic Ice Sheet from 1992 to 2017". Nature. 558: 219–222. doi:10.1038/s41586-018-0179-y. Lay summary – Ars Technica (2018-06-13).
- Shepherd, A.; Wingham, D. (2007). "Recent Sea-Level Contributions of the Antarctic and Greenland Ice Sheets". Science. 315 (5818): 1529–1532. Bibcode:2007Sci...315.1529S. doi:10.1126/science.1136776. PMID 17363663.
- Rignot, E.; Bamber, J. L.; Van Den Broeke, M. R.; Davis, C.; Li, Y.; Van De Berg, W. J.; Van Meijgaard, E. (2008). "Recent Antarctic ice mass loss from radar interferometry and regional climate modelling". Nature Geoscience. 1 (2): 106–110. Bibcode:2008NatGe...1..106R. doi:10.1038/ngeo102.
- Chen, J. L.; Wilson, C. R.; Tapley, B. D.; Blankenship, D.; Young, D. (2008). "Antarctic regional ice loss rates from GRACE". Earth and Planetary Science Letters. 266 (1–2): 140–148. Bibcode:2008E&PSL.266..140C. doi:10.1016/j.epsl.2007.10.057.
- Thomas, R.; et al. (2004). "Accelerated Sea-Level Rise from West Antarctica". Science. 306 (5694): 255–258. Bibcode:2004Sci...306..255T. doi:10.1126/science.1099650. PMID 15388895.
- Bamber, J.L.; Riva, R.E.M.; Vermeersen, B.L.A.; LeBroq, A.M. (2009). "Reassessment of the potential sea-level rise from a collapse of the West Antarctic Ice Sheet". Science. 324 (5929): 901–3. Bibcode:2009Sci...324..901B. doi:10.1126/science.1169335. PMID 19443778.
- Dyurgerov, Mark. 2002. Glacier Mass Balance and Regime: Data of Measurements and Analysis. INSTAAR Occasional Paper No. 55, ed. M. Meier and R. Armstrong. Boulder, CO: Institute of Arctic and Alpine Research, University of Colorado. Distributed by National Snow and Ice Data Center, Boulder, CO.
- Arendt, A.A.; et al. (July 2002). "Rapid Wastage of Alaska Glaciers and Their Contribution to Rising Sea Level". Science. 297 (5580): 382–386. Bibcode:2002Sci...297..382A. doi:10.1126/science.1072497. PMID 12130781.
- Earth Observatory (2009). Melting Anomalies in Greenland in 2007.
- Rignot, E.; et al. (2004). "Rapid ice discharge from southeast Greenland glaciers". Geophysical Research Letters. 31 (10): L10401. Bibcode:2004GeoRL..3110401R. doi:10.1029/2004GL019474.
- Krabill, W.; et al. (21 July 2000). "Greenland Ice Sheet: High-Elevation Balance and Peripheral Thinning". Science. 289 (5478): 428–430. Bibcode:2000Sci...289..428K. doi:10.1126/science.289.5478.428. PMID 10903198.
- Joughin, I.; et al. (December 2004). "Large fluctuations in speed on Greenland's Jakobshavn Isbræ glacier". Nature. 432 (7017): 608–610. Bibcode:2004Natur.432..608J. doi:10.1038/nature03130. PMID 15577906.
- "Report shows movement of glacier has doubled speed". SpaceRef.
- Rignot, E.; Kanagaratnam, P. (2006). "Changes in the Velocity Structure of the Greenland Ice Sheet". Science. 311 (5763): 986–90. Bibcode:2006Sci...311..986R. doi:10.1126/science.1121381. PMID 16484490.
- Connor, Steve (2005-07-25). "Melting Greenland glacier may hasten rise in sea level". The Independent. London. Retrieved 2010-04-30.
- Bucx et al. 2010, p. 88; Tessler et al. 2015, p. 638.
- Bucx et al. 2010, p. 81.
- Bucx et al. 2010, pp. 81, 88, 90.
- IPCC TAR WG1 2001.
- "Climate Shocks: Risk and Vulnerability in an Unequal World". Human Development Report 2007/2008. hdr.undp.org/media/hdr_20072008_summary_english.pdf
- Klaus Paehler. "Nigeria in the Dilemma of Climate Change". Retrieved 2008-11-04.
- ??, in IPCC TAR WG1 2001.
- Fig. 11?, in IPCC TAR WG1 2001.
- The Future Oceans – Warming Up, Rising High, Turning Sour.
- Megan Angelo (1 May 2009). "Honey, I Sunk the Maldives: Environmental changes could wipe out some of the world's most well-known travel destinations".
- Kristina Stefanova (19 April 2009). "Climate refugees in Pacific flee rising sea".
- Levine, Mark (December 2002). "Tuvalu Toodle-oo". Outside Magazine. Retrieved 2005-12-19.
- Patel, Samir S. (April 5, 2006). "A Sinking Feeling". Nature. 440: 734–736. Bibcode:2006Natur.440..734P. doi:10.1038/440734a. PMID 16598226. Retrieved 2007-11-15.
- Hunter, J.A. (August 12, 2002). "A Note on Relative Sea Level Rise at Funafuti, Tuvalu" (PDF). Archived from the original (PDF) on October 7, 2011.
- Field, Michael J. (December 2001). "Sea Levels Are Rising". Pacific Magazine. Archived from the original on 2005-12-18. Retrieved 2005-12-19.
- Ford, Murray R.; Kench, Paul S. (2016). "Spatiotemporal variability of typhoon impacts and relaxation intervals on Jaluit Atoll, Marshall Islands". Geology. 44 (2): 159–162. Bibcode:2016Geo....44..159F. doi:10.1130/g37402.1.
- Klein, Alice. "Five Pacific islands vanish from sight as sea levels rise". New Scientist. Retrieved 2016-05-09.
- Alfred Henry Adriaan Soons (1989). Zeegrenzen en zeespiegelrijzing: volkenrechtelijke beschouwingen over de effecten van het stijgen van de zeespiegel op grenzen in zee: rede, uitgesproken bij de aanvaarding van het ambt van hoogleraar in het volkenrecht aan de Rijksuniversiteit te Utrecht op donderdag 13 april 1989 [Sea borders and rising sea levels: international law considerations about the effects of rising sea levels on borders at sea: speech, delivered on acceptance of the post of professor of international law at the University of Utrecht on Thursday 13 April 1989] (in Dutch). Kluwers. ISBN 978-90-268-1925-4.
- "Policy Implications of Sea Level Rise: The Case of the Maldives". Proceedings of the Small Island States Conference on Sea Level Rise, November 14–18, 1989, Malé, Republic of Maldives. Edited by Hussein Shihab. Retrieved 2007-01-12.
- Jacobson, Rebecca. "Engineers Consider Barriers to Protect New York From Another Sandy". PBS. Retrieved 26 November 2012.
- ??, in IPCC TAR WG1 2001.
- "IPCC's New Estimates for Increased Sea-Level Rise". Yale. 2013.
- "Sea rise threatens Florida coast, but no statewide plan". Yahoo. 10 May 2015.
- "Climate Change and resilience building: a reinsurer's perspective" (PDF). Miamidade.gov. 2014.
- Jeff Goodell (June 20, 2013). "Goodbye, Miami". Rolling Stone. Retrieved June 21, 2013. The Organization for Economic Co-operation and Development lists Miami as the number-one most vulnerable city worldwide in terms of property damage, with more than $416 billion in assets at risk to storm-related flooding and sea-level rise.
- "Sea Level Rise". National Geographic.
- Smith, Lauren (2016-06-15). "Extinct: Bramble Cay melomys". Australian Geographic. Retrieved 2016-06-17.
- Jianjun Yin & Stephen Griffies (March 25, 2015). "Extreme sea level rise event linked to AMOC downturn". CLIVAR.
- Paul B. Goddard, Jianjun Yin, Stephen M. Griffies & Shaoqing Zhang (24 February 2015). "An extreme event of sea-level rise along the Northeast coast of North America in 2009–2010". Nature Communications. 6: 6346. Bibcode:2015NatCo...6E6346G. doi:10.1038/ncomms7346. PMID 25710720.
- "Ocean Surface Topography from Space". NASA/JPL.
- Nerem, R. S.; et al. (2010). "Estimating Mean Sea Level Change from the TOPEX and Jason Altimeter Missions". Marine Geodesy. 33: 435–446. doi:10.1080/01490419.2010.491031.
- CUSLRG (2011-07-19). "2011_rel2: Global Mean Sea Level Time Series (seasonal signals removed)". CU Sea Level Research Group (CUSLRG). Colorado Center for Astrodynamics Research at the University of Colorado at Boulder. Retrieved 2011-02-10.
- CNES/CLS (2011). "AVISO Global Mean Sea Level Estimate". Centre National d'Etudes Spatiales/Collecte Localisation Satellites (CNES/CLS): Archiving, Validation and Interpretation of Satellite Oceanographic data (AVISO). Retrieved 2011-07-29.
- White, N. (2011-07-29). "CSIRO Global Mean Sea Level Estimate". Commonwealth Scientific and Industrial Research Organisation (CSIRO) / Wealth from Oceans National Research Flagship and the Antarctic Climate and Ecosystems Cooperative Research Centre (ACE CRC). Retrieved 2011-07-29.
- LSA (2011-03-16). "Laboratory for Satellite Altimetry / Sea level rise". NOAA: National Environmental Satellite, Data, and Information Service (NESDIS), Satellite Oceanography and Climatology Division, Laboratory for Satellite Altimetry (LSA). Retrieved 2011-07-29.
- IPCC TAR. "Mean sea level change from satellite altimeter observations".
- Michael Le Page (11 May 2015). "Apparent slowing of sea level rise is artefact of satellite data".
"Apparent slowing of sea level rise is artefact of satellite data". - "Other Long Records not in the PSMSL Data Set". PSMSL. Retrieved 11 May 2015. - "Historical Sea Level Changes". CSIRO. Retrieved 25 April 2013. - Neil, White. "Historical Sea Level Changes". CSIRO. Retrieved 25 April 2013. - Hunter, John; R. Coleman; D. Pugh (April 2003). "The Sea Level at Port Arthur, Tasmania, from 1841 to the Present". Geophysical Research Letters. 30 (7). Bibcode:2003GeoRL..30.1401H. doi:10.1029/2002GL016813. - "Landmark study confirms rising Australian sea level" (PDF) (Press release). CSIRO Marine and Atmospheric Research. 2003-01-23. Retrieved 2012-07-19. - National Tidal Centre (2003). "Australian Mean Sea Level Survey" (PDF). Australian Government Bureau of Meteorology. Retrieved 2010-12-18. - "Sea Level Changes". United States Environmental Protection Agency. Retrieved Jan 5, 2012. - Houston, J. R.; Dean, R. G. (2011). "Sea-Level Acceleration Based on U.S. Tide Gauges and Extensions of Previous Global-Gauge Analyses". Journal of Coastal Research. 27: 409–417. doi:10.2112/JCOASTRES-D-10-00157.1. - Kemp, A. C.; Horton, B. P.; Donnelly, J. P.; Mann, M. E.; Vermeer, M.; Rahmstorf, S. (2011). "Climate related sea-level variations over the past two millennia" (PDF). Proceedings of the National Academy of Sciences. 108 (27): 11017–11022. Bibcode:2011PNAS..10811017K. doi:10.1073/pnas.1015619108. PMC . PMID 21690367. - "Dutch draw up drastic measures to defend coast against rising seas" - "$500 million, 5-year plan to help Miami Beach withstand sea-level rise". 6 April 2015. - Ipcc ar4 wg1 (2007), Solomon, S.; Qin, D.; Manning, M.; Chen, Z.; Marquis, M.; Averyt, K.B.; Tignor, M.; Miller, H.L., eds., Climate Change 2007: The Physical Science Basis, Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, ISBN 978-0-521-88009-1 (pb: 978-0-521-70596-7). - Ipcc ar4 wg2 (2007), Parry, M.L.; Canziani, O.F.; Palutikof, J.P.; van der Linden, P.J.; Hanson, C.E., eds., Climate Change 2007: Impacts, Adaptation and Vulnerability, Contribution of Working Group II to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, ISBN 978-0-521-88010-7 (pb: 978-0-521-70597-4). - Ipcc ar4 wg3 (2007), Metz, B.; Davidson, O.R.; Bosch, P.R.; Dave, R.; Meyer, L.A., eds., Climate Change 2007: Mitigation of Climate Change, Contribution of Working Group III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, ISBN 978-0-521-88011-4 (pb: 978-0-521-70598-1). - Ipcc ar4 syr (2007), Core Writing Team; Pachauri, R.K; and Reisinger, A., eds., Climate Change 2007: Synthesis Report, Contribution of Working Groups I, II and III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, IPCC, ISBN 92-9169-122-4 . - Ipcc tar wg1 (2001), Houghton, J.T.; Ding, Y.; Griggs, D.J.; Noguer, M.; van der Linden, P.J.; Dai, X.; Maskell, K.; Johnson, C.A., eds., Climate Change 2001: The Scientific Basis, Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, ISBN 0-521-80767-0, archived from the original on 2016-03-30 (pb: 0-521-01495-6). - Ipcc tar wg2 (2001), McCarthy, J. J.; Canziani, O. F.; Leary, N. A.; Dokken, D. J.; White, K. 
- IPCC TAR WG3 (2001), Metz, B.; Davidson, O.; Swart, R.; Pan, J., eds., Climate Change 2001: Mitigation, Contribution of Working Group III to the Third Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, ISBN 0-521-80769-7, archived from the original on 2012-01-13 (pb: 0-521-01502-2).
- Bucx, T.; Marchand, M.; Makaske, A.; van de Guchte, C. (December 2010), Comparative assessment of the vulnerability and resilience of 10 deltas: synthesis report, Delta Alliance report number 1, Delft-Wageningen, The Netherlands: Delta Alliance International, ISBN 978-94-90070-39-7.
- Hanson, S.; Nicholls, R.; Ranger, N.; Hallegatte, S.; Corfee-Morlot, J.; Herweijer, C.; Chateau, J. (2011), "A global ranking of port cities with high exposure to climate extremes", Climatic Change, 104 (1): 89–111, doi:10.1007/s10584-010-9977-4.
- Tessler, Z. D.; Vörösmarty, C. J.; Grossberg, M.; Gladkova, I.; Aizenman, H.; Syvitski, J. P. M.; Foufoula-Georgiou, E. (7 August 2015), "Profiling risk and sustainability in coastal deltas of the world", Science, 349 (6248): 638–43, Bibcode:2015Sci...349..638T, doi:10.1126/science.aab3574, PMID 26250684.
- "Sea Level Rise Key Message". Third National Climate Assessment. Retrieved 25 June 2014.
- Byravan, S.; Rajan, S. C. (2010). "The ethical implications of sea-level rise due to climate change". Ethics and International Affairs. 24 (3): 239–60. doi:10.1111/j.1747-7093.2010.00266.x.
- Cazenave, A.; Nerem, R. S. (2004). "Present-day sea level change: Observations and causes". Rev. Geophys. 42 (3): RG3001. Bibcode:2004RvGeo..42.3001C. doi:10.1029/2003RG000139.
- Emery, K.O. & D. G. Aubrey (1991). Sea levels, land levels, and tide gauges. New York: Springer-Verlag. ISBN 0-387-97449-0.
- "Sea Level Variations of the United States 1854–1999" (PDF). NOAA Technical Report NOS CO-OPS 36. Archived from the original (PDF) on 17 November 2004. Retrieved 20 February 2005.
- Clark, P. U.; Mitrovica, J. X.; Milne, G. A. & Tamisiea (2002). "Sea-Level Fingerprinting as a Direct Test for the Source of Global Meltwater Pulse 1A". Science. 295 (5564): 2438–2441. Bibcode:2002Sci...295.2438C. doi:10.1126/science.1069017. PMID 11896236.
- Eelco J. Rohling, Robert Marsh, Neil C. Wells, Mark Siddall and Neil R. Edwards (2004). "Similar meltwater contributions to glacial sea level changes from Antarctic and northern ice sheets". Nature. 430 (26 August): 1016–1021. Bibcode:2004Natur.430.1016R. doi:10.1038/nature02859. PMID 15329718.
- Walter Munk (2002). "Twentieth century sea level: An enigma". Proceedings of the National Academy of Sciences. 99 (10): 6550–6555. Bibcode:2002PNAS...99.6550M. doi:10.1073/pnas.092704599. PMID 12011419.
- Menefee, Samuel Pyeatt (1991). "Half Seas Over: The Impact of Sea Level Rise on International Law and Policy". U.C.L.A. Journal of Environmental Law & Policy. 9: 175–218.
- Laury Miller & Bruce C. Douglas (2004). "Mass and volume contributions to twentieth-century global sea level rise". Nature. 428 (6981): 406–409. Bibcode:2004Natur.428..406M. doi:10.1038/nature02309. PMID 15042085.
- Bruce C. Douglas & W. Richard Peltier (2002). "The Puzzle of Global Sea-Level Rise". Physics Today. 55 (3): 35–41. Bibcode:2002PhT....55c..35D. doi:10.1063/1.1472392. Archived from the original on 13 February 2005. Retrieved 24 March 2005.
- B. C. Douglas (1992). "Global sea level acceleration". J. Geophys. Res. 97 (C8): 12699. Bibcode:1992JGR....9712699D. doi:10.1029/92JC01133.
- Warrick, R. A., C. L. Provost, M. F. Meier, J. Oerlemans, and P. L. Woodworth (1996). "Changes in sea level". In Houghton, John Theodore. Climate Change 1995: The Science of Climate Change. Cambridge, UK: Cambridge University Press. pp. 359–405. ISBN 0-521-56436-0.
- R. Kwok; J. C. Comiso (2002). "Southern Ocean Climate and Sea Ice Anomalies Associated with the Southern Oscillation" (PDF). Journal of Climate. 15 (5): 487–501. Bibcode:2002JCli...15..487K. doi:10.1175/1520-0442(2002)015<0487:SOCASI>2.0.CO;2. ISSN 1520-0442.
- Colorado Center for Astrodynamics Research, "Mean Sea Level". Accessed December 19, 2005.
- Fahnestock, Mark (December 4, 2004). "Report shows movement of glacier has doubled speed". University of New Hampshire press release. Accessed December 19, 2005.
- Leuliette, E.W.; R.S. Nerem; G.T. Mitchum (2004). "Calibration of TOPEX/Poseidon and Jason Altimeter Data to Construct a Continuous Record of Mean Sea Level Change". Marine Geodesy. 27 (1–2): 79–94. doi:10.1080/01490410490465193.
- National Snow and Ice Data Center (March 14, 2005). "Is Global Sea Level Rising?". Accessed December 19, 2005.
- INQUA Commission on Sea Level Changes and Coastal Evolution. "IPCC again". Archived from the original (PDF) on April 16, 2005. Retrieved 2004-07-25.
- Connor, Steve (2005-07-25). "Independent Online Edition". The Independent. London. Retrieved 2005-12-19.
- Maumoon Abdul Gayoom. "Address by His Excellency Mr. Maumoon Abdul Gayoom, President of the Republic of Maldives, at the nineteenth special session of the United Nations General Assembly for the purpose of an overall review and appraisal of the implementation of Agenda 21 – June 24, 1997". Archived from the original on June 13, 2006. Retrieved 2006-01-06.
- Pilkey, Orrin and Robert Young, The Rising Sea, Shearwater, July 2009. ISBN 978-1-59726-191-3.
- Douglas, Bruce C. (1995). "Global sea level change: Determination and interpretation". Reviews of Geophysics. 33: 1425–1432. Bibcode:1995RvGeo..33.1425D. doi:10.1029/95RG00355.

The Wikibook Historical Geology has a page on the topic of: Sea level variations

External links

- Third National Climate Assessment, Sea Level Rise Key Message
- "University of Colorado at Boulder Sea Level Change"
- Incorporating Sea Level Change Scenarios at the Local Level – outlines eight steps a community can take to develop site-appropriate scenarios
- Sea Level Rise: Understanding the past – Improving projections for the future
- The Global Sea Level Observing System (GLOSS)
- Sea Level Rise Viewer (NOAA)
- National Geographic film based on the 2007 book Six Degrees: Our Future on a Hotter Planet (on YouTube)
- Discovery Channel video (on YouTube)
- Sea Ice News – National Snow and Ice Data Center (NSIDC)
- Global Sea Level Rise Map